{ "paper_id": "P16-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:58:13.179599Z" }, "title": "Transition-Based Neural Word Segmentation", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heilongjiang University", "location": { "settlement": "Harbin", "country": "China" } }, "email": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore University of Technology", "location": {} }, "email": "yuezhang@sutd.edu.sg" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Heilongjiang University", "location": { "settlement": "Harbin", "country": "China" } }, "email": "ghfu@hotmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Character-based and word-based methods are two main types of statistical models for Chinese word segmentation, the former exploiting sequence labeling models over characters and the latter typically exploiting a transition-based model, with the advantages that word-level features can be easily utilized. Neural models have been exploited for character-based Chinese word segmentation, giving high accuracies by making use of external character embeddings, yet requiring less feature engineering. In this paper, we study a neural model for word-based Chinese word segmentation, by replacing the manuallydesigned discrete features with neural features in a word-based segmentation framework. Experimental results demonstrate that word features lead to comparable performances to the best systems in the literature, and a further combination of discrete and neural features gives top accuracies.", "pdf_parse": { "paper_id": "P16-1040", "_pdf_hash": "", "abstract": [ { "text": "Character-based and word-based methods are two main types of statistical models for Chinese word segmentation, the former exploiting sequence labeling models over characters and the latter typically exploiting a transition-based model, with the advantages that word-level features can be easily utilized. Neural models have been exploited for character-based Chinese word segmentation, giving high accuracies by making use of external character embeddings, yet requiring less feature engineering. In this paper, we study a neural model for word-based Chinese word segmentation, by replacing the manuallydesigned discrete features with neural features in a word-based segmentation framework. Experimental results demonstrate that word features lead to comparable performances to the best systems in the literature, and a further combination of discrete and neural features gives top accuracies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Statistical word segmentation methods can be categorized character-based (Xue, 2003; Tseng et al., 2005) and word-based (Andrew, 2006; Zhang and Clark, 2007) approaches. The former casts word segmentation as a sequence labeling problem, using segmentation tags on characters to mark their relative positions inside words. 
The latter, in contrast, ranks candidate segmented outputs directly, extracting both character and full-word features.", "cite_spans": [ { "start": 73, "end": 84, "text": "(Xue, 2003;", "ref_id": "BIBREF35" }, { "start": 85, "end": 104, "text": "Tseng et al., 2005)", "ref_id": "BIBREF27" }, { "start": 120, "end": 134, "text": "(Andrew, 2006;", "ref_id": "BIBREF1" }, { "start": 135, "end": 157, "text": "Zhang and Clark, 2007)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An influential character-based word segmentation model (Peng et al., 2004; Tseng et al., 2005) uses B/I/E/S labels to mark a character as the beginning, internal (neither beginning nor end), end and only-character (both beginning and end) of a character-based word-based discrete Peng et al. (2004) Andrew (2006 ( ) Tseng et al. (2005 Zhang and Clark (2007) neural Zheng et al. (2013) this work Chen et al. (2015b) Figure 1: Word segmentation methods.", "cite_spans": [ { "start": 55, "end": 74, "text": "(Peng et al., 2004;", "ref_id": "BIBREF21" }, { "start": 75, "end": 94, "text": "Tseng et al., 2005)", "ref_id": "BIBREF27" }, { "start": 280, "end": 298, "text": "Peng et al. (2004)", "ref_id": "BIBREF21" }, { "start": 306, "end": 311, "text": "(2006", "ref_id": "BIBREF1" }, { "start": 312, "end": 334, "text": "( ) Tseng et al. (2005", "ref_id": "BIBREF27" }, { "start": 335, "end": 357, "text": "Zhang and Clark (2007)", "ref_id": "BIBREF38" }, { "start": 365, "end": 384, "text": "Zheng et al. (2013)", "ref_id": "BIBREF47" }, { "start": 395, "end": 414, "text": "Chen et al. (2015b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "word, respectively, employing conditional random field (CRF) to model the correspondence between the input character sequence and output label sequence. For each character, features are extracted from a five-character context window and a twolabel history window. Subsequent work explores different label sets (Zhao et al., 2006) , feature sets (Shi and Wang, 2007) and semi-supervised learning (Sun and Xu, 2011) , reporting state-of-the-art accuracies.", "cite_spans": [ { "start": 310, "end": 329, "text": "(Zhao et al., 2006)", "ref_id": "BIBREF45" }, { "start": 345, "end": 365, "text": "(Shi and Wang, 2007)", "ref_id": "BIBREF22" }, { "start": 395, "end": 413, "text": "(Sun and Xu, 2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, neural network models have been investigated for the character tagging approach. The main idea is to replace manual discrete features with automatic real-valued features, which are derived automatically from distributed character representations using neural networks. 
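As a concrete illustration of the character-tagging scheme described above, the segmentation 中国 外企 业务 发展 迅速 used later in Figure 2 corresponds to the label sequence B E B E B E B E B E. A toy sketch of the mapping (the helper name is ours, for illustration only):

```python
def to_bies(words):
    """Map a segmented sentence to B/I/E/S character labels:
    B = beginning, I = internal, E = end, S = single-character word."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["I"] * (len(w) - 2) + ["E"])
    return tags

# to_bies(["中国", "外企", "业务", "发展", "迅速"])
# -> ['B', 'E', 'B', 'E', 'B', 'E', 'B', 'E', 'B', 'E']
```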
In particular, convolution neural network 1 (Zheng et al., 2013) , tensor neural network (Pei et al., 2014) , recursive neural network (Chen et al., 2015a) and longshort-term-memory (LSTM) (Chen et al., 2015b) have been used to derive neural feature representations from input word sequences, which are fed into a CRF inference layer.", "cite_spans": [ { "start": 323, "end": 343, "text": "(Zheng et al., 2013)", "ref_id": "BIBREF47" }, { "start": 368, "end": 386, "text": "(Pei et al., 2014)", "ref_id": "BIBREF20" }, { "start": 414, "end": 434, "text": "(Chen et al., 2015a)", "ref_id": "BIBREF4" }, { "start": 468, "end": 488, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we investigate the effectiveness of word embedding features for neural network segmentation using transition-based models. Since it is challenging to integrate word features to the CRF inference framework of the existing step action buffer( character-based methods, we take inspiration from word-based discrete segmentation instead. In particular, we follow Zhang and Clark (2007) , using the transition-based framework to decode a sentence from left-to-right incrementally, scoring partially segmented results using both character-level and word-level features. Beam-search is applied to reduce error propagation and large-margin training with early-update (Collins and Roark, 2004) is used for learning from inexact search. We replace the discrete word and character features of Zhang and Clark (2007) with word and character embeddings, respectively, and change their linear model into a deep neural network. Following Zheng et al. (2013) and Chen et al. (2015b) , we use convolution neural networks to achieve local feature combination and LSTM to learn global sentence-level features, respectively. The resulting model is a word-based neural segmenter that can leverage rich embedding features. Its correlation with existing work on Chinese segmentation is shown in Figure 1 .", "cite_spans": [ { "start": 373, "end": 395, "text": "Zhang and Clark (2007)", "ref_id": "BIBREF38" }, { "start": 673, "end": 698, "text": "(Collins and Roark, 2004)", "ref_id": "BIBREF6" }, { "start": 796, "end": 818, "text": "Zhang and Clark (2007)", "ref_id": "BIBREF38" }, { "start": 937, "end": 956, "text": "Zheng et al. (2013)", "ref_id": "BIBREF47" }, { "start": 961, "end": 980, "text": "Chen et al. 
(2015b)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 1286, "end": 1294, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 \u2022 \u2022 w\u22121w0) queue(c0c1 \u2022 \u2022 \u2022 ) 0 - \u03c6 \u4e2d \u56fd \u2022 \u2022 \u2022 1 SEP \u4e2d \u56fd \u5916 \u2022 \u2022 \u2022 2 APP \u4e2d\u56fd \u5916 \u4f01 \u2022 \u2022 \u2022 3 SEP \u4e2d\u56fd \u5916 \u4f01 \u4e1a \u2022 \u2022 \u2022 4 APP \u4e2d\u56fd \u5916\u4f01 \u4e1a \u52a1 \u2022 \u2022 \u2022 5 SEP \u4e2d\u56fd \u5916\u4f01 \u4e1a \u52a1 \u53d1 \u2022 \u2022 \u2022 6 APP \u4e2d\u56fd \u5916\u4f01 \u4e1a\u52a1 \u53d1 \u5c55 \u2022 \u2022 \u2022 7 SEP \u2022 \u2022 \u2022 \u4e1a\u52a1 \u53d1 \u5c55 \u8fc5 \u901f 8 APP \u2022 \u2022 \u2022 \u4e1a\u52a1 \u53d1\u5c55 \u8fc5 \u901f 9 SEP \u2022 \u2022 \u2022 \u53d1\u5c55 \u8fc5 \u901f 10 APP \u2022 \u2022 \u2022 \u53d1\u5c55 \u8fc5\u901f \u03c6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Results on standard benchmark datasets show the effectiveness of word embedding features for neural segmentation. Our method achieves stateof-the-art results without any preprocess based on external knowledge such as Chinese idioms of Chen et al. (2015a) and Chen et al. (2015b) . We release our code under GPL for research reference. 2", "cite_spans": [ { "start": 235, "end": 254, "text": "Chen et al. (2015a)", "ref_id": "BIBREF4" }, { "start": 259, "end": 278, "text": "Chen et al. (2015b)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We exploit the word-based segmentor of Zhang and Clark (2011) as the baseline system. It incrementally segments a sentence using a transition system, with a state holding a partially-segmented sentence in a buffer s and ordering the next incoming characters in a queue q. Given an input Chinese sentence, the buffer is initially empty and the queue contains all characters of the sentence, a sequence of transition actions are used to consume characters in the queue and build the output sentence in the buffer. The actions include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "\u2022 Append (APP), which removes the first character from the queue, and appends it to the last word in the buffer;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "\u2022 Separate (SEP), which moves the first character of the queue onto the buffer as a new (sub) word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "Given the input sequence of characters \"\u4e2d\u56fd \u5916 \u4f01 \u4e1a \u52a1 \u53d1 \u5c55 \u8fc5 \u901f\" (The business of foreign company in China develops quickly), the correct output can be derived using action sequence \"SEP APP SEP APP SEP APP SEP APP SEP APP\", as shown in Figure 2 . Search. Based on the transition system, the decoder searches for an optimal action sequence for a given sentence. Denote an action sequence as A = a 1 \u2022 \u2022 \u2022 a n . 
We define the score of A as the total score of all actions in the sequence, which is computed by:", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 240, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "score(A) = a\u2208A score(a) = a\u2208A w \u2022 f (s, q, a),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "where w is the model parameters, f is a feature extraction function, s and q are the buffer and queue of a certain state before the action a is applied. The feature templates are shown in Table 1 , which are the same as Zhang and Clark (2011) . These base features include three main source of information. First, characters in the front of the queue and the end of the buffer are used for scoring both separate and append actions (e.g. c 0 ). Second, words that are identified are used to guide separate actions (e.g. w 0 ). Third, relevant information of identified words, such as their lengths and first/last characters are utilized for additional features (e.g. len(w \u22121 )).", "cite_spans": [ { "start": 220, "end": 242, "text": "Zhang and Clark (2011)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "We follow Zhang and Clark (2011) in using beam-search for decoding, shown in Algorith 1, where \u0398 is the set of model parameters. Initially the beam contains only the initial state. At each step, each state in the beam is extended by applying both SEP and APP, resulting in a set of new states, which are scored and ranked. The top B are", "cite_spans": [ { "start": 10, "end": 32, "text": "Zhang and Clark (2011)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "Feature templates Action c\u22121c0 APP, SEP w\u22121, w\u22121w\u22122, w\u22121c0, w\u22122len(w\u22121) SEP start(w\u22121)c0, end(w\u22121)c0 start(w\u22121)end(w\u22121), end(w\u22122)end(w\u22121) w\u22122len(w\u22121), len(w\u22122)w\u22121 w\u22121, where len(w\u22121) = 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "Table 1: Feature templates for the baseline model, where w i denotes the word in the buffer, c i denotes the character in the queue, as shown in Figure 2, start(.), end(.) and len(.) denote the first, last character and length of a word, respectively.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 151, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "Algorithm 1 Beam-search decoding. function DECODE(c 1 \u2022 \u2022 \u2022 c n , \u0398) agenda \u2190 { (\u03c6, c 1 \u2022 \u2022 \u2022 c n , score=0.0) } for i in 1 \u2022 \u2022 \u2022 n beam \u2190 { } for cand in agenda new \u2190 SEP(cand, c i , \u0398) ADDITEM(beam, new) new \u2190 APP(cand, c i , \u0398) ADDITEM(beam, new) agenda \u2190 TOP-B(beam, B) best \u2190 BESTITEM(agenda) w 1 \u2022 \u2022 \u2022 w m \u2190 EXTRACTWORDS(best)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "used as the beam states for the next step. The same process replaces until all input character are processed, and the highest-scored state in the beam is taken for output. 
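A minimal Python sketch of Algorithm 1, under the simplifying assumption that a state is just a (word list, score) pair and that the model supplies a score_action(words, action, char) function (the linear score w · f(s, q, a) here, or the neural scorer of Section 3):

```python
def decode(chars, score_action, beam_size=16):
    """Beam-search decoding over SEP/APP actions (sketch of Algorithm 1)."""
    agenda = [([], 0.0)]                       # initial state: empty buffer
    for c in chars:
        beam = []
        for words, score in agenda:
            # SEP: start a new word with the incoming character
            beam.append((words + [c], score + score_action(words, "SEP", c)))
            # APP: append the character to the last (partial) word;
            # only valid once the buffer is non-empty
            if words:
                beam.append((words[:-1] + [words[-1] + c],
                             score + score_action(words, "APP", c)))
        # keep the top-B states for the next step
        agenda = sorted(beam, key=lambda s: s[1], reverse=True)[:beam_size]
    return max(agenda, key=lambda s: s[1])[0]  # words of the best final state
```

With chars = list("中国外企业务发展迅速") and a trained scorer, the best final state would recover the segmentation of Figure 2.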
Online leaning with max-margin is used, which is given in section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Discrete Model", "sec_num": "2" }, { "text": "We use a neural network model to replace the discrete linear model for scoring transition action sequences. For better comparison between discrete and neural features, the overall segmentation framework of the baseline is used, which includes the incremental segmentation process, the beamsearch decoder and the training process integrated with beam-search (Zhang and Clark, 2011) . In addition, the neural network scorer takes the similar feature sources as the baseline, which includes character information over the input, word information of the partially constructed output, and the history sequence of the actions that have been applied so far.", "cite_spans": [ { "start": 357, "end": 380, "text": "(Zhang and Clark, 2011)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "The overall architecture of the neural scorer is shown in Figure 3 . Given a certain state score(SEP) score(APP)", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "\u2022 \u2022 \u2022 h sep \u2022 \u2022 \u2022 h app \u2022 \u2022 \u2022 r c \u2022 \u2022 \u2022 r w \u2022 \u2022 \u2022 r a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "word sequence character sequence action sequence configuration (s, q), we use three separate recurrent neural networks (RNN) to model the word sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "RNN RNN RNN \u2022 \u2022 \u2022 w \u22121 w 0 \u2022 \u2022 \u2022 c \u22121 c 0 c 1 \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 a \u22121 a 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "\u2022 \u2022 \u2022 w \u22121 w 0 , the character se- quence \u2022 \u2022 \u2022 c \u22121 c 0 c 1 \u2022 \u2022 \u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": ", and the action sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "\u2022 \u2022 \u2022 a \u22121 a 0 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "respectively, resulting in three dense real-valued vectors {r w , r c and r a }, respectively. All the three feature vectors are used scoring the SEP action. For APP, on the other hand, we use only the character and action features r c and r a because the last word w 0 in the buffer is a partial word. 
Formally, given r w , r c , r a , the action scores are computed by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "score(SEP) = w sep h sep score(APP) = w app h app where h sep = tanh(W sep [r w , r c , r a ] + b sep ) h app = tanh(W app [r c , r a ] + b app ) W sep , W app , b sep , b app , w sep , w app are model pa- rameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "The neural networks take the embedding forms of words, characters and actions as input, for extracting r w , r c and r a , respectively. We exploit the LSTM-RNN structure (Hochreiter and Schmidhuber, 1997) , which can better capture non-local syntactic and semantic information from a sequential input, yet reducing gradient explosion or diminishing during training.", "cite_spans": [ { "start": 171, "end": 205, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "In general, given a sequence of input vectors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "x 0 \u2022 \u2022 \u2022 x n , the LSTM-RNN computes a sequence of hidden vectors h 0 \u2022 \u2022 \u2022 h n ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "respectively, with each h i being determined by the input x i and the previous hidden vector h i\u22121 . A cell structure ce is used to carry long-term memory information over the history h 0 \u2022 \u2022 \u2022 h i for calculating h i , and information flow is controlled by an input gate ig, an output gate og and a forget gate fg. Formally, the calculation of h i using h i\u22121 and x i is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "... w i ... w i\u22121 ...... ... x w i ...... (a) word representation ... a i ... a i\u22121 ...... ... x a i ...... (b) action representation ... \u2295 ... c i , c i\u22121 c i ... \u2295 ... c i\u22121 , c i\u22122 c i\u22121 ... \u2295 ... c i+1 , c i+1 c i ...... ...... ... x c i ...... ......", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "ig i = \u03c3(W ig x i + U ig h i\u22121 + V ig ce i\u22121 + b ig ) fg i = \u03c3(W f g x i + U f g h i\u22121 + V f g ce i\u22121 + b f g ) ce i = fg i ce i\u22121 + ig i tanh(W ce x i + U ce h i\u22121 + b ce ) og i = \u03c3(W og x i + U og h i\u22121 + V og ce i + b og ) h i = og i tanh(ce i ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "where U, V, W, b are model parameters, and denotes Hadamard product. When used to calculate r w , r c and r a , the general LSTM structure above is given different input sequences x 0 \u2022 \u2022 \u2022 x n , according to the word, character and action sequences, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition-Based Neural Model", "sec_num": "3" }, { "text": "Words. Given a word w, we use a looking-up matrix E w to obtain its embedding e w (w). The matrix can be obtained through pre-training on large size of auto segmented corpus. 
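A minimal numpy sketch of how these pieces fit together: one step of the LSTM recurrence above, and the score(SEP)/score(APP) computation defined at the beginning of this section. The parameter names, shapes and the dictionary container are illustrative placeholders, not the configuration used in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_i, h_prev, ce_prev, p):
    """One LSTM step with input, forget and output gates and a memory cell."""
    ig = sigmoid(p["W_ig"] @ x_i + p["U_ig"] @ h_prev + p["V_ig"] @ ce_prev + p["b_ig"])
    fg = sigmoid(p["W_fg"] @ x_i + p["U_fg"] @ h_prev + p["V_fg"] @ ce_prev + p["b_fg"])
    ce = fg * ce_prev + ig * np.tanh(p["W_ce"] @ x_i + p["U_ce"] @ h_prev + p["b_ce"])
    og = sigmoid(p["W_og"] @ x_i + p["U_og"] @ h_prev + p["V_og"] @ ce + p["b_og"])
    return og * np.tanh(ce), ce                # new hidden vector and new cell

def score_actions(r_w, r_c, r_a, p):
    """SEP is scored from [r_w; r_c; r_a]; APP only from [r_c; r_a]."""
    h_sep = np.tanh(p["W_sep"] @ np.concatenate([r_w, r_c, r_a]) + p["b_sep"])
    h_app = np.tanh(p["W_app"] @ np.concatenate([r_c, r_a]) + p["b_app"])
    return p["w_sep"] @ h_sep, p["w_app"] @ h_app
```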
As shown in Figure 4(a) , we use a convolutional neural layer upon a two-word window to obtain \u2022 \u2022 \u2022 x w \u22121 x w 0 for the LSTM for r w , with the following formula:", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 198, "text": "Figure 4(a)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "x w i = tanh W w [e w (w i\u22121 ), e w (w i )] + b w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "Actions. We represent an action a with an embedding e a (a) from a looking-up table E a , and apply the similar convolutional neural network to Given the input action sequence \u2022 \u2022 \u2022 a \u22121 a 0 , the x a i is computed by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "obtain \u2022 \u2022 \u2022 x a \u22121 x a 0 for r a ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "x a i = tanh W a [e a (a i\u22121 ), e a (a i )] + b a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "Characters. We make embeddings for both character unigrams and bigrams by looking-up matrixes E c and E bc , respectively, the latter being shown to be useful by Pei et al. (2014) . For each character c i , the unigram embedding e c (c i ) and the bigram embedding e bc (c i\u22121 c i ) are concatenated, before being given to a CNN with a convolution size of 5. For the character sequence", "cite_spans": [ { "start": 162, "end": 179, "text": "Pei et al. (2014)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "\u2022 \u2022 \u2022 c \u22121 c 0 c 1 \u2022 \u2022 \u2022 of a given state (s, q), we compute its input vectors \u2022 \u2022 \u2022 x c \u22121 x c 0 x c 1 \u2022 \u2022 \u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "for the LSTM for r c by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "x c i = tanh W c [e c (c i\u22122 ) \u2295 e bc (c i\u22123 c i\u22122 ), \u2022 \u2022 \u2022 , e c (c i ) \u2295 e bc (c i\u22121 c i ), \u2022 \u2022 \u2022 , e c (c i+2 ) \u2295 e bc (c i+1 c i+2 )] + b c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "For all the above input representations, the looking-up tables E w , E a , E c , E bc and the weights W w , W a , W c , b w , b a , b c are model parameters. For calculating r w and r a , we apply the LSTMs directly over the sequences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "\u2022 \u2022 \u2022 x w \u22121 x w 0 and \u2022 \u2022 \u2022 x a \u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "x a 0 for words and actions, and use the outputs h w 0 and h a 0 for r w and r a , respectively. For calculating r c , we further use a bi-directional extension of the original LSTM structure. 
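A sketch of the character input x^c_i described above, assuming E_c and E_bc are Python dicts from character unigrams/bigrams to numpy vectors; the boundary padding symbol and the UNK fallback are our own illustrative choices:

```python
import numpy as np

def char_window_input(i, chars, E_c, E_bc, W_c, b_c):
    """Concatenate unigram and bigram embeddings over a five-character
    window centred on c_i, then apply one tanh layer to obtain x^c_i."""
    feats = []
    for j in range(i - 2, i + 3):
        uni = chars[j] if 0 <= j < len(chars) else "</s>"
        bi = chars[j - 1] + chars[j] if 0 < j < len(chars) else "</s>"
        feats.append(E_c.get(uni, E_c["UNK"]))
        feats.append(E_bc.get(bi, E_bc["UNK"]))
    return np.tanh(W_c @ np.concatenate(feats) + b_c)
```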
In particular, the base LSTM is applied to the input character sequence both from left to right and from right to left, leading to two hidden node sequences", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "\u2022 \u2022 \u2022 h cL \u22121 h cL 0 h cL 1 \u2022 \u2022 \u2022 and \u2022 \u2022 \u2022 h cR \u22121 h cR 0 h cR 1 \u2022 \u2022 \u2022 , re- spectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "For the current character c 0 , h cL 0 and h cR 0 are concatenated to form the final vector r c . This is feasible because the character sequence is input and static, and previous work has demonstrated better capability of bi-directional LSTM for modeling sequences (Yao and Zweig, 2015) .", "cite_spans": [ { "start": 266, "end": 287, "text": "(Yao and Zweig, 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Input representation", "sec_num": "3.1" }, { "text": "Our model can be extended by integrating the baseline discrete features into the feature layer. In particular,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "score(SEP) = w sep (h sep \u2295 f sep ) score(APP) = w app (h app \u2295 f app ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "where f sep and f app represent the baseline sparse vector for SEP and APP features, respectively, and \u2295 denotes the vector concatenation operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "Algorithm 2 Max-margin training with earlyupdate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "function TRAIN(c 1 \u2022 \u2022 \u2022 c n , a g 1 \u2022 \u2022 \u2022 a g n , \u0398) agenda \u2190 { (\u03c6, c 1 \u2022 \u2022 \u2022 c n , score=0.0) } for i in 1 \u2022 \u2022 \u2022 n beam \u2190 { } for cand in agenda new \u2190 SEP(cand, c i , \u0398) if {a g i = SEP} new.score += \u03b7 ADDITEM(beam, new) new \u2190 APP(cand, c i , \u0398) if {a g i = APP} new.score += \u03b7 ADDITEM(beam, new) agenda \u2190 TOP-B(beam, B) if {ITEM(a g 1 \u2022 \u2022 \u2022 a g i ) / \u2208 agenda} \u0398 = \u0398 \u2212 f BESTITEM(agenda) \u0398 = \u0398 + f ITEM((a g 1 \u2022 \u2022 \u2022 a g i ) return if {ITEM(a g 1 \u2022 \u2022 \u2022 a g n ) = BESTITEM(agenda)} \u0398 = \u0398 \u2212 f BESTITEM(agenda) \u0398 = \u0398 + f ITEM((a g 1 \u2022 \u2022 \u2022 a g n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "4 Training", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "To train model parameters for both the discrete and neural models, we exploit online learning with early-update as shown in Algorithm 2. 
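The early-update rule of Algorithm 2 can be sketched as below; states are simplified to (action sequence, score) pairs, extract_feats stands in for the model's feature function (the discrete f(·) or the corresponding hidden vectors, returned as a numpy array), and the margin and AdaGrad details of the actual training objective are omitted:

```python
def early_update(agenda, gold_actions, theta, extract_feats):
    """If the gold action prefix has fallen out of the beam, apply the
    update Theta <- Theta - f(best) + f(gold prefix) and stop decoding
    the current training sentence, as in Algorithm 2."""
    if all(actions != gold_actions for actions, _ in agenda):
        best_actions, _ = max(agenda, key=lambda item: item[1])
        theta += extract_feats(gold_actions) - extract_feats(best_actions)
        return True     # abandon the rest of this sentence
    return False
```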
A maxmargin objective is exploited, 3 which is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "L(\u0398) = 1 K K k=1 l(A g k , \u0398) + \u03bb 2 \u0398 2 l(A g k , \u0398) = max A score(A k , \u0398) + \u03b7 \u2022 \u03b4(A k , A g k ) \u2212 score(A g k , \u0398),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "where \u0398 is the set of all parameters, {A g k } K n=1 are gold action sequences to segment the training corpus, A k is the model output action sequence, \u03bb is a regularization parameter and \u03b7 is used to tune the loss margins.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "For the discrete models, f (\u2022) denotes the features extracted according to the feature templates in Table 1 . For the neural models, f (\u2022) denotes the corresponding h sep and h app . Thus only the output layer is updated, and we further use backpropagation to learn the parameters of the other layers (LeCun et al., 2012) . We use online Ada- 3 Zhou et al. (2015) find that max-margin training did not yield reasonable results for neural transition-based parsing, which is different from our findings. One likely reason is that when the number of labels is small max-margin is effective.", "cite_spans": [ { "start": 301, "end": 321, "text": "(LeCun et al., 2012)", "ref_id": "BIBREF14" }, { "start": 343, "end": 344, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "3.2" }, { "text": "PKU Table 3 : Hyper-parameter values in our model.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "CTB60", "sec_num": null }, { "text": "MSR Training", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CTB60", "sec_num": null }, { "text": "Grad (Duchi et al., 2011) to minimize the objective function for both the discrete and neural models. All the matrix and vector parameters are initialized by uniform sampling in (\u22120.01, 0.01).", "cite_spans": [ { "start": 5, "end": 25, "text": "(Duchi et al., 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "CTB60", "sec_num": null }, { "text": "Data. We use three datasets for evaluation, namely CTB6, PKU and MSR. The CTB6 corpus is taken from Chinese Treebank 6.0, and the PKU and MSR corpora can be obtained from Bake-Off 2005 (Emerson, 2005 . We follow Zhang et al. (2014) , splitting the CTB6 corpus into training, development and testing sections. For the PKU and MSR corpora, only the training and test datasets are specified and we randomly split 10% of the training sections for development. Table 1 shows the overall statistics of the three datasets. Embeddings. We use word2vec 4 to pre-train word, character and bi-character embeddings on Chinese Gigaword corpus (LDC2011T13). In order to train full word embeddings, the corpus is segmented automatically by our baseline model. Hyper-parameters. The hyper-parameter values are tuned according to development performances. 
We list their final values in Table 3 .", "cite_spans": [ { "start": 171, "end": 184, "text": "Bake-Off 2005", "ref_id": null }, { "start": 185, "end": 199, "text": "(Emerson, 2005", "ref_id": "BIBREF11" }, { "start": 212, "end": 231, "text": "Zhang et al. (2014)", "ref_id": "BIBREF44" } ], "ref_spans": [ { "start": 456, "end": 463, "text": "Table 1", "ref_id": null }, { "start": 869, "end": 876, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.1" }, { "text": "To better understand the word-based neural models, we perform several development experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Development Results", "sec_num": "5.2" }, { "text": "All the experiments in this section are conducted on the CTB6 development dataset. (c) neural(+tune) Figure 5 : Accuracies against the training epoch using beam sizes 1, 2, 4, 8 and 16, respectively.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Development Results", "sec_num": "5.2" }, { "text": "We study the influence of beam size on the baseline and neural models. Our neural model has two choices of using pre-trained word embeddings. We can either fine-tune or fix the embeddings during training. In case of fine-tuning, only words in the training data can be learned, while embeddings of out-of-vocabulary (OOV) words could not be used effectively. 5 In addition, following Dyer et al. 2015we randomly set words with frequency 1 in the training data as the OOV words in order to learn the OOV embedding, while avoiding overfitting. If the pretrained word embeddings are not fine-tuned, we can utilize all word embeddings. Figure 5 shows the development results, where the training curve of the discrete baseline is shown in Figure 5 (a) and the curve of the neural model without and with fine tuning are shown in 5(b) and 5(c), respectively. The performance increases with a larger beam size in all settings. When the beam increases into 16, the gains levels out. The results of the discrete model and the neural model without fine-tuning are highly similar, showing the usefulness of beam-search.", "cite_spans": [ { "start": 358, "end": 359, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 631, "end": 639, "text": "Figure 5", "ref_id": null }, { "start": 733, "end": 741, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Embeddings and beam size", "sec_num": "5.2.1" }, { "text": "On the other hand, with fine-tuning, the results are different. The model with beam size 1 gives better accuracies compared to the other models with the same beam size. However, as the beam size increases, the performance increases very little. The results are consistent with Dyer et al. (2015) , who find that beam-search improves the results only slightly on dependency parsing. When a beam size of 16 is used, this model performs the 5 We perform experiments using random initialized word embeddings as well when fine-tune is used, which is a fully supervised model. The performance is slightly lower. Figure 6 : Sentence accuracy comparisons for the discrete and the neural models.", "cite_spans": [ { "start": 277, "end": 295, "text": "Dyer et al. 
(2015)", "ref_id": "BIBREF10" }, { "start": 438, "end": 439, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 606, "end": 614, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Embeddings and beam size", "sec_num": "5.2.1" }, { "text": "worst compared with the discrete model and the neural model without fine-tuning. This is likely because the fine-tuning of embeddings leads to overfitting of in-vocabulary words, and underfitting over OOV words. Based on the observation, we exploit fixed word embeddings in our final models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embeddings and beam size", "sec_num": "5.2.1" }, { "text": "We conduct feature ablation experiments to study the effects of the word, character unigram, character bigram and action features to the neural model. The results are shown in Table 4 . Word features are particularly important to the model, without which the performance decreases by 4.5%. The effects of the character unigram, bigram and action features are relatively much weaker. 6 This demonstrates that in the word-based incremental search framework, words are the most crucial information to the neural model.", "cite_spans": [ { "start": 383, "end": 384, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Feature ablation", "sec_num": "5.2.2" }, { "text": "Prior work has shown the effectiveness of integrating discrete and neural features for several NLP tasks (Turian et al., Durrett and Klein, 2015; . We investigate the usefulness of such integration to our word-based segmentor on the development dataset. We study it by two ways. First, we compare the error distributions between the discrete and the neural models. Intuitively, different error distributions are necessary for improvements by integration. We draw a scatter graph to show their differences, with the (x, y) values of each point denoting the F-measure scores of the two models with respect to sentences, respectively. As shown in Figure 6 , the points are rather dispersive, showing the differences of the two models.", "cite_spans": [ { "start": 105, "end": 120, "text": "(Turian et al.,", "ref_id": null }, { "start": 121, "end": 145, "text": "Durrett and Klein, 2015;", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 644, "end": 652, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "5.2.3" }, { "text": "Further, we directly look at the results after integration of both discrete and neural features. As shown in Table 4 , the integrated model improves the accuracies from 95.45% to 96.30%, demonstrating that the automatically-induced neural features contain highly complementary information to the manual discrete features. Table 6 shows the final results on CTB6 test dataset. For thorough comparison, we implement discrete, neural and combined character-based models as well. 7 In particular, the character-based discrete model is a CRF tagging model using character unigrams, bigrams, trigrams and tag transitions (Tseng et al., 2005) , and the character-based neural model exploits a bi-directional LSTM layer to model character sequences 8 and a CRF layer for 201495.2 97.2 Zhang et al. (2013a) 96.1 97.5 Sun et al. (2012) 95.4 97.4 Zhang and Clark (2011) 95.1 97.1 Sun (2010) 95.2 96.9 Sun et al. 
(2009) 95.2 97.3 Table 6 : Main results on PKU and MSR test datasets.", "cite_spans": [ { "start": 476, "end": 477, "text": "7", "ref_id": null }, { "start": 615, "end": 635, "text": "(Tseng et al., 2005)", "ref_id": "BIBREF27" }, { "start": 777, "end": 797, "text": "Zhang et al. (2013a)", "ref_id": "BIBREF42" }, { "start": 808, "end": 825, "text": "Sun et al. (2012)", "ref_id": "BIBREF25" }, { "start": 869, "end": 879, "text": "Sun (2010)", "ref_id": "BIBREF26" }, { "start": 890, "end": 907, "text": "Sun et al. (2009)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 322, "end": 329, "text": "Table 6", "ref_id": null }, { "start": 918, "end": 925, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Integrating discrete features", "sec_num": "5.2.3" }, { "text": "output (Chen et al., 2015b) . 9 The combined model uses the same method for integrating discrete and neural features as our word-based model. The word-based models achieve better performances than character-based models, since our model can exploit additional word information learnt from large auto-segmented corpus. We also compare the results with other models. Wang et al. (2011) is a semi-supervised model that exploits word statistics from auto-segmented raw corpus, which is similar with our combined model in using semi-supervised word information. We achieve slightly better accuracies. Zhang et al. (2014) is a joint segmentation, POS-tagging and dependency parsing model, which can exploit syntactic information.", "cite_spans": [ { "start": 7, "end": 27, "text": "(Chen et al., 2015b)", "ref_id": "BIBREF5" }, { "start": 365, "end": 383, "text": "Wang et al. (2011)", "ref_id": "BIBREF30" }, { "start": 596, "end": 615, "text": "Zhang et al. (2014)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Final Results", "sec_num": "5.3" }, { "text": "To compare our models with other state-of-theart models in the literature, we report the performance on the PKU and MSR datasets also. 10 Our combined model gives the best result on the MSR dataset, and the second best on PKU. The method of Zhang et al. (2013a) gives the best performance on PKU by co-training on large-scale data.", "cite_spans": [ { "start": 241, "end": 261, "text": "Zhang et al. (2013a)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Final Results", "sec_num": "5.3" }, { "text": "To study the differences between word-based and character-based neural models, we conduct error analysis on the test dataset of CTB60. First, we examine the error distribution on individual sentences. Figure 7 shows the F-measure values of each test sentence by word-and characterbased neural models, respectively, where the xaxis value denotes the F-measure value of the word-based neural model, and the y-axis value denotes its performance of the character-based neural model. We can see that the majority scatter points are off the diagonal line, demonstrating strong differences between the two models. This results from the differences in feature sources. Second, we study the F-measure distribution of the two neural models with respect to sentence lengths. We divide the test sentences into ten bins, with bin i denoting sentence lengths in [5 * (i \u2212 1), 5 * i]. Figure 8 shows the results. 
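The F-measure values used in these per-sentence comparisons follow the standard word-level computation over character spans; a minimal sketch:

```python
def word_f1(pred_words, gold_words):
    """Precision/recall over word character-spans, combined into F1."""
    def spans(words):
        out, start = set(), 0
        for w in words:
            out.add((start, start + len(w)))
            start += len(w)
        return out
    p, g = spans(pred_words), spans(gold_words)
    correct = len(p & g)
    if correct == 0:
        return 0.0
    prec, rec = correct / len(p), correct / len(g)
    return 2 * prec * rec / (prec + rec)
```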
According to the figure, we observe that word-based neural model is relatively weaker for sentences with length in [5, 10] , while can better tackle long sentences.", "cite_spans": [ { "start": 1013, "end": 1016, "text": "[5,", "ref_id": null }, { "start": 1017, "end": 1020, "text": "10]", "ref_id": null } ], "ref_spans": [ { "start": 201, "end": 209, "text": "Figure 7", "ref_id": null }, { "start": 870, "end": 878, "text": "Figure 8", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "Third, we compare the two neural models by their capabilities of modeling words with different lengths. Figure 9 shows the results. The perfor- mances are lower for words with lengths beyond 2, and the performance drops significantly for words with lengths over 3. Overall, the word-based neural model achieves comparable performances with the character-based model, but gives significantly better performances for long words, in particular when the word length is over 3. This demonstrates the advantage of word-level features.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 112, "text": "Figure 9", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "6 Related Work Xue (2003) was the first to propose a charactertagging method to Chinese word segmentation, using a maximum entropy model to assign B/I/E/S tags to each character in the input sentence separately. Peng et al. (2004) showed that better results can be achieved by global learning using a CRF model. This method has been followed by most subsequent models in the literature (Tseng et al., 2005; Zhao, 2009; Sun et al., 2012) . The most effective features have been character unigrams, bigrams and trigrams within a five-character window, and a bigram tag window. Special characters such as alphabets, numbers and date/time characters are also differentiated for extracting features. Zheng et al. (2013) built a neural network segmentor, which essentially substitutes the manual discrete features of Peng et al. (2004) , with dense real-valued features induced automatically from character embeddings, using a deep neural network structure (Collobert et al., 2011) . A tag transition matrix is used for inference, which makes the model effectively. Most subsequent work on neural segmentation followed this method, improving the extraction of emission features by using more complex neural network structures. Mansur et al. (2013) experimented with embeddings of richer features, and in particular charac-ter bigrams. Pei et al. (2014) used a tensor neural network to achieve extensive feature combinations, capturing the interaction between characters and tags. Chen et al. (2015a) used a recursive network structure to the same end, extracting more combined features to model complicated character combinations in a five-character window. Chen et al. (2015b) used a LSTM model to capture long-range dependencies between characters in a sentence. Xu and Sun (2016) proposed a dependency-based gated recursive neural network to efficiently integrate local and long-distance features. The above methods are all character-based models, making no use of full word information. In contrast, we leverage both character embeddings and word embeddings for better accuracies.", "cite_spans": [ { "start": 15, "end": 25, "text": "Xue (2003)", "ref_id": "BIBREF35" }, { "start": 212, "end": 230, "text": "Peng et al. 
(2004)", "ref_id": "BIBREF21" }, { "start": 386, "end": 406, "text": "(Tseng et al., 2005;", "ref_id": "BIBREF27" }, { "start": 407, "end": 418, "text": "Zhao, 2009;", "ref_id": "BIBREF46" }, { "start": 419, "end": 436, "text": "Sun et al., 2012)", "ref_id": "BIBREF25" }, { "start": 695, "end": 714, "text": "Zheng et al. (2013)", "ref_id": "BIBREF47" }, { "start": 811, "end": 829, "text": "Peng et al. (2004)", "ref_id": "BIBREF21" }, { "start": 951, "end": 975, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF7" }, { "start": 1221, "end": 1241, "text": "Mansur et al. (2013)", "ref_id": "BIBREF18" }, { "start": 1329, "end": 1346, "text": "Pei et al. (2014)", "ref_id": "BIBREF20" }, { "start": 1474, "end": 1493, "text": "Chen et al. (2015a)", "ref_id": "BIBREF4" }, { "start": 1652, "end": 1671, "text": "Chen et al. (2015b)", "ref_id": "BIBREF5" }, { "start": 1759, "end": 1776, "text": "Xu and Sun (2016)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "For word-based segmentation, Andrew (2006) used a semi-CRF model to integrate word features, Zhang and Clark (2007) used a perceptron algorithm with inexact search, and Sun et al. (2009) used a discriminative latent variable model to make use of word features. Recently, there have been several neural-based models using word-level embedding features (Morita et al., 2015; Liu et al., 2016; Cai and Zhao, 2016) , which are different from our work in the basic framework. For instance, Liu et al. (2016) follow Andrew (2006) using a semi-CRF for structured inference.", "cite_spans": [ { "start": 29, "end": 42, "text": "Andrew (2006)", "ref_id": "BIBREF1" }, { "start": 93, "end": 115, "text": "Zhang and Clark (2007)", "ref_id": "BIBREF38" }, { "start": 169, "end": 186, "text": "Sun et al. (2009)", "ref_id": "BIBREF24" }, { "start": 351, "end": 372, "text": "(Morita et al., 2015;", "ref_id": "BIBREF19" }, { "start": 373, "end": 390, "text": "Liu et al., 2016;", "ref_id": "BIBREF16" }, { "start": 391, "end": 410, "text": "Cai and Zhao, 2016)", "ref_id": "BIBREF3" }, { "start": 485, "end": 502, "text": "Liu et al. (2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "We followed the global learning and beamsearch framework of Zhang and Clark (2011) in building a word-based neural segmentor. The main difference between our model and that of Zhang and Clark (2011) is that we use a neural network to induce feature combinations directly from character and word embeddings. In addition, the use of a bi-directional LSTM allows us to leverage non-local information from the word sequence, and look-ahead information from the incoming character sequence. The automatic neural features are complementary to the manual discrete features of Zhang and Clark (2011) . We show that our model can accommodate the integration of both types of features. This is similar in spirit to the work of Sun (2010) and Wang et al. (2014) , who integrated features of character-based and word-based segmentors.", "cite_spans": [ { "start": 60, "end": 82, "text": "Zhang and Clark (2011)", "ref_id": "BIBREF39" }, { "start": 176, "end": 198, "text": "Zhang and Clark (2011)", "ref_id": "BIBREF39" }, { "start": 569, "end": 591, "text": "Zhang and Clark (2011)", "ref_id": "BIBREF39" }, { "start": 717, "end": 727, "text": "Sun (2010)", "ref_id": "BIBREF26" }, { "start": 732, "end": 750, "text": "Wang et al. 
(2014)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "Transition-based framework with beam search has been widely exploited in a number of other NLP tasks, including syntactic parsing (Zhang and Nivre, 2011; Zhu et al., 2013) , information ex-traction (Li and Ji, 2014) and the work of joint models (Zhang et al., 2013b; Zhang et al., 2014) . Recently, the effectiveness of neural features has been studied for this framework. In the natural language parsing community, it has achieved great success. Representative work includes Zhou et al. (2015) , Weiss et al. (2015) , Watanabe and Sumita (2015) and Andor et al. (2016) . In this work, we apply the transition-based neural framework to Chinese segmentation, in order to exploit wordlevel neural features such as word embeddings.", "cite_spans": [ { "start": 130, "end": 153, "text": "(Zhang and Nivre, 2011;", "ref_id": "BIBREF40" }, { "start": 154, "end": 171, "text": "Zhu et al., 2013)", "ref_id": "BIBREF49" }, { "start": 198, "end": 215, "text": "(Li and Ji, 2014)", "ref_id": "BIBREF15" }, { "start": 245, "end": 266, "text": "(Zhang et al., 2013b;", "ref_id": "BIBREF43" }, { "start": 267, "end": 286, "text": "Zhang et al., 2014)", "ref_id": "BIBREF44" }, { "start": 476, "end": 494, "text": "Zhou et al. (2015)", "ref_id": "BIBREF48" }, { "start": 497, "end": 516, "text": "Weiss et al. (2015)", "ref_id": "BIBREF33" }, { "start": 519, "end": 545, "text": "Watanabe and Sumita (2015)", "ref_id": "BIBREF32" }, { "start": 550, "end": 569, "text": "Andor et al. (2016)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "We proposed a word-based neural model for Chinese segmentation, which exploits not only character embeddings as previous work does, but also word embeddings pre-trained from large scale corpus. The model achieved comparable performances compared with a discrete word-based baseline, and also the state-of-the-art characterbased neural models in the literature. We further demonstrated that the model can utilize discrete features conveniently, resulting in a combined model that achieved top performances compared with previous work. 
Finally, we conducted several comparisons to study the differences between our word-based model with character-based neural models, showing that they have different error characteristics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The term in this paper is used to denote the neural network structure with convolutional layers, which is different from the typical convolution neural network that has a pooling layer upon convolutional layers(Krizhevsky et al., 2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/SUTDNLP/NNTransitionSegmentor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://word2vec.googlecode.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In all our experiments, we fix the character unigram and bigram embeddings, because fine-tuning of these embeddings results in little changes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The code is released for research reference under GPL at https://github.com/SUTDNLP/NNSegmentation.8 We use a concatenation of character unigram and bigram embeddings at each position as the input to LSTM, because our experiments show that the character bigram embeddings are useful, without which character-based neural models are significantly lower than their discrete counterparts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Bi-directional LSTM is slightly better than a single leftright LSTM used inChen et al. (2015b).10 The results ofChen et al. (2015a) and Chen et al. (2015b) are not listed, because they take a preprocessing step by replacing Chinese idioms with a uniform symbol in their test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers, Yijia Liu and Hai Zhao for their constructive comments, which help to improve the final paper. This work is supported by National Natural Science Foundation of China (NSFC) under grant 61170148, Natural Science Foundation of Heilongjiang Province (China) under grant No.F2016036, the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design. 
Yue Zhang is the corresponding author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Globally normalized transition-based neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Andor", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Aliaksei", "middle": [], "last": "Severyn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Presta", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally nor- malized transition-based neural networks. In Pro- ceedings of the ACL 2016.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A hybrid markov/semi-markov conditional random field for sequence segmentation", "authors": [ { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galen Andrew. 2006. A hybrid markov/semi-markov conditional random field for sequence segmentation.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Proceedings of the 2006 Conference on EMNLP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "465--472", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 2006 Conference on EMNLP, pages 465-472, Sydney, Australia, July.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural word segmentation learning for Chinese", "authors": [ { "first": "Deng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deng Cai and Hai Zhao. 2016. Neural word segmen- tation learning for Chinese. In Proceedings of ACL 2016.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gated recursive neural network for chinese word segmentation", "authors": [ { "first": "Xinchi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Chenxi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53nd ACL", "volume": "", "issue": "", "pages": "1744--1753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for chinese word segmentation. 
In Proceedings of the 53nd ACL, pages 1744-1753, July.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Long short-term memory neural networks for chinese word segmentation", "authors": [ { "first": "Xinchi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Chenxi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 EMNLP", "volume": "", "issue": "", "pages": "1197--1206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for chinese word segmen- tation. In Proceedings of the 2015 EMNLP, pages 1197-1206, September.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Incremental parsing with the perceptron algorithm", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", "volume": "", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Brian Roark. 2004. Incremen- tal parsing with the perceptron algorithm. In Pro- ceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 111-118, Barcelona, Spain, July.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "M", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "P", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural lan- guage processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adaptive subgradient methods for online learning and stochastic optimization", "authors": [ { "first": "John", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "Elad", "middle": [], "last": "Hazan", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2011, "venue": "The Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2121--2159", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. 
The Journal of Ma- chine Learning Research, 12:2121-2159.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Neural crf parsing", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53nd ACL", "volume": "", "issue": "", "pages": "302--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2015. Neural crf pars- ing. In Proceedings of the 53nd ACL, pages 302- 312, July.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Transitionbased dependency parsing with stack long shortterm memory", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53nd ACL", "volume": "", "issue": "", "pages": "334--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proceedings of the 53nd ACL, pages 334-343, July.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The second international chinese word segmentation bakeoff", "authors": [ { "first": "Thomas", "middle": [ "Emerson" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "123--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Emerson. 2005. The second international chi- nese word segmentation bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Lan- guage Processing, pages 123-133.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Imagenet classification with deep convolutional neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "1097--1105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. 
In Advances in neural information processing systems, pages 1097-1105.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Efficient backprop", "authors": [ { "first": "L\u00e9on", "middle": [], "last": "Yann A Lecun", "suffix": "" }, { "first": "Genevieve", "middle": [ "B" ], "last": "Bottou", "suffix": "" }, { "first": "Klaus-Robert", "middle": [], "last": "Orr", "suffix": "" }, { "first": "", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2012, "venue": "Neural networks: Tricks of the trade", "volume": "", "issue": "", "pages": "9--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann A LeCun, L\u00e9on Bottou, Genevieve B Orr, and Klaus-Robert M\u00fcller. 2012. Efficient backprop. In Neural networks: Tricks of the trade, pages 9-48. Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Incremental joint extraction of entity mentions and relations", "authors": [ { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the ACL 2014.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Exploring segment representations for neural segmentation models", "authors": [ { "first": "Yijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. In Proceedings of IJCAI 2016.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Accurate linear-time chinese word segmentation via embedding matching", "authors": [ { "first": "Jianqiang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Erhard", "middle": [], "last": "Hinrichs", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53nd ACL", "volume": "", "issue": "", "pages": "1733--1743", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianqiang Ma and Erhard Hinrichs. 2015. Accurate linear-time chinese word segmentation via embed- ding matching. In Proceedings of the 53nd ACL, pages 1733-1743, July.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Feature-based neural language model and chinese word segmentation", "authors": [ { "first": "Mairgup", "middle": [], "last": "Mansur", "suffix": "" }, { "first": "Wenzhe", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1271--1277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mairgup Mansur, Wenzhe Pei, and Baobao Chang. 2013. Feature-based neural language model and chinese word segmentation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1271-1277, Nagoya, Japan, October. 
Asian Federation of Natural Lan- guage Processing.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Morphological analysis for unsegmented languages using recurrent neural network language model", "authors": [ { "first": "Hajime", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on EMNLP", "volume": "", "issue": "", "pages": "2292--2297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hajime Morita, Daisuke Kawahara, and Sadao Kuro- hashi. 2015. Morphological analysis for unseg- mented languages using recurrent neural network language model. In Proceedings of the 2015 Con- ference on EMNLP, pages 2292-2297.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Maxmargin tensor neural network for chinese word segmentation", "authors": [ { "first": "Wenzhe", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd ACL", "volume": "", "issue": "", "pages": "293--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max- margin tensor neural network for chinese word seg- mentation. In Proceedings of the 52nd ACL, pages 293-303, Baltimore, Maryland, June.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Chinese segmentation and new word detection using conditional random fields", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Fangfang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Coling", "volume": "", "issue": "", "pages": "562--568", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detec- tion using conditional random fields. In Proceedings of Coling 2004, pages 562-568, Geneva, Switzer- land, Aug 23-Aug 27.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A dual-layer crfs based joint decoding method for cascaded segmentation and labeling tasks", "authors": [ { "first": "Yanxin", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1707--1712", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yanxin Shi and Mengqiu Wang. 2007. A dual-layer crfs based joint decoding method for cascaded seg- mentation and labeling tasks. In IJCAI, pages 1707- 1712.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Enhancing chinese word segmentation using unlabeled data", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on EMNLP", "volume": "", "issue": "", "pages": "970--979", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing chinese word segmentation using unlabeled data. 
In Pro- ceedings of the 2011 Conference on EMNLP, pages 970-979, July.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A discriminative latent variable chinese segmenter with hybrid word/character information", "authors": [ { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yaozhong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NAACL 2009", "volume": "", "issue": "", "pages": "56--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshi- masa Tsuruoka, and Jun'ichi Tsujii. 2009. A dis- criminative latent variable chinese segmenter with hybrid word/character information. In Proceedings of NAACL 2009, pages 56-64, June.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection", "authors": [ { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th ACL", "volume": "", "issue": "", "pages": "253--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Sun, Houfeng Wang, and Wenjie Li. 2012. Fast on- line training with frequency-adaptive learning rates for chinese word segmentation and new word detec- tion. In Proceedings of the 50th ACL, pages 253- 262, July.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Word-based and character-based word segmentation models: Comparison and combination", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2010, "venue": "Coling 2010: Posters", "volume": "", "issue": "", "pages": "1211--1219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun. 2010. Word-based and character-based word segmentation models: Comparison and combi- nation. In Coling 2010: Posters, pages 1211-1219, August.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A conditional random field word segmenter for sighan bakeoff", "authors": [ { "first": "Huihsin", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "Pichuan", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the fourth SIGHAN workshop", "volume": "", "issue": "", "pages": "168--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A condi- tional random field word segmenter for sighan bake- off 2005. 
In Proceedings of the fourth SIGHAN workshop, pages 168-171.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Word representations: A simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev-Arie", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, July.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Effect of non-linear deep architecture in sequence labeling", "authors": [ { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1285--1291", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengqiu Wang and Christopher D. Manning. 2013. Effect of non-linear deep architecture in sequence labeling. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1285-1291, Nagoya, Japan, October. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improving chinese word segmentation and pos tagging with semi-supervised methods using large auto-analyzed data", "authors": [ { "first": "Yiou", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yoshimasa", "middle": [], "last": "Kazama", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "Proceedings of 5th IJCNLP", "volume": "", "issue": "", "pages": "309--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiou Wang, Jun'ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Tori- sawa. 2011. Improving chinese word segmenta- tion and pos tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of 5th IJCNLP, pages 309-317, Chiang Mai, Thailand, November.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Two knives cut better than one: Chinese word segmentation with dual decomposition", "authors": [ { "first": "Mengqiu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Voigt", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd ACL", "volume": "", "issue": "", "pages": "193--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mengqiu Wang, Rob Voigt, and Christopher D. Man- ning. 2014. Two knives cut better than one: Chi- nese word segmentation with dual decomposition. 
In Proceedings of the 52nd ACL, pages 193-198, Baltimore, Maryland, June.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Transitionbased neural constituent parsing", "authors": [ { "first": "Taro", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd ACL", "volume": "", "issue": "", "pages": "1169--1179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taro Watanabe and Eiichiro Sumita. 2015. Transition- based neural constituent parsing. In Proceedings of the 53rd ACL, pages 1169-1179, July.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Structured training for neural network transition-based parsing", "authors": [ { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd ACL", "volume": "", "issue": "", "pages": "323--333", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd ACL, pages 323-333, July.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Dependency-based gated recursive neural network for chinese word segmentation", "authors": [ { "first": "Jingjing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingjing Xu and Xu Sun. 2016. Dependency-based gated recursive neural network for chinese word seg- mentation. In Proceedings of ACL 2016.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Chinese word segmentation as character tagging", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2003, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "8", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue. 2003. Chinese word segmentation as character tagging. International Journal of Compu- tational Linguistics and Chinese Language Process- ing, 8(1).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Sequence-to-sequence neural net models for grapheme-to-phoneme conversion", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.00196" ] }, "num": null, "urls": [], "raw_text": "Sequence-to-sequence neural net models for grapheme-to-phoneme conversion. arXiv preprint arXiv:1506.00196.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Chinese segmentation with a word-based perceptron algorithm", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th ACL", "volume": "", "issue": "", "pages": "840--847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2007. Chinese seg- mentation with a word-based perceptron algorithm. 
In Proceedings of the 45th ACL, pages 840-847, Prague, Czech Republic, June.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Syntactic processing using the generalized perceptron and beam search", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics", "volume": "37", "issue": "1", "pages": "105--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2011. Syntactic pro- cessing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105-151.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Transition-based dependency parsing with rich non-local features", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th ACL", "volume": "", "issue": "", "pages": "188--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th ACL, pages 188-193, June.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Combining discrete and continuous features for deterministic transition-based dependency parsing", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 EMNLP", "volume": "", "issue": "", "pages": "1316--1321", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang and Yue Zhang. 2015. Combin- ing discrete and continuous features for determin- istic transition-based dependency parsing. In Pro- ceedings of the 2015 EMNLP, pages 1316-1321, September.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Exploring representations from unlabeled data with co-training for Chinese word segmentation", "authors": [ { "first": "Longkai", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Mairgup", "middle": [], "last": "Mansur", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the EMNLP 2013", "volume": "", "issue": "", "pages": "311--321", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013a. Exploring representations from un- labeled data with co-training for Chinese word seg- mentation. In Proceedings of the EMNLP 2013, pages 311-321, Seattle, Washington, USA, October.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Chinese parsing exploiting characters", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st ACL", "volume": "", "issue": "", "pages": "125--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013b. Chinese parsing exploiting characters. 
In Proceedings of the 51st ACL, pages 125-134, Au- gust.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Character-level chinese dependency parsing", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd ACL", "volume": "", "issue": "", "pages": "1326--1336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Character-level chinese dependency parsing. In Proceedings of the 52nd ACL, pages 1326-1336, Baltimore, Maryland, June.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Effective tag set selection in chinese word segmentation via conditional random field modeling", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Chang-Ning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bao-Liang", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of PACLIC", "volume": "20", "issue": "", "pages": "87--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective tag set selection in chinese word segmentation via conditional random field model- ing. In Proceedings of PACLIC, volume 20, pages 87-94. Citeseer.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Character-level dependencies in chinese: Usefulness and learning", "authors": [ { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the EACL", "volume": "", "issue": "", "pages": "879--887", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hai Zhao. 2009. Character-level dependencies in chi- nese: Usefulness and learning. In Proceedings of the EACL, pages 879-887, Athens, Greece, March.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Deep learning for Chinese word segmentation and POS tagging", "authors": [ { "first": "Xiaoqing", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Hanyang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on EMNLP", "volume": "", "issue": "", "pages": "647--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Con- ference on EMNLP, pages 647-657, Seattle, Wash- ington, USA, October.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "A neural probabilistic structuredprediction model for transition-based dependency parsing", "authors": [ { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd ACL", "volume": "", "issue": "", "pages": "1213--1222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. 
A neural probabilistic structured- prediction model for transition-based dependency parsing. In Proceedings of the 53rd ACL, pages 1213-1222, July.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Fast and accurate shiftreduce constituent parsing", "authors": [ { "first": "Muhua", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st ACL", "volume": "", "issue": "", "pages": "434--443", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shift- reduce constituent parsing. In Proceedings of the 51st ACL, pages 434-443, August.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Segmentation process of \"\u4e2d \u56fd (Chinese) \u5916 \u4f01 (foreign company) \u4e1a \u52a1 (business) \u53d1\u5c55 (develop) \u8fc5\u901f (quickly)\".", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Scorer for the neural transition-based Chinese word segmentation model. We denote the last word in the buffer as w 0 , the next incoming character as c 0 in the queue in consistent withFigure2, and the last applied action as a 0 .", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Input representations of LSTMS for r a (actions) r w (words) and r c (characters).", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "as shown inFigure 4(b).", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "(wi)) = 50, d(ea(ai)) = 20 d(ec(ci)) = 50, d(e bc (ci\u22121ci)) = 50 Training \u03bb = 10 \u22128 , \u03b1 = 0.01, \u03b7 = 0.2", "type_str": "figure", "uris": null }, "FIGREF6": { "num": null, "text": "F-measure against character length.", "type_str": "figure", "uris": null }, "FIGREF7": { "num": null, "text": "F-measure against word length, where the boxes with red dots denote the performances of word-based neural model, and the boxes with blue slant lines denote character-based neural model.", "type_str": "figure", "uris": null }, "TABREF1": { "type_str": "table", "num": null, "content": "", "text": "Statistics of datasets.", "html": null }, "TABREF3": { "type_str": "table", "num": null, "content": "
[Plot residue: development F-measure curves (approx. 0.80 to 1.00) comparing the neural and discrete feature models; graphical content not recoverable.]
", "text": "Feature experiments.", "html": null }, "TABREF5": { "type_str": "table", "num": null, "content": "", "text": "Main results on CTB60 test dataset.", "html": null } } } }