{ "paper_id": "D15-1041", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:28:01.443857Z" }, "title": "Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs", "authors": [ { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "miguel.ballesteros@upf.edu" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University", "location": { "settlement": "Pittsburgh", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "nasmith@cs.washington.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present extensions to a continuousstate dependency parsing method that makes it applicable to morphologically rich languages. Starting with a highperformance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.", "pdf_parse": { "paper_id": "D15-1041", "_pdf_hash": "", "abstract": [ { "text": "We present extensions to a continuousstate dependency parsing method that makes it applicable to morphologically rich languages. Starting with a highperformance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "At the heart of natural language parsing is the challenge of representing the \"state\" of an algorithmwhat parts of a parse have been built and what parts of the input string are not yet accounted foras it incrementally constructs a parse. Traditional approaches rely on independence assumptions, decomposition of scoring functions, and/or greedy approximations to keep this space manageable. Continuous-state parsers have been proposed, in which the state is embedded as a vector (Titov and Henderson, 2007; Stenetorp, 2013; Chen and Manning, 2014; Zhou et al., 2015; Weiss et al., 2015) . Dyer et al. 
reported state-of-the-art performance on English and Chinese benchmarks using a transition-based parser whose continuous-state embeddings were constructed using LSTM recurrent neural networks (RNNs), with parameters estimated to maximize the probability of a gold-standard sequence of parse actions.", "cite_spans": [ { "start": 480, "end": 507, "text": "(Titov and Henderson, 2007;", "ref_id": "BIBREF46" }, { "start": 508, "end": 524, "text": "Stenetorp, 2013;", "ref_id": "BIBREF44" }, { "start": 525, "end": 548, "text": "Chen and Manning, 2014;", "ref_id": "BIBREF8" }, { "start": 549, "end": 567, "text": "Zhou et al., 2015;", "ref_id": "BIBREF54" }, { "start": 568, "end": 587, "text": "Weiss et al., 2015)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The primary contribution made in this work is to take the idea of continuous-state parsing a step further by making the word embeddings that are used to construct the parse state sensitive to the morphology of the words. 1 Since it is well known that a word's form often provides strong evidence regarding its grammatical role in morphologically rich languages (Ballesteros, 2013, inter alia), this promises to improve accuracy and statistical efficiency relative to traditional approaches that treat each word type as opaque and independently modeled. In the traditional parameterization, words with similar grammatical roles will only be embedded near each other if they are observed in similar contexts with sufficient frequency. Our approach reparameterizes word embeddings using the same RNN machinery used in the parser: a word's vector is calculated based on the sequence of orthographic symbols representing it ( \u00a73).", "cite_spans": [ { "start": 221, "end": 222, "text": "1", "ref_id": null }, { "start": 364, "end": 395, "text": "(Ballesteros, 2013, inter alia)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although our model is provided no supervision in the form of explicit morphological annotation, we find that it gives a large performance increase when parsing morphologically rich languages in the SPMRL datasets (Seddah et al., 2013; Seddah and Tsarfaty, 2014), especially for agglutinative languages and those with extensive case systems ( \u00a74). In languages that show little morphology, performance remains good, showing that the RNN composition strategy is capable of capturing both morphological regularities and arbitrariness in the sense of Saussure (1916). Finally, a particularly noteworthy result is that character-based word embeddings in some cases obviate explicit POS information, which is usually found to be indispensable for accurate parsing.", "cite_spans": [ { "start": 213, "end": 234, "text": "(Seddah et al., 2013;", "ref_id": "BIBREF38" }, { "start": 235, "end": 261, "text": "Seddah and Tsarfaty, 2014)", "ref_id": "BIBREF37" }, { "start": 558, "end": 573, "text": "Saussure (1916)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A secondary contribution of this work is to show that the continuous-state parser of Dyer et al. (2015) can learn to generate nonprojective trees.
We do this by augmenting its transition operations with a SWAP operation (Nivre, 2009) ( \u00a72.4), enabling the parser to produce the nonprojective dependencies that are often found in morphologically rich languages.", "cite_spans": [ { "start": 201, "end": 214, "text": "(Nivre, 2009)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin by reviewing the parsing approach of Dyer et al. (2015), on which our work is based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An LSTM Dependency Parser", "sec_num": "2" }, { "text": "Like most transition-based parsers, Dyer et al.'s parser can be understood as the sequential manipulation of three data structures: a buffer B initialized with the sequence of words to be parsed, a stack S containing partially-built parses, and a list A of actions previously taken by the parser. In particular, the parser implements the arc-standard parsing algorithm (Nivre, 2004).", "cite_spans": [ { "start": 369, "end": 382, "text": "(Nivre, 2004)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "An LSTM Dependency Parser", "sec_num": "2" }, { "text": "At each time step t, a transition action is applied that alters these data structures by pushing or popping words from the stack and the buffer; the operations are listed in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 174, "end": 182, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "An LSTM Dependency Parser", "sec_num": "2" }, { "text": "Along with the discrete transitions above, the parser calculates a vector representation of the states of B, S, and A; at time step t these are denoted by b_t, s_t, and a_t, respectively. The total parser state at t is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An LSTM Dependency Parser", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_t = max{0, W[s_t; b_t; a_t] + d}", "eq_num": "(1)" } ], "section": "An LSTM Dependency Parser", "sec_num": "2" }, { "text": "where the matrix W and the vector d are learned parameters. This continuous-state representation p_t is used to decide which operation to apply next, updating B, S, and A (Figure 1). We elaborate on the design of b_t, s_t, and a_t using RNNs in \u00a72.1, on the representation of partial parses in S in \u00a72.2, and on the parser's decision mechanism in \u00a72.3. We discuss the inclusion of SWAP in \u00a72.4.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 181, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "An LSTM Dependency Parser", "sec_num": "2" }, { "text": "RNNs are functions that read a sequence of vectors incrementally; at time step t the vector x_t is read in and the hidden state h_t is computed from x_t and the previous hidden state h_{t-1}. In principle, this allows retaining information from time steps in the distant past, but the nonlinear \"squashing\" functions applied in the calculation of each h_t result in a decay of the error signal used in training with backpropagation.
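To make this recurrence concrete, here is a minimal sketch (in Python with NumPy; the dimensions, initialization, and names are ours and purely illustrative, not the parser's actual implementation):

import numpy as np

# Minimal vanilla-RNN step: the hidden state h_t is computed from the
# input x_t and the previous hidden state h_{t-1}.
rng = np.random.RandomState(0)
d_in, d_h = 4, 8                       # toy dimensions
W_xh = rng.randn(d_h, d_in) * 0.1
W_hh = rng.randn(d_h, d_h) * 0.1
b_h = np.zeros(d_h)

def rnn_step(x_t, h_prev):
    # tanh is the "squashing" nonlinearity; its derivative is at most 1.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(d_h)
for t in range(20):                    # read a sequence of 20 vectors
    h = rnn_step(rng.randn(d_in), h)

# Backpropagating one step multiplies the error signal by
# diag(1 - h_t^2) @ W_hh; repeatedly applying a factor whose norm is
# below 1 makes the signal decay over long sequences.
one_step_factor = np.diag(1.0 - h ** 2) @ W_hh
print(np.linalg.norm(one_step_factor, 2))

Because this per-step factor typically has norm below one, its repeated product drives gradients toward zero over long sequences; this is the "vanishing gradient" problem addressed next.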
LSTMs are a variant of RNNs designed to cope with this \"vanishing gradient\" problem using an extra memory \"cell\" (Hochreiter and Schmidhuber, 1997; Graves, 2013).", "cite_spans": [ { "start": 543, "end": 577, "text": "(Hochreiter and Schmidhuber, 1997;", "ref_id": "BIBREF24" }, { "start": 578, "end": 591, "text": "Graves, 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Stack LSTMs", "sec_num": "2.1" }, { "text": "Past work explains the computation within an LSTM through the metaphors of deciding how much of the current input to pass into memory (i_t) or forget (f_t). We refer interested readers to the original papers and present only the recursive equations updating the memory cell c_t and hidden state h_t given x_t, the previous hidden state h_{t-1}, and the memory cell c_{t-1}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack LSTMs", "sec_num": "2.1" }, { "text": "i_t = \u03c3(W_{ix} x_t + W_{ih} h_{t-1} + W_{ic} c_{t-1} + b_i); f_t = 1 - i_t; c_t = f_t \u2299 c_{t-1} + i_t \u2299 tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_c); o_t = \u03c3(W_{ox} x_t + W_{oh} h_{t-1} + W_{oc} c_t + b_o); h_t = o_t \u2299 tanh(c_t),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack LSTMs", "sec_num": "2.1" }, { "text": "where \u03c3 is the component-wise logistic sigmoid function and \u2299 is the component-wise (Hadamard) product. Parameters are all represented using W and b. This formulation differs slightly from the classic LSTM formulation in that it makes use of \"peephole connections\" (Gers et al., 2002) and defines the forget gate so that it sums with the input gate to 1 (Greff et al., 2015). To improve the representational capacity of LSTMs (and RNNs generally), they can be stacked in \"layers.\" In these architectures, the input to the LSTM at a higher layer at time t is the value of h_t computed by the layer below (and x_t is the input at the lowest layer).", "cite_spans": [ { "start": 263, "end": 282, "text": "(Gers et al., 2002)", "ref_id": "BIBREF16" }, { "start": 352, "end": 372, "text": "(Greff et al., 2015)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Stack LSTMs", "sec_num": "2.1" }, { "text": "The stack LSTM augments the left-to-right sequential model of the conventional LSTM with a stack pointer. As in the LSTM, new inputs are added in the right-most position, but the stack pointer indicates which LSTM cell provides c_{t-1} and h_{t-1} for the computation of the next iterate. Further, the stack LSTM provides a pop operation that moves the stack pointer to the previous element. Hence each of the parser data structures (B, S, and A) is implemented with its own stack LSTM, each with its own parameters. The values of b_t, s_t, and a_t are the h_t vectors from their respective stack LSTMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stack LSTMs", "sec_num": "2.1" },
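The following sketch (Python with NumPy; shapes, initialization, and the class name StackLSTM are ours, purely illustrative) implements the update equations above, including the peephole terms and the tied forget gate f_t = 1 - i_t, together with a stack pointer supporting push and pop:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(1)
d_x, d_h = 6, 10   # toy dimensions; the real parser learns these matrices
P = {name: rng.randn(*shape) * 0.1 for name, shape in {
    "W_ix": (d_h, d_x), "W_ih": (d_h, d_h), "W_ic": (d_h, d_h), "b_i": (d_h,),
    "W_cx": (d_h, d_x), "W_ch": (d_h, d_h), "b_c": (d_h,),
    "W_ox": (d_h, d_x), "W_oh": (d_h, d_h), "W_oc": (d_h, d_h), "b_o": (d_h,),
}.items()}

def lstm_step(x_t, h_prev, c_prev):
    # Input gate with peephole on c_{t-1}; forget gate tied to sum to 1.
    i_t = sigmoid(P["W_ix"] @ x_t + P["W_ih"] @ h_prev + P["W_ic"] @ c_prev + P["b_i"])
    f_t = 1.0 - i_t
    c_t = f_t * c_prev + i_t * np.tanh(P["W_cx"] @ x_t + P["W_ch"] @ h_prev + P["b_c"])
    # Output gate with peephole on the new cell c_t.
    o_t = sigmoid(P["W_ox"] @ x_t + P["W_oh"] @ h_prev + P["W_oc"] @ c_t + P["b_o"])
    return o_t * np.tanh(c_t), c_t

class StackLSTM:
    """A stack pointer over LSTM states: push computes a new state from the
    state at the pointer; pop just moves the pointer back. For simplicity
    this sketch discards states above the pointer on push."""
    def __init__(self):
        self.states = [(np.zeros(d_h), np.zeros(d_h))]  # empty-stack state
        self.ptr = 0
    def push(self, x_t):
        h_prev, c_prev = self.states[self.ptr]
        h_t, c_t = lstm_step(x_t, h_prev, c_prev)
        self.states = self.states[: self.ptr + 1] + [(h_t, c_t)]
        self.ptr += 1
    def pop(self):
        self.ptr -= 1
    def summary(self):   # the h_t read out as b_t, s_t, or a_t
        return self.states[self.ptr][0]

s = StackLSTM()
s.push(rng.randn(d_x)); s.push(rng.randn(d_x)); s.pop()
print(s.summary()[:3])

A pop merely moves the pointer, so the state that was current before the corresponding push becomes current again; this is what lets the B, S, and A summaries track their discrete contents.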
{ "text": "Whenever a REDUCE operation is selected, two tree fragments are popped off of S and combined to form a new tree fragment, which is then pushed back onto S (see Figure 1). This tree must be embedded as an input vector x_t.", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 168, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Composition Functions", "sec_num": "2.2" }, { "text": "To do this, Dyer et al. (2015) use a recursive neural network g_r (for relation r) that composes the representations of the two subtrees popped from S (we denote these by u and v), resulting in a new vector g_r(u, v) or g_r(v, u), depending on the direction of attachment. The resulting vector embeds the tree fragment in the same space as the words and other tree fragments. This kind of composition was thoroughly explored in prior work (Socher et al., 2011; Socher et al., 2013b; Hermann and Blunsom, 2013; Socher et al., 2013a); for details, see Dyer et al. (2015).", "cite_spans": [ { "start": 210, "end": 231, "text": "(Socher et al., 2011;", "ref_id": "BIBREF41" }, { "start": 232, "end": 253, "text": "Socher et al., 2013b;", "ref_id": "BIBREF43" }, { "start": 254, "end": 280, "text": "Hermann and Blunsom, 2013;", "ref_id": null }, { "start": 281, "end": 302, "text": "Socher et al., 2013a)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Composition Functions", "sec_num": "2.2" }, { "text": "Figure 1: Parser transitions indicating the action applied to the stack and buffer and the resulting stack and buffer states. Bold symbols indicate (learned) embeddings of words and relations; script symbols indicate the corresponding words and relations. Dyer et al. (2015) used the SHIFT and REDUCE operations in their continuous-state parser; we add SWAP.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 105, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Composition Functions", "sec_num": "2.2" }, { "text": "Stack_t | Buffer_t | Action | Stack_{t+1} | Buffer_{t+1} | Dependency
(u, u), (v, v), S | B | REDUCE-RIGHT(r) | (g_r(u, v), u), S | B | u -r-> v
(u, u), (v, v), S | B | REDUCE-LEFT(r) | (g_r(v, u), v), S | B | u <-r- v
S | (u, u), B | SHIFT | (u, u), S | B | -
(u, u), (v, v), S | B | SWAP | (u, u), S | (v, v), B | -", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Composition Functions", "sec_num": "2.2" }, { "text": "The parser uses a probabilistic model of parser decisions at each time step t. Letting A(S, B) denote the set of allowed transitions given the stack S and buffer B (i.e., those whose preconditions are met; see Figure 1), the probability of action z \u2208 A(S, B) is defined using a log-linear distribution:", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Predicting Parser Decisions", "sec_num": "2.3" }, { "text": "p(z | p_t) = exp(g_z^T p_t + q_z) / \u03a3_{z' \u2208 A(S,B)} exp(g_{z'}^T p_t + q_{z'})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Parser Decisions", "sec_num": "2.3" }, { "text": "(2) (where g_z and q_z are parameters associated with each action type z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Parser Decisions", "sec_num": "2.3" }, { "text": "Parsing proceeds by always choosing the most probable action from A(S, B). The probabilistic definition allows parameter estimation for all of the parameters (W_*, b_* in all three stack LSTMs, as well as W, d, g_*, and q_*) by maximizing the conditional likelihood of each correct parser decision given the state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Parser Decisions", "sec_num": "2.3" },
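As a concrete sketch of Eq. 2 and the greedy choice (Python with NumPy; the action inventory, dimensions, and random parameters are illustrative stand-ins for learned values):

import numpy as np

rng = np.random.RandomState(2)
d_p = 12
p_t = rng.randn(d_p)                              # parser-state vector
actions = ["SHIFT", "REDUCE-LEFT(r)", "REDUCE-RIGHT(r)", "SWAP"]
g = {z: rng.randn(d_p) * 0.1 for z in actions}    # per-action weight vectors
q = {z: 0.0 for z in actions}                     # per-action biases

def action_distribution(p_t, allowed):
    # Log-linear distribution over the allowed actions only (Eq. 2).
    scores = np.array([g[z] @ p_t + q[z] for z in allowed])
    scores -= scores.max()                        # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(allowed, probs))

# Greedy parsing picks the most probable action whose preconditions hold;
# e.g., REDUCE requires at least two items on the stack, so here only
# SHIFT and SWAP are in A(S, B).
allowed = ["SHIFT", "SWAP"]
dist = action_distribution(p_t, allowed)
print(dist, max(dist, key=dist.get))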
{ "text": "Dyer et al. (2015)'s parser implemented the most basic version of the arc-standard algorithm, which is capable of producing only projective parse trees. To handle nonprojective trees, we add the SWAP operation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding the SWAP Operation", "sec_num": "2.4" }, { "text": "The SWAP operation, first introduced by Nivre (2009), allows a transition-based parser to produce nonprojective trees. Here, the inclusion of the SWAP operation requires breaking the linearity of the stack by removing tokens that are not at the top of the stack. This is easily handled with the stack LSTM. Figure 1 shows how the parser is capable of moving words from the stack (S) to the buffer (B), breaking the linear order of words. Since a node that is swapped may have already been assigned as the head of a dependent, the buffer (B) can now also contain tree fragments.", "cite_spans": [], "ref_spans": [ { "start": 307, "end": 315, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Adding the SWAP Operation", "sec_num": "2.4" }, { "text": "The main contribution of this paper is to change the word representations. In this section, we present the standard word embeddings as in Dyer et al. (2015), and our improved word embeddings, designed to capture morphology based on orthographic strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Representations", "sec_num": "3" }, { "text": "Dyer et al.'s parser generates a word representation for each input token by concatenating two vectors: a vector representation for each word type (w) and a representation (t) of the POS tag of the token (if it is used), provided as auxiliary input to the parser. 2 A linear map (V) is applied to the resulting vector and passed through a component-wise ReLU:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline: Standard Word Embeddings", "sec_num": "3.1" }, { "text": "x = max{0, V[w; t] + b}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline: Standard Word Embeddings", "sec_num": "3.1" }, { "text": "For out-of-vocabulary words, the parser uses a single \"UNK\" token that is handled as a separate word at parsing time. This mapping is shown schematically in Figure 2.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Baseline: Standard Word Embeddings", "sec_num": "3.1" }, { "text": "Figure 2: Baseline model word embeddings for an in-vocabulary word that is tagged with POS tag NN (right) and an out-of-vocabulary word with POS tag JJ (left).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline: Standard Word Embeddings", "sec_num": "3.1" },
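A minimal sketch of this baseline lookup (Python with NumPy; the toy vocabularies and dimensions are ours): in-vocabulary words index a learned row, all other word types share the "UNK" row, and the concatenation with the POS vector is passed through the linear map and ReLU.

import numpy as np

rng = np.random.RandomState(3)
d_w, d_t, d_x = 32, 12, 20
word_vocab = {"party": 0, "the": 1, "UNK": 2}
pos_vocab = {"NN": 0, "JJ": 1, "DT": 2}
E_word = rng.randn(len(word_vocab), d_w) * 0.1   # learned word-type vectors
E_pos = rng.randn(len(pos_vocab), d_t) * 0.1     # learned POS vectors
V = rng.randn(d_x, d_w + d_t) * 0.1              # linear map
b = np.zeros(d_x)

def embed(word, pos):
    # All out-of-vocabulary word types collapse to the single "UNK" row.
    w = E_word[word_vocab.get(word, word_vocab["UNK"])]
    t = E_pos[pos_vocab[pos]]
    return np.maximum(0.0, V @ np.concatenate([w, t]) + b)

print(embed("party", "NN")[:4])      # in-vocabulary word
print(embed("fluffiest", "JJ")[:4])  # OOV word -> UNK embedding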
{ "text": "Following Ling et al. (2015), we compute character-based continuous-space vector embeddings of words using bidirectional LSTMs (Graves and Schmidhuber, 2005). When the parser initiates the learning process and populates the buffer with all the words of the sentence, it reads each word character by character from left to right and computes a continuous-space vector embedding of the character sequence, which is the h vector of the LSTM; we denote it by \u2192w. The same process is also applied in reverse (albeit with different parameters), computing a similar continuous-space vector embedding starting from the last character and finishing at the first (\u2190w); again each character is represented with an LSTM cell. After that, we concatenate these two vectors with a (learned) representation of the word's POS tag to produce the representation w. As in \u00a73.1, a linear map (V) is applied and passed through a component-wise ReLU.", "cite_spans": [ { "start": 125, "end": 155, "text": "(Graves and Schmidhuber, 2005)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Character-Based Embeddings of Words", "sec_num": "3.2" }, { "text": "x = max{0, V[\u2192w; \u2190w; t] + b}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-Based Embeddings of Words", "sec_num": "3.2" }, { "text": "This process is shown schematically in Figure 3.", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Character-Based Embeddings of Words", "sec_num": "3.2" }, { "text": "Note that under this representation, out-of-vocabulary words are represented with bidirectional LSTM encodings and thus will be \"close\" to other words that the parser has seen during training, ideally close to their more frequent, syntactically similar morphological relatives. We conjecture that this will give a clear advantage over a single \"UNK\" token for all the words that the parser does not see during training, as done by Dyer et al. (2015) and other parsers without additional resources. In \u00a74 we confirm this hypothesis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character-Based Embeddings of Words", "sec_num": "3.2" },
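The sketch below (Python with NumPy; toy dimensions, and standard LSTM gating for brevity rather than the exact variant of \u00a72.1) illustrates the bidirectional character composition: one LSTM reads the characters left to right, another with separate parameters reads them right to left, and the two final states are concatenated with the POS vector before the linear map and ReLU.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(4)
d_c, d_h, d_t, d_x = 8, 16, 12, 20

def make_lstm():
    # One parameter set per direction; gates are stacked in one matrix.
    return {k: rng.randn(*s) * 0.1 for k, s in {
        "Wx": (4 * d_h, d_c), "Wh": (4 * d_h, d_h), "b": (4 * d_h,)}.items()}

def lstm_run(params, xs):
    h = c = np.zeros(d_h)
    for x in xs:
        z = params["Wx"] @ x + params["Wh"] @ h + params["b"]
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h  # final hidden state: the ->w (or <-w) vector

fwd, bwd = make_lstm(), make_lstm()   # different parameters per direction
E_char = {ch: rng.randn(d_c) * 0.1 for ch in "abcdefghijklmnopqrstuvwxyz"}
E_pos = {"NN": rng.randn(d_t) * 0.1}
V = rng.randn(d_x, 2 * d_h + d_t) * 0.1
b = np.zeros(d_x)

def embed(word, pos):
    chars = [E_char[ch] for ch in word]
    w_fwd = lstm_run(fwd, chars)        # first character to last
    w_bwd = lstm_run(bwd, chars[::-1])  # last character to first
    return np.maximum(0.0, V @ np.concatenate([w_fwd, w_bwd, E_pos[pos]]) + b)

# Even an unseen word gets a representation from its spelling alone, so
# it can land near words with similar form.
print(embed("parties", "NN")[:4])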
{ "text": "We applied our parsing model and several variations of it to several parsing tasks and report results below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In order to find out whether the character-based representations are capable of learning the morphology of words, we applied the parser to morphologically rich languages, specifically the treebanks of the SPMRL shared task (Seddah et al., 2013; Seddah and Tsarfaty, 2014): Arabic (Maamouri et al., 2004), Basque (Aduriz et al., 2003), French (Abeill\u00e9 et al., 2003), German (Seeker and Kuhn, 2012), Hebrew (Sima'an et al., 2001), Hungarian (Vincze et al., 2010), Korean (Choi, 2013), Polish (\u015awidzi\u0144ski and Woli\u0144ski, 2010), and Swedish (Nivre et al., 2006b). For all the corpora of the SPMRL shared task we used the predicted POS tags provided by the shared task organizers. 3 For these datasets, evaluation is calculated using eval07.pl, which includes punctuation. We also experimented with the Turkish dependency treebank 4 (Oflazer et al., 2003) of the CoNLL-X shared task (Buchholz and Marsi, 2006). We used gold POS tags, as is common with the CoNLL-X data sets.", "cite_spans": [ { "start": 222, "end": 243, "text": "(Seddah et al., 2013;", "ref_id": "BIBREF38" }, { "start": 244, "end": 270, "text": "Seddah and Tsarfaty, 2014)", "ref_id": "BIBREF37" }, { "start": 280, "end": 303, "text": "(Maamouri et al., 2004)", "ref_id": "BIBREF27" }, { "start": 313, "end": 334, "text": "(Aduriz et al., 2003)", "ref_id": "BIBREF1" }, { "start": 344, "end": 366, "text": "(Abeill\u00e9 et al., 2003)", "ref_id": "BIBREF0" }, { "start": 376, "end": 399, "text": "(Seeker and Kuhn, 2012)", "ref_id": "BIBREF39" }, { "start": 409, "end": 430, "text": "(Sima'an et al., 2001", "ref_id": "BIBREF40" }, { "start": 444, "end": 465, "text": "(Vincze et al., 2010)", "ref_id": "BIBREF49" }, { "start": 540, "end": 561, "text": "(Nivre et al., 2006b)", "ref_id": "BIBREF32" }, { "start": 831, "end": 853, "text": "(Oflazer et al., 2003)", "ref_id": "BIBREF35" }, { "start": 881, "end": 907, "text": "(Buchholz and Marsi, 2006)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "To put our results in context with the most recent neural network transition-based parsers, we run the parser in the same Chinese and English setups as Chen and Manning (2014) and Dyer et al. (2015). For Chinese, we use the Penn Chinese Treebank 5.1 (CTB5) following Zhang and Clark (2008b), 5 with gold POS tags. For English, we used the Stanford Dependency (SD) representation of the Penn Treebank 6 (Marcus et al., 1993; Marneffe et al., 2006). 7 Results for Turkish, Chinese, and English are calculated using the CoNLL-X eval.pl script, which ignores punctuation symbols.", "cite_spans": [ { "start": 152, "end": 175, "text": "Chen and Manning (2014)", "ref_id": "BIBREF8" }, { "start": 249, "end": 272, "text": "Zhang and Clark (2008b)", "ref_id": "BIBREF52" }, { "start": 385, "end": 406, "text": "(Marcus et al., 1993;", "ref_id": "BIBREF28" }, { "start": 407, "end": 429, "text": "Marneffe et al., 2006)", "ref_id": "BIBREF29" }, { "start": 432, "end": 433, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.1" }, { "text": "In order to isolate the improvements provided by the LSTM encodings of characters, we run the stack LSTM parser in the following configurations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Configurations", "sec_num": "4.2" }, { "text": "\u2022 Words: words only, as in \u00a73.1 (but without POS tags)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Configurations", "sec_num": "4.2" }, { "text": "\u2022 Chars: character-based representations of words with bidirectional LSTMs, as in \u00a73.2 (but without POS tags)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Configurations", "sec_num": "4.2" }, { "text": "\u2022 Words + POS: words and POS tags ( \u00a73.1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Configurations", "sec_num": "4.2" }, { "text": "\u2022 Chars + POS: character-based representations of words with bidirectional LSTMs plus POS tags ( \u00a73.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Configurations", "sec_num": "4.2" }, { "text": "None of the experimental configurations include pretrained word embeddings or any additional data resources. All experiments include the SWAP transition, meaning that nonprojective trees can be produced in any language.
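For bookkeeping, the four configurations can be enumerated mechanically, as in this small sketch (the field names are ours, for illustration only):

from itertools import product

# The 2x2 experiment grid: word representation (lookup vs. characters)
# crossed with whether POS tags are appended.
configs = [
    {"name": f"{rep}{' + POS' if pos else ''}", "word_repr": rep, "use_pos": pos}
    for rep, pos in product(["Words", "Chars"], [False, True])
]
for c in configs:
    print(c["name"])   # Words, Words + POS, Chars, Chars + POS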
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Configurations", "sec_num": "4.2" }, { "text": "Dimensionality. The full version of our parsing model sets dimensionalities as follows. LSTM hidden states are of size 100, and we use two layers of LSTMs for each stack. Embeddings of the parser actions used in the composition functions have 20 dimensions, and the output embedding size is 20 dimensions. The learned word embeddings have 32 dimensions and the character-based representations 100 dimensions, when used. Part-of-speech embeddings have 12 dimensions. These dimensionalities were chosen after running several tests with different values; a more careful selection of these values would probably further improve results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Configurations", "sec_num": "4.2" }, { "text": "Parameters are initialized randomly (refer to Dyer et al. (2015) for specifics) and optimized using stochastic gradient descent (without minibatches), using derivatives of the negative log likelihood of the sequence of parsing actions computed with backpropagation. Training is stopped when the learned model's UAS stops improving on the development set, and this model is used to parse the test set. No pretraining of any parameters is done.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "4.3" },
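A skeleton of this regime (Python; the helper functions below are trivial stand-ins that we name only for illustration, not the parser's real implementation):

import random

def forward_backward(model, sentence, gold_actions):
    # Stand-in for: loss = -sum_t log p(gold action_t | parser state p_t),
    # with gradients computed by backpropagation.
    return 0.0, {}

def sgd_update(model, grads, lr):
    pass                            # stand-in for: p -= lr * g per parameter

def parse_uas(model, dev_sents):
    return random.random()          # stand-in for dev-set attachment score

def train(model, train_sents, dev_sents, lr=0.1, patience=1):
    best_uas, best_model, stale = 0.0, None, 0
    while stale <= patience:        # stop when dev UAS stops improving
        for sentence, gold_actions in train_sents:
            loss, grads = forward_backward(model, sentence, gold_actions)
            sgd_update(model, grads, lr)        # plain SGD, no minibatches
        uas = parse_uas(model, dev_sents)
        if uas > best_uas:
            best_uas, best_model, stale = uas, model, 0
        else:
            stale += 1
    return best_model               # the model used to parse the test set

random.seed(5)
train(model={}, train_sents=[([], [])], dev_sents=[])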
", "cite_spans": [ { "start": 148, "end": 163, "text": "Tsarfaty (2006)", "ref_id": "BIBREF48" }, { "start": 168, "end": 190, "text": "Cohen and Smith (2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.4" }, { "text": "The character-based representation for words is notably beneficial for out-of-vocabulary (OOV) words. We tested this specifically by comparing Chars to a model in which all OOVs are replaced by the string \"UNK\" during parsing. This always has a negative effect on LAS (average \u22124.5 points, \u22122.8 UAS). Figure 5 shows how this drop varies with the development OOV rate across treebanks; most extreme is Korean, which drops 15.5 LAS. A similar, but less pronounced pattern, was observed for models that include POS. Interestingly, this artificially impoverished model is still consistently better than Words for all languages (e.g., for Korean, by 4 LAS). This implies that not all of the improvement is due to OOV words; statistical sharing across orthographically close words is beneficial, as well.", "cite_spans": [], "ref_spans": [ { "start": 301, "end": 309, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Out-of-Vocabulary Words", "sec_num": "4.4.2" }, { "text": "The character-based representations make the parser slower, since they require composing the character-based bidirectional LSTMs for each word of the input sentence; however, at test time these results could be cached. On average, Words parses a sentence in 44 ms, whileChars needs 130 ms. 9 Training time is affected by the same cons- 9 We are using a machine with 32 Intel Xeon CPU E5-2650 at 2.00GHz; the parser runs on a single core. tant, needing some hours to have a competitive model. In terms of memory, Words requires on average 300 MB of main memory for both training and parsing, while Chars requires 450 MB. Table 3 shows a comparison with state-of-theart parsers. We include greedy transition-based parsers that, like ours, do not apply a beam search (Zhang and Clark, 2008b) or a dynamic oracle . For all the SPMRL languages we show the results of Ballesteros (2013), who reported results after carrying out a careful automatic morphological feature selection experiment. For Turkish, we show the results of Nivre et al. (2006a) which also carried out a careful manual morphological feature selection. Our parser outperforms these in most cases. Since those systems rely on morphological features, we believe that this comparison shows even more that the character-based representations are capturing morphological information, though without explicit morphological features. For English and Chinese, we report which is Words + POS but with pretrained word embeddings.", "cite_spans": [ { "start": 336, "end": 337, "text": "9", "ref_id": null }, { "start": 764, "end": 788, "text": "(Zhang and Clark, 2008b)", "ref_id": "BIBREF52" }, { "start": 1022, "end": 1042, "text": "Nivre et al. (2006a)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 620, "end": 627, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Computational Requirements", "sec_num": "4.4.3" }, { "text": "We also show the best reported results on these datasets. For the SPMRL data sets, the best performing system of the shared task is either Bj\u00f6rkelund et al. (2013) or Bj\u00f6rkelund et al. 
(2014), which are consistently better than our system for all languages. Note that the comparison is harsh to our system, which uses neither unlabeled data, explicit morphological features, nor any combination of different parsers. For Turkish, we report the results of Koo et al. (2010), who only reported unlabeled attachment scores. For English, we report Weiss et al. (2015), and for Chinese, we report Dyer et al. (2015), which is Words + POS but with pretrained word embeddings.", "cite_spans": [ { "start": 139, "end": 163, "text": "Bj\u00f6rkelund et al. (2013)", "ref_id": "BIBREF3" }, { "start": 167, "end": 191, "text": "Bj\u00f6rkelund et al. (2014)", "ref_id": "BIBREF4" }, { "start": 723, "end": 743, "text": "Nivre et al. (2006a)", "ref_id": "BIBREF31" }, { "start": 766, "end": 790, "text": "Bj\u00f6rkelund et al. (2013)", "ref_id": "BIBREF3" }, { "start": 802, "end": 826, "text": "Bj\u00f6rkelund et al. (2014)", "ref_id": "BIBREF4" }, { "start": 838, "end": 855, "text": "Koo et al. (2010)", "ref_id": "BIBREF25" }, { "start": 867, "end": 886, "text": "Weiss et al. (2015)", "ref_id": "BIBREF50" } ], "ref_spans": [ { "start": 238, "end": 245, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Comparison with State-of-the-Art", "sec_num": "4.4.4" }, { "text": "Table 3: Test-set performance of our best results (according to UAS or LAS, whichever has the larger difference), compared to state-of-the-art greedy transition-based parsers (\"Best Greedy Result\") and the best results reported (\"Best Published Result\"). All of the systems we compare against use explicit morphological features and/or one of the following: pretrained word embeddings, unlabeled data, and a combination of parsers; our models do not. B'13 is Ballesteros (2013); N+'06a is Nivre et al. (2006a); D+'15 is Dyer et al. (2015); B+'13 is Bj\u00f6rkelund et al. (2013); B+'14 is Bj\u00f6rkelund et al. (2014); K+'10 is Koo et al. (2010); W+'15 is Weiss et al. (2015).", "cite_spans": [ { "start": 221, "end": 238, "text": "Koo et al. (2010)", "ref_id": "BIBREF25" }, { "start": 313, "end": 333, "text": "(Weiss et al., 2015)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with State-of-the-Art", "sec_num": "4.4.4" }, { "text": "Character-based representations have been explored in other NLP tasks; for instance, dos Santos and Zadrozny (2014) and dos Santos and Guimar\u00e3es (2015) learned character-level neural representations for POS tagging and named entity recognition, achieving a large error reduction in both tasks. Our approach is similar to theirs. Others have used character-based models as features to improve existing models. For instance, Chrupa\u0142a (2014) used character-based recurrent neural networks to normalize tweets. Botha and Blunsom (2014) show that stems, prefixes, and suffixes can be used to learn useful word representations, but they rely on an external morphological analyzer. That is, they learn the morpheme-meaning relationship with an additive model, whereas we do not need a morphological analyzer.
Similarly, Chen et al. (2015) proposed joint learning of character and word embeddings for Chinese, claiming that characters contain rich information.", "cite_spans": [ { "start": 505, "end": 529, "text": "Botha and Blunsom (2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Methods for joint morphological disambiguation and parsing have been widely explored (Tsarfaty, 2006; Cohen and Smith, 2007; Goldberg and Tsarfaty, 2008; Goldberg and Elhadad, 2011). More recently, Bohnet et al. (2013) presented an arc-standard transition-based parser that performs competitively for joint morphological tagging and dependency parsing for richly inflected languages, such as Czech, Finnish, German, Hungarian, and Russian. Our model seeks to achieve a similar benefit to parsing without explicitly reasoning about the internal structure of words. Zhang et al. (2013) presented efforts on Chinese parsing with characters, showing that Chinese can be parsed at the character level and that Chinese word segmentation is useful for predicting the correct POS tags (Zhang and Clark, 2008a).", "cite_spans": [ { "start": 85, "end": 100, "text": "Tsarfaty (2006;", "ref_id": "BIBREF48" }, { "start": 101, "end": 123, "text": "Cohen and Smith (2007;", "ref_id": "BIBREF12" }, { "start": 124, "end": 152, "text": "Goldberg and Tsarfaty (2008;", "ref_id": "BIBREF19" }, { "start": 153, "end": 180, "text": "Goldberg and Elhadad (2011)", "ref_id": "BIBREF17" }, { "start": 198, "end": 218, "text": "Bohnet et al. (2013)", "ref_id": "BIBREF5" }, { "start": 564, "end": 583, "text": "Zhang et al. (2013)", "ref_id": "BIBREF53" }, { "start": 777, "end": 801, "text": "(Zhang and Clark, 2008a)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "To the best of our knowledge, previous work has not used character-based embeddings to improve dependency parsers, as done in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We have presented several interesting findings. First, we add new evidence that character-based representations are useful for NLP tasks. In this paper, we demonstrate that they are useful for transition-based dependency parsing, since they are capable of capturing morphological information crucial for analyzing syntax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The improvements provided by the character-based representations using bidirectional LSTMs are strong for agglutinative languages, such as Basque, Hungarian, Korean, and Turkish, comparing favorably to POS tags as encoded in those languages' currently available treebanks. This outcome is important, since annotating morphological information for a treebank is expensive. Our finding suggests that the best investment of annotation effort may be in dependencies, leaving morphological features to be learned implicitly from strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The character-based representations are also a way of overcoming the out-of-vocabulary problem; without any additional resources, they enable the parser to substantially improve performance when OOV rates are high.
We expect that, in conjunction with a pretraining regime, or in conjunction with distributional word embeddings, further improvements could be realized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Software for replicating the experiments is available from https://github.com/clab/lstm-parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Dyer et al. (2015) included a third input representation learned from a neural language model (wLM). We do not include these pretrained representations in our experiments, focusing instead on character-based representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The POS tags were calculated with the MarMot tagger (M\u00fcller et al., 2013) by the best performing system of the SPMRL shared task (Bj\u00f6rkelund et al., 2013). Arabic: 97.38. Basque: 97.02. French: 97.61. German: 98.10. Hebrew: 97.09. Hungarian: 98.72. Korean: 94.03. Polish: 98.12. Swedish: 97.27. 4 Since the Turkish dependency treebank does not have a development set, we extracted the last 150 sentences from the 4996 sentences of the training set as a development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Training: 001-815, 1001-1136. Development: 886-931, 1148-1151. Test: 816-885, 1137-1147. 6 Training: 02-21. Development: 22. Test: 23. 7 The POS tags are predicted by using the Stanford Tagger (Toutanova et al., 2003) with an accuracy of 97.3%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Tense and number features provide little improvement in a transition-based parser, compared with other features such as case, when the POS tags are included (Ballesteros, 2013).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "MB was supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA). This research was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant number W911NF-10-1-0533 and NSF IIS-1054319. This work was completed while NAS was at CMU. Thanks to Joakim Nivre, Bernd Bohnet, Fei Liu, and Swabha Swayamdipta for useful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Building a treebank for French", "authors": [ { "first": "Anne", "middle": [], "last": "Abeill\u00e9", "suffix": "" }, { "first": "Lionel", "middle": [], "last": "Cl\u00e9ment", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Toussenel", "suffix": "" } ], "year": 2003, "venue": "Treebanks", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne Abeill\u00e9, Lionel Cl\u00e9ment, and Fran\u00e7ois Toussenel. 2003. Building a treebank for French. In Treebanks.
Springer.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Arantza D\u00edaz de Ilarraza, Aitzpea Garmendia, and Maite Oronoz", "authors": [ { "first": "Itziar", "middle": [], "last": "Aduriz", "suffix": "" }, { "first": "Jose", "middle": [ "Mari" ], "last": "Mar\u00eda Jes\u00fas Aranzabe", "suffix": "" }, { "first": "Aitziber", "middle": [], "last": "Arriola", "suffix": "" }, { "first": "", "middle": [], "last": "Atutxa", "suffix": "" } ], "year": 2003, "venue": "Proc of TLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itziar Aduriz, Mar\u00eda Jes\u00fas Aranzabe, Jose Mari Arriola, Aitziber Atutxa, Arantza D\u00edaz de Ilarraza, Aitzpea Garmendia, and Maite Oronoz. 2003. Construction of a Basque dependency treebank. In Proc of TLT.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Effective morphological feature selection with maltoptimizer at the SPMRL 2013 shared task", "authors": [ { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" } ], "year": 2013, "venue": "Proc. of SPMRL-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miguel Ballesteros. 2013. Effective morphological feature selection with maltoptimizer at the SPMRL 2013 shared task. In Proc. of SPMRL-EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Re)ranking Meets Morphosyntax: State-of-the-art Results from the SPMRL 2013 Shared Task", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Ozlem", "middle": [], "last": "Cetinoglu", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Seeker", "suffix": "" } ], "year": 2013, "venue": "SPMRL-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Bj\u00f6rkelund, Ozlem Cetinoglu, Rich\u00e1rd Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (Re)ranking Meets Morphosyntax: State-of-the-art Results from the SPMRL 2013 Shared Task. In SPMRL-EMNLP.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Introducing the IMS-Wroc\u0142aw-Szeged-CIS entry at the SPMRL 2014 Shared Task: Reranking and Morpho-syntax meet Unlabeled Data", "authors": [ { "first": "Anders", "middle": [], "last": "Bj\u00f6rkelund", "suffix": "" }, { "first": "Agnieszka", "middle": [], "last": "\u00d6zlem \u00c7 Etinoglu", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Fale\u0144ska", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "", "middle": [], "last": "Mueller", "suffix": "" } ], "year": 2014, "venue": "SPMRL-SANCL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders Bj\u00f6rkelund,\u00d6zlem \u00c7 etinoglu, Agnieszka Fale\u0144ska, Rich\u00e1rd Farkas, Thomas Mueller, Wolf- gang Seeker, and Zsolt Sz\u00e1nt\u00f3. 2014. Introducing the IMS-Wroc\u0142aw-Szeged-CIS entry at the SPMRL 2014 Shared Task: Reranking and Morpho-syntax meet Unlabeled Data. 
In SPMRL-SANCL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Joint morphological and syntactic analysis for richly inflected languages", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Boguslavsky", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet, Joakim Nivre, Igor Boguslavsky, Richard Farkas, Filip Ginter, and Jan Haji\u010d. 2013. Joint morphological and syntactic analysis for richly inflected languages. TACL, 1.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Compositional Morphology for Word Representations and Language Modelling", "authors": [ { "first": "Jan", "middle": [ "A" ], "last": "Botha", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan A. Botha and Phil Blunsom. 2014. Composi- tional Morphology for Word Representations and Language Modelling. In ICML.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "CoNLL-X", "authors": [ { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2006, "venue": "Proc of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X. In Proc of CoNLL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proc. EMNLP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Joint learning of character and word embeddings", "authors": [ { "first": "Xinxiong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" } ], "year": 2015, "venue": "Proc. IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huanbo Luan. 2015. Joint learning of character and word embeddings. In Proc. IJCAI.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Preparing Korean Data for the Shared Task on Parsing Morphologically Rich Languages", "authors": [ { "first": "D", "middle": [], "last": "Jinho", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinho D. Choi. 2013. Preparing Korean Data for the Shared Task on Parsing Morphologically Rich Lan- guages. 
ArXiv e-prints, September.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Normalizing tweets with edit scripts and recurrent neural embeddings", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "" } ], "year": 2014, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grzegorz Chrupa\u0142a. 2014. Normalizing tweets with edit scripts and recurrent neural embeddings. In Proc of ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Joint morphological and syntactic disambiguation", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "Proc. EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen and Noah A. Smith. 2007. Joint mor- phological and syntactic disambiguation. In Proc. EMNLP-CoNLL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Boosting named entity recognition with neural character embeddings. Arxiv", "authors": [ { "first": "Cicero", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Santos", "suffix": "" }, { "first": "", "middle": [], "last": "Guimar\u00e3es", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero Nogueira dos Santos and Victor Guimar\u00e3es. 2015. Boosting named entity recognition with neu- ral character embeddings. Arxiv.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning character-level representations for part-ofspeech tagging", "authors": [ { "first": "Santos", "middle": [], "last": "Cicero Dos", "suffix": "" }, { "first": "Bianca", "middle": [], "last": "Zadrozny", "suffix": "" } ], "year": 2014, "venue": "Proc of ICML-14", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of- speech tagging. In Proc of ICML-14.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Transitionbased dependency parsing with stack long shortterm memory", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proc of ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning precise timing with LSTM recurrent networks", "authors": [ { "first": "Felix", "middle": [ "A" ], "last": "Gers", "suffix": "" }, { "first": "Nicol", "middle": [ "N" ], "last": "Schraudolph", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix A. 
Gers, Nicol N. Schraudolph, and J\u00fcrgen Schmidhuber. 2002. Learning precise timing with LSTM recurrent networks. JMLR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Joint Hebrew segmentation and parsing using a PCFG-LA lattice parser", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2011, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Michael Elhadad. 2011. Joint Hebrew segmentation and parsing using a PCFG-LA lattice parser. In Proc of ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Training deterministic parsers with non-deterministic oracles", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. TACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A single generative model for joint morphological segmentation and syntactic parsing", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2008, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Reut Tsarfaty. 2008. A single generative model for joint morphological segmentation and syntactic parsing. In Proc of ACL.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2005, "venue": "Neural Networks", "volume": "18", "issue": "", "pages": "5--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Generating sequences with recurrent neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "LSTM: A search space odyssey", "authors": [ { "first": "Klaus", "middle": [], "last": "Greff", "suffix": "" }, { "first": "Rupesh", "middle": [ "Kumar" ], "last": "Srivastava", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Koutn\u00edk", "suffix": "" }, { "first": "Bas", "middle": [ "R" ], "last": "Steunebrink", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn\u00edk, Bas R.
Steunebrink, and J\u00fcrgen Schmidhuber. 2015. LSTM: A search space odyssey. CoRR, abs/1503.04069.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The role of syntax in vector space models of compositional semantics", "authors": [], "year": 2013, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann and Phil Blunsom. 2013. The role of syntax in vector space models of compositional semantics. In Proc. ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Dual decomposition for parsing with non-projective head automata", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" } ], "year": 2010, "venue": "Proc of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proc of EMNLP.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Finding function in form: Compositional character models for open vocabulary word representation", "authors": [ { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Tiago", "middle": [], "last": "Lu\u00eds", "suffix": "" }, { "first": "Lu\u00eds", "middle": [], "last": "Marujo", "suffix": "" }, { "first": "Ram\u00f3n", "middle": [], "last": "Fernandez Astudillo", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Amir", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Isabel", "middle": [], "last": "Trancoso", "suffix": "" } ], "year": 2015, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang Ling, Tiago Lu\u00eds, Lu\u00eds Marujo, Ram\u00f3n Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proc.
EMNLP.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus", "authors": [ { "first": "Mohamed", "middle": [], "last": "Maamouri", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Bies", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Buckwalter", "suffix": "" }, { "first": "Wigdan", "middle": [], "last": "Mekki", "suffix": "" } ], "year": 2004, "venue": "NEMLAR Conference on Arabic Language Resources and Tools", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus. In NEMLAR Conference on Arabic Language Resources and Tools.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Building a large annotated corpus of English: the Penn treebank", "authors": [ { "first": "Mitchell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19(2):313-330.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "de Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "MacCartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proc of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proc of LREC.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Efficient higher-order CRFs for morphological tagging", "authors": [ { "first": "Thomas", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2013, "venue": "Proc of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas M\u00fcller, Helmut Schmid, and Hinrich Sch\u00fctze. 2013. Efficient higher-order CRFs for morphological tagging. In Proc of EMNLP.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Labeled pseudo-projective dependency parsing with support vector machines", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "G\u00fclsen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Svetoslav", "middle": [], "last": "Marinov", "suffix": "" } ], "year": 2006, "venue": "Proc of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, G\u00fclsen Eryigit, and Svetoslav Marinov. 2006a.
Labeled pseudo-projective dependency parsing with support vector machines. In Proc of CoNLL.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Talbanken05: A Swedish treebank with phrase structure and dependency annotation", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2006, "venue": "Proc of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Jens Nilsson, and Johan Hall. 2006b. Talbanken05: A Swedish treebank with phrase structure and dependency annotation. In Proc of LREC, Genoa, Italy.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Incrementality in deterministic dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2004, "venue": "Proc of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proc of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Non-projective dependency parsing in expected linear time", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2009, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proc of ACL.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Building a Turkish treebank", "authors": [ { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "Bilge", "middle": [], "last": "Say", "suffix": "" }, { "first": "Dilek", "middle": [ "Zeynep" ], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "G\u00f6khan", "middle": [], "last": "T\u00fcr", "suffix": "" } ], "year": 2003, "venue": "Treebanks", "volume": "", "issue": "", "pages": "261--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kemal Oflazer, Bilge Say, Dilek Zeynep Hakkani-T\u00fcr, and G\u00f6khan T\u00fcr. 2003. Building a Turkish treebank. In Treebanks, pages 261-277. Springer.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Nature of the linguistic sign", "authors": [ { "first": "Ferdinand", "middle": [], "last": "Saussure", "suffix": "" } ], "year": 1916, "venue": "Course in General Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ferdinand Saussure. 1916. Nature of the linguistic sign. In Course in General Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Introducing the SPMRL 2014 shared task on parsing morphologically-rich languages", "authors": [ { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Djam\u00e9 Seddah and Reut Tsarfaty. 2014. Introducing the SPMRL 2014 shared task on parsing morphologically-rich languages.
SPMRL-SANCL 2014.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Overview of the SPMRL 2013 shared task: cross-framework evaluation of parsing morphologically rich languages", "authors": [ { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Iakes", "middle": [], "last": "Goenaga", "suffix": "" }, { "first": "Koldo", "middle": [], "last": "Gojenola Galletebeitia", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Spence", "middle": [], "last": "Green", "suffix": "" }, { "first": "Nizar", "middle": [], "last": "Habash", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Maier", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Przepi\u00f3rkowski", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Seeker", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Versley", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Woli\u0144ski", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Wr\u00f3blewska", "suffix": "" }, { "first": "Eric", "middle": [ "Villemonte" ], "last": "de la Clergerie", "suffix": "" } ], "year": 2013, "venue": "SPMRL-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Djam\u00e9 Seddah, Reut Tsarfaty, Sandra K\u00fcbler, Marie Candito, Jinho D. Choi, Rich\u00e1rd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi\u00f3rkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli\u0144ski, Alina Wr\u00f3blewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: cross-framework evaluation of parsing morphologically rich languages. In SPMRL-EMNLP 2013.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Making Ellipses Explicit in Dependency Conversion for a German Treebank", "authors": [ { "first": "Wolfgang", "middle": [], "last": "Seeker", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2012, "venue": "Proc of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wolfgang Seeker and Jonas Kuhn. 2012. Making Ellipses Explicit in Dependency Conversion for a German Treebank. In Proc of LREC.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Building a Tree-Bank for Modern Hebrew Text", "authors": [ { "first": "Khalil", "middle": [], "last": "Sima'an", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Itai", "suffix": "" }, { "first": "Yoad", "middle": [], "last": "Winter", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Altman", "suffix": "" }, { "first": "Noa", "middle": [], "last": "Nativ", "suffix": "" } ], "year": 2001, "venue": "Traitement Automatique des Langues", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khalil Sima'an, Alon Itai, Yoad Winter, Alon Altman, and Noa Nativ. 2001. Building a Tree-Bank for Modern Hebrew Text.
In Traitement Automatique des Langues.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Eric", "middle": [ "H" ], "last": "Huang", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proc of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proc of NIPS.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Grounded compositional semantics for finding and describing images with sentences", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2013a. Grounded compositional semantics for finding and describing images with sentences. TACL.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [ "Y" ], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proc of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc of EMNLP.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Transition-based dependency parsing using recursive neural networks", "authors": [ { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" } ], "year": 2013, "venue": "Proc of NIPS Deep Learning Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pontus Stenetorp. 2013. Transition-based dependency parsing using recursive neural networks.
In Proc of NIPS Deep Learning Workshop.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Towards a bank of constituent parse trees for Polish", "authors": [ { "first": "Marek", "middle": [], "last": "\u015awidzi\u0144ski", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Woli\u0144ski", "suffix": "" } ], "year": 2010, "venue": "Proc of TSD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marek \u015awidzi\u0144ski and Marcin Woli\u0144ski. 2010. Towards a bank of constituent parse trees for Polish. In Proc of TSD.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A latent variable model for generative dependency parsing", "authors": [ { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "James", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2007, "venue": "Proc of IWPT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Titov and James Henderson. 2007. A latent variable model for generative dependency parsing. In Proc of IWPT.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proc of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proc of NAACL.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Integrated morphological and syntactic disambiguation for Modern Hebrew", "authors": [ { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2006, "venue": "Proc of ACL Student Research Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reut Tsarfaty. 2006. Integrated morphological and syntactic disambiguation for Modern Hebrew. In Proc of ACL Student Research Workshop.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Hungarian dependency treebank", "authors": [ { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "D\u00f3ra", "middle": [], "last": "Szauter", "suffix": "" }, { "first": "Attila", "middle": [], "last": "Alm\u00e1si", "suffix": "" }, { "first": "Gy\u00f6rgy", "middle": [], "last": "M\u00f3ra", "suffix": "" }, { "first": "Zolt\u00e1n", "middle": [], "last": "Alexin", "suffix": "" }, { "first": "J\u00e1nos", "middle": [], "last": "Csirik", "suffix": "" } ], "year": 2010, "venue": "Proc of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Veronika Vincze, D\u00f3ra Szauter, Attila Alm\u00e1si, Gy\u00f6rgy M\u00f3ra, Zolt\u00e1n Alexin, and J\u00e1nos Csirik. 2010. Hungarian dependency treebank.
In Proc of LREC.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Structured training for neural network transition-based parsing", "authors": [ { "first": "David", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2015, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Weiss, Christopher Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proc of ACL.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Joint word segmentation and POS tagging using a single perceptron", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2008a. Joint word segmentation and POS tagging using a single perceptron. In Proc of ACL.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2008, "venue": "Proc of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yue Zhang and Stephen Clark. 2008b. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proc of EMNLP.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Chinese parsing exploiting characters", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2013. Chinese parsing exploiting characters. In Proc of ACL.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "A Neural Probabilistic Structured-Prediction Model for Transition-Based Dependency Parsing", "authors": [ { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shujian", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2015, "venue": "Proc of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A Neural Probabilistic Structured-Prediction Model for Transition-Based Dependency Parsing. In Proc of ACL.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Character-based word embedding of the word party.
This representation is used for both in-vocabulary and out-of-vocabulary words.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "A visualization of a sample of the character-based bidirectional LSTM's learned representations (Chars). Clear clusters of past tense verbs, gerunds, and other syntactic classes are visible. The colors in the figure represent the most common POS tag for each word.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "On the x-axis is the OOV rate in development data, by treebank; on the y-axis is the difference in development-set LAS between the Chars model as described in \u00a73.2 and one in which all OOV words are given a single representation. Character-based word representations of 30 random words from the English development set (Chars). Dots in red represent past tense verbs; dots in orange represent gerund verbs; dots in black represent present tense verbs; dots in blue represent adjectives; dots in green represent adverbs; dots in yellow represent singular nouns; dots in brown represent plural nouns. The visualization was produced using t-SNE; see http://lvdmaaten.github.io/tsne/.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "
UAS | LAS
Language | Words | Chars | Words+POS | Chars+POS | Language | Words | Chars | Words+POS | Chars+POS
Arabic | 85.21 | 86.08 | 86.05 | 86.07 | Arabic | 82.05 | 83.41 | 83.46 | 83.40
Basque | 77.06 | 85.19 | 82.92 | 85.22 | Basque | 66.61 | 79.09 | 73.56 | 78.61
French | 83.74 | 85.34 | 86.15 | 85.78 | French | 79.22 | 80.92 | 82.03 | 81.08
German | 82.75 | 86.80 | 87.33 | 87.26 | German | 79.15 | 84.04 | 84.62 | 84.49
Hebrew | 77.62 | 79.93 | 80.68 | 80.17 | Hebrew | 68.71 | 71.26 | 72.70 | 72.26
Hungarian | 72.78 | 80.35 | 78.64 | 80.92 | Hungarian | 61.93 | 75.19 | 69.31 | 76.34
Korean | 78.70 | 88.39 | 86.85 | 88.30 | Korean | 67.50 | 86.27 | 83.37 | 86.21
Polish | 72.01 | 83.44 | 87.06 | 85.97 | Polish | 63.96 | 76.84 | 79.83 | 78.24
Swedish | 76.39 | 79.18 | 83.43 | 83.24 | Swedish | 67.69 | 71.19 | 76.40 | 74.47
Turkish | 71.70 | 76.32 | 75.32 | 76.34 | Turkish | 54.55 | 64.34 | 61.22 | 62.28
Chinese | 79.01 | 79.94 | 85.96 | 85.30 | Chinese | 74.79 | 76.29 | 84.40 | 83.72
English | 91.16 | 91.47 | 92.57 | 91.63 | English | 88.42 | 88.94 | 90.31 | 89.44
Average | 79.01 | 83.54 | 84.41 | 84.68 | Average | 71.22 | 78.15 | 78.43 | 79.21
", "num": null, "type_str": "table", "html": null, "text": "Unlabeled attachment scores (left) and labeled attachment scores (right) on the development sets (Turkish lacks a standard development set). In each table, the first two columns show the results of the parser with word lookup (Words) vs. character-based (Chars) representations. The last two columns add POS tags. Boldface shows the better result comparing Words vs. Chars and comparing Words + POS vs. Chars + POS." }, "TABREF2": { "content": "", "num": null, "type_str": "table", "html": null, "text": "Unlabeled attachment scores (left) and labeled attachment scores (right) on the test sets. In each table, the first two columns show the results of the parser with word lookup (Words) vs. character-based (Chars) representations. The last two columns add POS tags. Boldface shows the better result comparing Words vs. Chars and comparing Words + POS vs. Chars + POS." } } } }