{ "paper_id": "Q15-1026", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:08:03.391560Z" }, "title": "A Graph-based Lattice Dependency Parser for Joint Morphological Segmentation and Syntactic Analysis", "authors": [ { "first": "Wolfgang", "middle": [], "last": "Seeker", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": {} }, "email": "seeker@ims.uni-stuttgart.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Space-delimited words in Turkish and Hebrew text can be further segmented into meaningful units, but syntactic and semantic context is necessary to predict segmentation. At the same time, predicting correct syntactic structures relies on correct segmentation. We present a graph-based lattice dependency parser that operates on morphological lattices to represent different segmentations and morphological analyses for a given input sentence. The lattice parser predicts a dependency tree over a path in the lattice and thus solves the joint task of segmentation, morphological analysis, and syntactic parsing. We conduct experiments on the Turkish and the Hebrew treebank and show that the joint model outperforms three state-of-the-art pipeline systems on both data sets. Our work corroborates findings from constituency lattice parsing for Hebrew and presents the first results for full lattice parsing on Turkish.", "pdf_parse": { "paper_id": "Q15-1026", "_pdf_hash": "", "abstract": [ { "text": "Space-delimited words in Turkish and Hebrew text can be further segmented into meaningful units, but syntactic and semantic context is necessary to predict segmentation. At the same time, predicting correct syntactic structures relies on correct segmentation. We present a graph-based lattice dependency parser that operates on morphological lattices to represent different segmentations and morphological analyses for a given input sentence. The lattice parser predicts a dependency tree over a path in the lattice and thus solves the joint task of segmentation, morphological analysis, and syntactic parsing. We conduct experiments on the Turkish and the Hebrew treebank and show that the joint model outperforms three state-of-the-art pipeline systems on both data sets. Our work corroborates findings from constituency lattice parsing for Hebrew and presents the first results for full lattice parsing on Turkish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Linguistic theory has provided examples from many different languages in which grammatical information is expressed via case marking, morphological agreement, or clitics. In these languages, configurational information is less important than in English since the words are overtly marked for their syntactic relations to each other. Such morphologically rich languages pose many new challenges to today's natural language processing technology, which has often been developed for English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One of the first challenges is the question on how to represent morphologically rich languages and what are the basic units of analysis (Tsarfaty et al., 2010) . The Turkish treebank (Oflazer et al., 2003) , for example, represents words as sequences of inflectional groups, semantically coherent groups of morphemes separated by derivational boundaries. 
The treebank for Modern Hebrew (Sima'an et al., 2001 ) chooses morphemes as the basic unit of representation. A space-delimited word in the treebank can consist of several morphemes that may belong to independent syntactic contexts.", "cite_spans": [ { "start": 136, "end": 159, "text": "(Tsarfaty et al., 2010)", "ref_id": "BIBREF52" }, { "start": 183, "end": 205, "text": "(Oflazer et al., 2003)", "ref_id": "BIBREF43" }, { "start": 386, "end": 407, "text": "(Sima'an et al., 2001", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Both Turkish and Hebrew show high amounts of ambiguity when it comes to the correct segmentation of words into inflectional groups and morphemes, respectively. Within a sentence, however, these ambiguities can often be resolved by the syntactic and semantic context in which these words appear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A standard (dependency) parsing system decides segmentation, morphological analysis (including POS), and syntax one after the other in a pipeline setup. While pipelines are fast and efficient, they cannot model interaction between these different levels of analysis, however. It has therefore been argued that joint modeling of these three tasks is more suitable to the problem (Tsarfaty, 2006) . In previous research, several transition-based parsers have been proposed to model POS/morphological tagging and parsing jointly (Hatori et al., 2011; Bohnet and Nivre, 2012; Bohnet et al., 2013) . Such parsing systems have been further extended to also solve the segmentation problem in Chinese (Hatori et al., 2012; Li and Zhou, 2012; Zhang et al., 2014) . Transition-based parsers are attractive since they do not rely on global optimization and thus deal well with the increased model complexity that comes with joint modeling. Nonetheless, graphbased models have been proposed as well, e.g. by Li et al. (2011) for joint POS tagging and dependency parsing. Their parsers model the joint problem directly at the cost of increased model complexity.", "cite_spans": [ { "start": 378, "end": 394, "text": "(Tsarfaty, 2006)", "ref_id": "BIBREF54" }, { "start": 526, "end": 547, "text": "(Hatori et al., 2011;", "ref_id": "BIBREF27" }, { "start": 548, "end": 571, "text": "Bohnet and Nivre, 2012;", "ref_id": "BIBREF2" }, { "start": 572, "end": 592, "text": "Bohnet et al., 2013)", "ref_id": "BIBREF3" }, { "start": 693, "end": 714, "text": "(Hatori et al., 2012;", "ref_id": "BIBREF28" }, { "start": 715, "end": 733, "text": "Li and Zhou, 2012;", "ref_id": "BIBREF30" }, { "start": 734, "end": 753, "text": "Zhang et al., 2014)", "ref_id": "BIBREF55" }, { "start": 996, "end": 1012, "text": "Li et al. (2011)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a graph-based dependency parser for lattice parsing that handles the increased complexity by applying dual decomposition. The parser operates on morphological lattices and predicts word segmentation, morphological analysis, and dependency syntax jointly. It decomposes the problem into several subproblems and uses dual decomposition to find a common solution (Koo et al., 2010; Martins et al., 2010) . The subproblems are defined such that they can be solved efficiently and agreement is found in an iterative fashion. 
Decomposing the problem thus keeps the complexity of the joint parser on a tractable level.", "cite_spans": [ { "start": 386, "end": 404, "text": "(Koo et al., 2010;", "ref_id": "BIBREF29" }, { "start": 405, "end": 426, "text": "Martins et al., 2010)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We test the parser on the Turkish and the Hebrew treebank. The segmentation problem in these languages can be tackled with the same approach even though their underlying linguistic motivation is quite different. In our experiments, the lattice dependency parser outperforms three state-of-the-art pipeline systems. Lattice parsing for Hebrew has been thoroughly investigated in constituency parsing (Cohen and Smith, 2007; Goldberg and Tsarfaty, 2008; Goldberg and Elhadad, 2013) , demonstrating the viability of joint modeling. To the best of our knowledge, our work is the first to apply full lattice parsing to the Turkish treebank.", "cite_spans": [ { "start": 399, "end": 422, "text": "(Cohen and Smith, 2007;", "ref_id": "BIBREF10" }, { "start": 423, "end": 451, "text": "Goldberg and Tsarfaty, 2008;", "ref_id": "BIBREF24" }, { "start": 452, "end": 479, "text": "Goldberg and Elhadad, 2013)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We introduce the segmentation problem in Turkish and Hebrew in Section 2 and present the lattice parser in Section 3. Sections 4 and 5 describe the experiments and their results and we discuss related work in Section 6. We conclude with Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A lot of morphosyntactic information is overtly marked on words in morphologically rich languages. It is also common to express syntactic information through derivation or composition. As a consequence these words, orthographically written together, actually have word-internal syntactic struc-tures. Moreover, word-external relations may depend on the word-internal structures, e.g., a word could be grammatically related to only parts of another word instead of the whole. For instance, in the Turkish sentence ekmek ald\u0131m, each word has two analyses. ekmek means 'bread' or the nominal 'planting' which is derived from the verb stem ek 'plant' with the nominalization suffix mek. ald\u0131m has the meaning 'I bought' which decomposes as al-d\u0131-m 'buy-Past-1sg'. It also means 'I was red', which is derived from the adjective al 'red', inflected for past tense, 1st person singular.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Segmentation in Turkish and Hebrew", "sec_num": "2" }, { "text": "Depending on the selected morphological analysis for each word, syntax and semantics of the sentence change. When the first analysis is selected for both words, the syntactic representation of the sentence is given in Figure 1 , which corresponds to the meaning 'I bought bread'. When the nominal 'planting' is selected for the first word, it is a grammatical sentence albeit with an implausable meaning. When the derivational analysis of the second word is selected, regardless of the morphological analysis of the first word, the sentence is ungrammatical due to subject-verb agreement failure. 
Although all morphological analyses for these two words are correct in isolation, when they occur in the same syntactic context only some combinations are grammatical.", "cite_spans": [], "ref_spans": [ { "start": 218, "end": 226, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Word Segmentation in Turkish and Hebrew", "sec_num": "2" }, { "text": "OBJ I bought bread. This small example demonstrates that the syntactic structure depends on the morphological disambiguation of the words. At the same time, it shows that syntax can help pick the right morphological analysis. For a joint system to decide the morphological and syntactic representation together, all possible analyses must be available to the system. The possible morphological analyses of a word can be efficiently represented in a lattice structure. The lattice representation of the sentence in Figure 1 is given in Figure 2 , with double circles denoting word bound-aries. A sentence lattice is the concatenation of its word lattices. A morphological analysis of a word is a full path from the initial state to the final state of its lattice. Labels on the transitions are the surface form and underlying morphological representation of segments. 1 Lattices also capture well the segmentation of words in Hebrew. Different from Turkish, Hebrew segments can be syntactic units like determiners, prepositions, or relativizers attached to stem segments. In an example given by Goldberg and Tsarfaty (2008) , the word hneim 'the pleasant/madepleasant' has three analyses corresponding to the lattice in Figure 3 . (Goldberg and Tsarfaty, 2008) .", "cite_spans": [ { "start": 1094, "end": 1122, "text": "Goldberg and Tsarfaty (2008)", "ref_id": "BIBREF24" }, { "start": 1230, "end": 1259, "text": "(Goldberg and Tsarfaty, 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 514, "end": 522, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 535, "end": 543, "text": "Figure 2", "ref_id": "FIGREF2" }, { "start": 1219, "end": 1227, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "ekmek ald\u0131m Noun+Nom Verb+Past+1sg", "sec_num": null }, { "text": "Both the Hebrew and the Turkish treebank annotate dependencies between units smaller than words. In the Turkish treebank, a space-delimited word is segmented into one or more segments depending on its morphological representation. The number of segments is determined by the number of derivations. If it was derived n times, it is represented as n+1 segments. The derivational boundaries are part of the morphological representation. In the Turkish dependency parsing literature (Eryigit et al., 2008 ; \u00c7 etinoglu and Kuhn, 2013) these segments 1 Surface forms on the transitions are given for convenience. In the Turkish treebank, only final segments have surface forms (of full words), the surface forms of non-final segments are represented as underscores.", "cite_spans": [ { "start": 479, "end": 500, "text": "(Eryigit et al., 2008", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "ekmek ald\u0131m Noun+Nom Verb+Past+1sg", "sec_num": null }, { "text": "are called inflectional groups (IGs). IGs consist of one or more inflectional morphemes. The head of a non-final IG is the IG to its right with a dependency relation DERIV. 
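To tie the lattice representation and the IG segmentation together, the following minimal sketch encodes a two-word lattice of the ekmek ald\u0131m kind as a plain list of transitions and enumerates its full paths, i.e. the candidate segmentations of the sentence. The state numbering and the tag strings are simplified stand-ins for illustration, not the treebank's actual morphological representation.

```python
# Hypothetical toy lattice: states 0 and 4 are the initial and final states, state 2 is
# the word boundary, and states 1 and 3 are word-internal states introduced by the
# derivational analyses. Surface/tag strings are simplified, made-up labels.
transitions = [
    (0, 2, 'ekmek/Noun+Nom'),                         # ekmek as 'bread' (one IG)
    (0, 1, 'ek/Verb'), (1, 2, 'mek/Noun+Inf+Nom'),    # ekmek as 'planting' (two IGs)
    (2, 4, 'aldim/Verb+Past+1sg'),                    # aldim as 'I bought' (one IG)
    (2, 3, 'al/Adj'), (3, 4, 'dim/Verb+Past+1sg'),    # aldim as 'I was red' (two IGs)
]

def full_paths(transitions, state, final):
    # every path from the initial to the final state is one candidate
    # segmentation + morphological analysis of the whole sentence
    if state == final:
        yield []
        return
    for src, tgt, token in transitions:
        if src == state:
            for rest in full_paths(transitions, tgt, final):
                yield [token] + rest

for path in full_paths(transitions, 0, 4):
    print(' -> '.join(path))   # prints the four competing analyses (2 x 2)
```

Each derivational analysis adds a word-internal state, so the number of IGs per word grows with the number of derivations, as described above.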
The head of a final IG could be any IG of another word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ekmek ald\u0131m Noun+Nom Verb+Past+1sg", "sec_num": null }, { "text": "The Hebrew treebank defines relations between morphemes (Sima'an et al., 2001) . Those morphemes correspond to what is usually considered a separate syntactic unit in English. In Hebrew script, word classes like prepositions and conjunctions are always written together with the following word. Contrary to Turkish, syntactic heads of both nonfinal and final segments can be internal or external to the same space-delimited word.", "cite_spans": [ { "start": 56, "end": 78, "text": "(Sima'an et al., 2001)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "ekmek ald\u0131m Noun+Nom Verb+Past+1sg", "sec_num": null }, { "text": "For convenience, we will use token to refer to the smallest unit of processing for the remainder of the paper. It corresponds to IGs in Turkish and morphemes in Hebrew. A transition in a morphological lattice therefore represents one token. We will use word to refer to space-delimited words. 2 In standard parsing, these two terms usually coincide with a token in a sentence being separated from the surrounding ones by space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ekmek ald\u0131m Noun+Nom Verb+Past+1sg", "sec_num": null }, { "text": "One can think of lattice parsing as two tasks that the parser solves simultaneously: the parser needs to find a path through the lattice and it needs to find a parse tree. Importantly, the parser solves this task under the condition that the parse tree and the path agree with each other, i.e. the tokens that the parse tree spans over must form the path through the lattice. Decomposing the problem in this way defines the three components for the parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Let x be an input lattice and T = {ROOT, t 1 , t 2 , . . . , t n } be the set of tokens in x. In what follows, we assume two different structures, lattices and dependency trees. Dependency trees are represented as directed acyclic trees with a special root node (ROOT), whereas lattices are directed acyclic graphs with one defined start state and one defined end state (see Figures 2 and 3 ). For dependency trees, we will use the terms node and arc to refer to the vertices and the edges between the vertices, respectively. Tokens are represented as nodes in the dependency tree. For lattices, we use the terms state and transition to refer to the vertices and their edges in the lattice. Tokens are represented as transitions between states in the lattice.", "cite_spans": [], "ref_spans": [ { "start": 375, "end": 390, "text": "Figures 2 and 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Find The Path. A token bigram in a lattice x is a pair of two transitions t, t , such that the target state of t in x coincides with the source state of t in x. A chain of overlapping bigrams that starts from the initial state and ends in the final state forms a path through the lattice. We represent the ROOT token as the first transition, i.e. 
a single transition that leaves the initial state of the lattice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Given a lattice x, we define the index set of token bigrams in the lattice to be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "S := { \u27e8t, t'\u27e9 | t, t' \u2208 T, target(x, t) = source(x, t') }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "For later, we furthermore define S_{|t} := { \u27e8k, t\u27e9 | \u27e8k, t\u27e9 \u2208 S, k \u2208 T } to be the set of bigrams that have t in the second position. A consecutive path through the lattice is defined as an indicator vector p := \u27e8p_s\u27e9_{s \u2208 S}, where p_s = 1 means that bigram s is part of the path and p_s = 0 otherwise. We define P as the set of all well-formed paths, i.e. all paths that lead from the initial to the final state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "We use a linear model that factors over bigrams. Given a scoring function f_P that assigns scores to paths, the path with the highest score can be found by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "p\u0302 = arg max_{p \u2208 P} f_P(p)   with   f_P(p) = \u2211_{s \u2208 S} p_s (w \u2022 \u03c6_SEG(s)),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "where \u03c6_SEG is the feature extraction function for token bigrams. The highest-scoring path through the lattice can be found with the Viterbi algorithm. We also use this bigram model later as a standalone disambiguator for morphological lattices to find the highest-scoring path in a lattice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Find The Tree. We define the index set of arcs in a dependency tree as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "A := { \u27e8h, d, l\u27e9 | h \u2208 T, d \u2208 T \u2212 {ROOT}, l \u2208 L, h \u2260 d }, with L being a set of dependency relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "A dependency tree is defined as an indicator vector y := \u27e8y_a\u27e9_{a \u2208 A}, where y_a = 1 means that arc a is in the parse and y_a = 0 otherwise. We define Y to be the set of all well-formed dependency trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "We follow Koo et al. (2010) and assume an arc-factored model (McDonald et al., 2005) to find the highest-scoring parse. Given a scoring function f_T that assigns scores to parses, the problem of finding the highest-scoring parse is defined as", "cite_spans": [ { "start": 10, "end": 27, "text": "Koo et al. (2010)", "ref_id": "BIBREF29" }, { "start": 60, "end": 83, "text": "(McDonald et al., 2005)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "y\u0302 = arg max_{y \u2208 Y} f_T(y)   with   f_T(y) = \u2211_{a \u2208 A} y_a (w \u2022 \u03c6_ARC(a)),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "where \u03c6_ARC is the feature extraction function for single arcs and w is the weight vector. 
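To make the path subproblem concrete, here is a minimal sketch of Viterbi decoding over a bigram-factored lattice score. It is a sketch under assumptions: transitions are encoded as (token_id, source_state, target_state) triples, states are numbered left to right, and a bigram_score function stands in for w \u2022 \u03c6_SEG; none of this is the parser's actual implementation.

```python
def best_path(transitions, start_state, end_state, bigram_score):
    # Viterbi search for the highest-scoring path through a DAG lattice whose score
    # factors over token bigrams, i.e. pairs of adjacent transitions.
    # Assumption: states are numbered left to right, so sorting transitions by their
    # source state yields a topological order.
    best = {}                                       # token_id -> (score, back-pointer)
    for token_id, src, tgt in sorted(transitions, key=lambda t: t[1]):
        if src == start_state:                      # e.g. the ROOT transition
            best[token_id] = (0.0, None)
            continue
        candidates = [(best[prev_id][0] + bigram_score(prev_id, token_id), prev_id)
                      for prev_id, _, prev_tgt in transitions
                      if prev_tgt == src and prev_id in best]
        if candidates:
            best[token_id] = max(candidates)
    # pick the best transition entering the final state and follow the back-pointers
    last = max((tid for tid, _, tgt in transitions if tgt == end_state and tid in best),
               key=lambda tid: best[tid][0])
    path = []
    while last is not None:
        path.append(last)
        last = best[last][1]
    return list(reversed(path))
```

With scores computed from the trained weight vector, the same routine corresponds to the standalone bigram disambiguator used later in the experiments.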
We use the Chu-Liu-Edmonds algorithm (CLE) to find the highest-scoring parse (Chu and Liu, 1965; Edmonds, 1967) . Note that the algorithm includes all tokens of the lattice in the spanning tree, not just the tokens on some path. Chu-Liu-Edmonds furthermore enforces the tree properties of the output, i.e. acyclicity and exactly one head per token.", "cite_spans": [ { "start": 167, "end": 186, "text": "(Chu and Liu, 1965;", "ref_id": "BIBREF9" }, { "start": 187, "end": 201, "text": "Edmonds, 1967)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Agreement Constraints. To make the path and the parse tree agree with each other, we introduce an additional dependency relation NOREL into L. We define a token that is attached to ROOT with relation NOREL to be not on the path through the lattice. These arcs are not scored by the statistical model; they simply serve as a means for CLE to mark tokens as not being part of the path by attaching them to ROOT with this relation. The parser can predict the NOREL label only on arcs attached to ROOT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "We introduce two agreement constraints to ensure that (i) all tokens not on the path are marked with NOREL and must be attached to ROOT, and (ii) tokens cannot be dependents of tokens marked with NOREL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "The first constraint is implemented as an XOR (\u2295) factor (Martins et al., 2011b) over token bigrams and arcs. It states that for a token t, either one of its bigrams 3 or its NOREL-arc must be active. There is one such constraint for each token in the lattice:", "cite_spans": [ { "start": 57, "end": 80, "text": "(Martins et al., 2011b)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "\u2a01_{s \u2208 S_{|t}} p_s \u2295 y_{\u27e8ROOT, t, NOREL\u27e9}   for all t \u2208 T.   (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "The second constraint ensures that a token that is part of the path will not be attached to a token that is not. It thus guarantees the coherence of the dependency tree over the path through the lattice. It is implemented as an implication (\u21d2) factor (Martins et al., 2015) . It states that an active NOREL arc for a token h implies an inactive arc for all arcs having h as head. There is one such constraint for each possible arc in the parse:", "cite_spans": [ { "start": 252, "end": 274, "text": "(Martins et al., 2015)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "y_{\u27e8ROOT, h, NOREL\u27e9} \u21d2 \u00ac y_{\u27e8h, d, l\u27e9}   for all \u27e8h, d, l\u27e9 \u2208 A with h \u2260 ROOT and l \u2260 NOREL.   (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Deciding on a path through the lattice partitions the tokens into two groups: the ones on the path and the ones that are not. By means of the NOREL label, the CLE is also able to partition the tokens into two groups: the ROOT-NOREL tokens and the proper dependency tree tokens. The two agreement constraints then make sure that the two partitionings agree with each other. The first constraint explicitly links the two partitionings by requiring each token to either belong to the path or to the ROOT-NOREL tokens. 
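Both constraints can be read as simple checks on a candidate pair of indicator vectors. The sketch below is purely illustrative: the parser enforces the constraints as factors inside the decoder rather than verifying them after the fact, and the dict-based encoding of p and y is an assumption.

```python
ROOT, NOREL = 'ROOT', 'NOREL'

def satisfies_agreement(tokens, p, y):
    # p maps bigrams (prev_token, token) to 0/1, y maps arcs (head, dep, label) to 0/1;
    # tokens are the lattice tokens excluding ROOT (a simplification of this sketch).
    # Constraint (1): for every token t, exactly one of its incoming-path bigrams
    # and its (ROOT, t, NOREL) arc is active (an XOR factor per token).
    for t in tokens:
        active = sum(v for (_, tok), v in p.items() if tok == t)
        active += y.get((ROOT, t, NOREL), 0)
        if active != 1:
            return False
    # Constraint (2): a token marked as off-path (attached to ROOT with NOREL)
    # must not serve as the head of any other arc.
    for (h, d, label), v in y.items():
        if v and label != NOREL and y.get((ROOT, h, NOREL), 0):
            return False
    return True
```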
The second constraint ensures that the partitioning by the CLE is consistent, i.e. tokens attached to ROOT with NOREL cannot mix with the other tokens in the tree structure. Before the parser outputs the parse, the tokens that do not belong to the path/tree are discarded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "The objective function of the lattice parser is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "arg max_{y \u2208 Y, p \u2208 P} f_T(y) + f_P(p),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "subject to the two agreement constraints in Equations (1) and (2). We use Alternating Directions Dual Decomposition or AD 3 (Martins et al., 2011a) 4 to find the optimal solution to this constrained optimization problem. CLE can be implemented such that its worst-case complexity is O(T\u00b2), while the Viterbi algorithm needed to find the path has worst-case complexity O(QT\u00b2), where Q is the number of states in the lattice. Instead of combining these two problems directly, which would multiply their complexities, AD 3 combines them additively, such that the complexity of the parser is O(k(T\u00b2 + QT\u00b2)), with k being the number of iterations that AD 3 is run.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Second-order Parsing. To facilitate second-order features, we use grandparent-sibling head automata as proposed in Koo et al. (2010) , which we extend to include dependency relations. The head automata allow the parser to model consecutive sibling and grandparent relations. The architecture of the parser does not need to be changed at all to include the second-order factors. The head automata are simply another component. They compute solutions over the same set of arc indicator variables as the CLE, and AD 3 thus ensures that the output of the two algorithms agrees on the tree structure (Koo et al., 2010) . The second-order factors dominate the complexity of the entire parser, since solving the head automata is of complexity O(T\u2074L).", "cite_spans": [ { "start": 115, "end": 132, "text": "Koo et al. (2010)", "ref_id": "BIBREF29" }, { "start": 594, "end": 612, "text": "(Koo et al., 2010)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Pruning. We use rule-based and heuristics-based pruning to reduce the search space of the parser. Arcs between tokens that lie on competing paths through the lattice are cut away, as these tokens can never be in a syntactic relation. For the Turkish treebank, we introduce an additional rule based on the annotation scheme of the treebank. In the treebank, the IGs of a word form a chain, with each IG having its head immediately to the right and only the last IG choosing its head freely. For the non-final IGs, we therefore restrict the head choice to the IGs that can immediately follow them in the lattice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "To further restrict the number of candidate heads, we train a simple pairwise classifier that predicts the 10 best heads for each token. It uses the first-order features of the parser's feature model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Feature Model. 
The parser extracts features for bigrams (path), arcs (first-order), consecutive siblings, and grandparent relations (both second order). It uses standard features like word form, lemma, POS, morphological features, head direction, and combinations thereof.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Context features are more difficult in lattice parsing than in standard parsing as the left and right context of a token is not specified before parsing. We first extracted context features from all tokens that can follow or precede a token in the lattice. This led to overfitting effects as the model was learning specific lattice patterns. We therefore use latent left and right context and extract features from only one of the left/right neighbor tokens. The latent context is the left/right context token with the highest score from the path features (raw bigram scores, they are not changed by AD 3 ). The parser extracts context from one token in each direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Distance features are also more difficult in lattices since the linear distance between two tokens depends on the actual path chosen by the parser. We define distance simply as the length of the shortest path between two tokens in the lattice, but this distance may not coincide with the actual path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Context features and distance features show that lattice dependency parsing poses interesting new challenges to feature design. Using latent context features is one way of handling uncertain context, compare also the delayed features in Hatori et al. (2011) . A thorough investigation of different options is needed here.", "cite_spans": [ { "start": 237, "end": 257, "text": "Hatori et al. (2011)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "Learning. We train a discriminative linear model using passive-aggressive online learning (Crammer et al., 2003) with cost-augmented inference (Taskar et al., 2005) and parameter averaging (Freund and Schapire, 1999) . We use Hamming loss over the arcs of the parse tree excluding NOREL arcs. The model trains one parameter vector that includes features from the tree and from the path.", "cite_spans": [ { "start": 90, "end": 112, "text": "(Crammer et al., 2003)", "ref_id": "BIBREF11" }, { "start": 143, "end": 164, "text": "(Taskar et al., 2005)", "ref_id": "BIBREF50" }, { "start": 189, "end": 216, "text": "(Freund and Schapire, 1999)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "The maximum number of iterations of AD 3 is set to 1000 during training and testing. The algorithm sometimes outputs fractional solutions. During training, the model is updated with these fractional solutions, weighting the features and the loss accordingly. During testing, fractional solutions are projected to an integer solution by first running the best-path algorithm with the path posteriors output by AD 3 and afterwards running CLE on the selected path weighted by the arc posteriors (Martins et al., 2009) . 
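A sketch of this projection step, with the two exact decoders passed in as functions; the interfaces and the dict-based posteriors are assumptions for illustration, not the actual AD 3 output format.

```python
def round_fractional_solution(path_posteriors, arc_posteriors, viterbi, cle):
    # Project a fractional solution to an integer one, following the two steps
    # described above. path_posteriors maps token bigrams to posterior values and
    # arc_posteriors maps arcs (head, dep, label) to posterior values; viterbi and
    # cle stand in for the two exact decoders (their interfaces are assumptions).
    path = viterbi(path_posteriors)                  # 1) best path under the path posteriors
    on_path = {tok for bigram in path for tok in bigram} | {'ROOT'}
    restricted = {arc: score for arc, score in arc_posteriors.items()
                  if arc[0] in on_path and arc[1] in on_path}
    tree = cle(restricted)                           # 2) CLE over the selected path,
    return path, tree                                #    weighted by the arc posteriors
```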
In the experiments, fractional solutions occur in about 9% of the sentences in the Turkish development set during testing.", "cite_spans": [ { "start": 493, "end": 515, "text": "(Martins et al., 2009)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Lattice Parsing", "sec_num": "3" }, { "text": "The training set for Turkish is the 5,635 sentences of the METU-Sabanc\u0131 Turkish Treebank (Oflazer et al., 2003) . The 300 sentences of the ITU validation set (Eryigit, 2012) are used for testing. As there is no separate development set, we split the training set into 10 parts and used 2 of them as development data. All models run on this development set are trained on the remaining 8 parts. We also report results from 10-fold crossvalidation on the full training set (10cv).", "cite_spans": [ { "start": 89, "end": 111, "text": "(Oflazer et al., 2003)", "ref_id": "BIBREF43" }, { "start": 158, "end": 173, "text": "(Eryigit, 2012)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "The Turkish Data", "sec_num": "4.1" }, { "text": "We use the detached version of the Turkish treebank (Eryigit et al., 2011) where multiword expressions are represented as separate tokens. The training set of this version contains 49 sentences with loops. We manually corrected these sentences and use the corrected version in our experiments. 5 The Turkish raw input is first passed through a morphological analyzer (Oflazer, 1994) in order to create morphological lattices as input to the parser. Gold analyses are added to the training lattices if the morphological analyzer failed to output the correct analyses.", "cite_spans": [ { "start": 52, "end": 74, "text": "(Eryigit et al., 2011)", "ref_id": "BIBREF17" }, { "start": 294, "end": 295, "text": "5", "ref_id": null }, { "start": 367, "end": 382, "text": "(Oflazer, 1994)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "The Turkish Data", "sec_num": "4.1" }, { "text": "For the pipeline systems, the input lattices are disambiguated by running a morphological disambiguator. We train our own disambiguator using the bigram model from the parser and find the best path through the lattice with the Viterbi algorithm. The disambiguator uses the same bigram features as the lattice parser. The morphological disambiguator is trained on the Turkish treebank as in \u00c7 etinoglu (2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Turkish Data", "sec_num": "4.1" }, { "text": "The data for Hebrew comes from the SPMRL Shared Task 2014 (Seddah et al., 2014) , which is based on the treebank for Modern Hebrew (Sima'an et al., 2001 ). It provides lattices and predisambiguated input files. The training and development lattices contained a number of circular structures due to self-loops in some states. We automatically removed the transitions causing these cycles.", "cite_spans": [ { "start": 58, "end": 79, "text": "(Seddah et al., 2014)", "ref_id": "BIBREF48" }, { "start": 131, "end": 152, "text": "(Sima'an et al., 2001", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "The Hebrew Data", "sec_num": "4.2" }, { "text": "Input lattices for training were prepared as for Turkish by adding the gold standard paths if necessary. Compared to the Turkish data, the Hebrew lattices are so large that training times for the lattice parser became unacceptable. We therefore used our morphological disambiguator to predict the 10 best paths for each lattice. 
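The pruning itself keeps only the transitions that occur on one of these paths. A minimal sketch, assuming each path is given as a list of transition triples and that lattice states are numbered left to right; the path-counting helper makes it easy to check how many full analyses a pruned lattice still licenses.

```python
from collections import defaultdict

def prune_to_paths(transitions, kept_paths):
    # keep only the transitions that occur on at least one of the given paths
    keep = {t for path in kept_paths for t in path}
    return [t for t in transitions if t in keep]

def count_full_paths(transitions, start_state, end_state):
    # count the distinct full analyses a (pruned) lattice contains; transitions are
    # (token_id, src, tgt) and sorting by source state is assumed to be topological
    counts = defaultdict(int)
    counts[start_state] = 1
    for _, src, tgt in sorted(transitions, key=lambda t: t[1]):
        counts[tgt] += counts[src]
    return counts[end_state]
```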
All transitions in the lattice that were not part of one of these 10 paths were discarded. Note that the number of actual paths in these pruned lattices is much higher than 10, since the paths converge after each word. All experiments with the joint model for Hebrew are conducted on the pruned lattices. As for Turkish we preprocess the input lattices for all baselines with our own morphological disambiguator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Hebrew Data", "sec_num": "4.2" }, { "text": "We compare the lattice parser (JOINT for Turkish, JOINT10 for Hebrew) to three baselines: MATE, TURBO, and PIPELINE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "The first two baseline systems are off-the-shelf dependency parsers that currently represent the state-of-the-art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "Mate parser 6 (Bohnet, 2009; Bohnet, 2010) is a graph-based dependency parser that uses Carreras' decoder (Carreras, 2007) and approximate search (McDonald and Pereira, 2006) to produce non-projective dependency structures. Tur-boParser 7 ) is a graph-based parser that uses a dual decomposition approach and outputs non-projective structures natively. The third baseline system runs the lattice parser on a predisambiguated lattice, i.e. in a pipeline setup.", "cite_spans": [ { "start": 106, "end": 122, "text": "(Carreras, 2007)", "ref_id": "BIBREF6" }, { "start": 146, "end": 174, "text": "(McDonald and Pereira, 2006)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "All three baselines are pipeline setups and use the same disambiguator to predict a path through the lattice. The bigram features in the disambiguator are the same as in the joint model. There is thus no difference between the lattice parser and the baselines with respect to the features that are available during segmentation. As opposed to lattice parsing, baseline systems are trained on the gold standard segmentation (and thus gold morphological analyses) in the training data, since automatically predicted paths would not guarantee to be compatible with the gold dependency structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "The purpose of the first two baselines is to compare the joint parser to the current state-of-the-art. However, the feature sets are different between the joint parser and the off-the-shelf baselines. A difference in performance between the joint parser and the first two baseline systems may thus simply be caused by a difference in the feature set. The third baseline eliminates this difference in the feature sets since it is the actual lattice parser that is run on a disambiguated lattice. Because the morphological disambiguator for the PIPELINE baseline is using the same feature set as the lattice parser (the bigram model), the fact that the joint parser is trained and tested on full lattices is the only difference between these two systems. 
The PIPELINE baseline thus allows us to test directly the effect of joint decoding compared to a pipeline setup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "Standard labeled and unlabeled attachment scores are not applicable when parsing with uncertain segmentation since the number of tokens in the output of the parser may not coincide with the number of tokens in the gold standard. Previous work therefore suggests alternative methods for evaluation, e.g. by means of precision, recall, and f-score over tokens, see e.g. Tsarfaty (2006) or Cohen and Smith (2007) .", "cite_spans": [ { "start": 368, "end": 383, "text": "Tsarfaty (2006)", "ref_id": "BIBREF54" }, { "start": 387, "end": 409, "text": "Cohen and Smith (2007)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "The uncertainty of segmentation furthermore makes it very hard to evaluate the other levels of analysis independently of the segmentation. In order to decide whether the morphological analysis of a token (or its syntactic attachment) is correct, one always needs to find out first to which token in the gold standard it corresponds. By establishing this correspondence, the segmentation is already being evaluated. Evaluating syntax isolated from the other levels of analysis is therefore not possible in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "Hatori et al. (2012) count a dependency relation correct only when both the head and the dependent have the correct morphological analysis (here POS) and segmentation. Goldberg (2011, page 53) proposes a similar approach, but only requires surface forms to match between gold standard and prediction. These metrics compute precision and recall over tokens. Eryigit et al. (2008) and Eryigit (2012) define an accuracy (IGeval) for Turkish parsing by taking advantage of the annotation scheme in the Turkish treebank: A non-final IG in the Turkish treebank always has its head immediately to the right, always with the same label, which makes it possible to ignore the inner dependency relations, i.e. the segmentation, of a dependent word. The metric therefore only needs to check for each word whether the head of the last IG is attached to the correct IG in another word. The metric includes a back-off strategy in case the head word's segmentation is wrong. A dependency arc is then counted as correct if it attaches to an IG in the correct word and the POS tag of the head IG is the same as in the gold standard.", "cite_spans": [ { "start": 357, "end": 378, "text": "Eryigit et al. (2008)", "ref_id": "BIBREF16" }, { "start": 383, "end": 397, "text": "Eryigit (2012)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "Parsing Evaluation. We follow Hatori et al. (2012) and use a strict definition of precision and recall (PREC, REC, F1) over tokens to evaluate the full task. We first align the tokens of a word in the parser output with the tokens of the corresponding word in the gold standard using the Needleman-Wunsch algorithm (Needleman and Wunsch, 1970) , which we modify so it does not allow for mismatches. A token in the parser output that is not in the gold standard is thus paired with a gap and vice versa. Two tokens must have the same morphological analysis in order to match. 
8 A true positive is defined as a pair of matching tokens whose heads are also aligned and match. For labeled scores, the dependency relations must match as well. Precision is defined as the number of true positives over the number of tokens in the prediction, and recall is defined as the number of true positives over the number of tokens in the gold standard. F-score is the harmonic mean of precision and recall.", "cite_spans": [ { "start": 30, "end": 50, "text": "Hatori et al. (2012)", "ref_id": "BIBREF28" }, { "start": 315, "end": 343, "text": "(Needleman and Wunsch, 1970)", "ref_id": "BIBREF42" }, { "start": 575, "end": 576, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "This metric is very strict and requires all levels of analysis to be correct. In order to evaluate the syntax as independently as possible, we furthermore report IGeval for Turkish, with and without the aforementioned backoff strategy (IGeval and IGeval STRICT). For Hebrew, we report a version of precision and recall as defined above that only requires the surface forms of the tokens to match. 9 This metric is almost the one proposed in Goldberg (2011) . All reported evaluation metrics ignore punctuation.", "cite_spans": [ { "start": 444, "end": 459, "text": "Goldberg (2011)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "We do not use TedEval as defined in Tsarfaty et al. (2012) even though it has been used previously to evaluate dependency parsing with uncertain segmentation (Seddah et al., 2013; Zhang et al., 2015) . The reason is that it is not an inherently dependency-based framework and the conversion from constituency structures to dependency structures interferes with the metric. 10 The metric proposed in Goldberg (2011) implements the same ideas without edit distance and is defined directly for dependencies.", "cite_spans": [ { "start": 36, "end": 58, "text": "Tsarfaty et al. (2012)", "ref_id": "BIBREF53" }, { "start": 158, "end": 179, "text": "(Seddah et al., 2013;", "ref_id": "BIBREF47" }, { "start": 180, "end": 199, "text": "Zhang et al., 2015)", "ref_id": "BIBREF56" }, { "start": 373, "end": 375, "text": "10", "ref_id": null }, { "start": 399, "end": 414, "text": "Goldberg (2011)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "8 The method does not create cross, many-to-one, or one-to-many alignments, which can be important because in very rare cases the same token occurs twice in one word. 9 The metric would not work for Turkish, as the surface forms of non-final IGs are all represented as underscores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "10 As an experiment, we took a Turkish treebank tree and created artificial parses by attaching one token to a different head each time. All other tokens remained attached to their correct head, and segmentation is kept gold. This gave us 11 parses that contained exactly one attachment error and one parse identical with the gold standard. Running TedEval on each of the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "Segmentation Evaluation. We use the same token-based precision and recall to measure the quality of segmentation and morphological analysis without syntax. For a token to be correct, it has to have the same morphological analysis as the token in the gold standard to which it is aligned. 
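A simplified sketch of these token-based scores: tokens are aligned with a dynamic program that allows only matches and gaps, which is one way to realize the modified Needleman-Wunsch alignment described above (the authors' implementation may differ), and precision, recall, and F1 are then computed from the aligned, identical tokens.

```python
def align(pred, gold):
    # indices of aligned identical tokens (match-or-gap dynamic program, LCS-style)
    n, m = len(pred), len(gold)
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if pred[i] == gold[j]:
                table[i + 1][j + 1] = table[i][j] + 1
            else:
                table[i + 1][j + 1] = max(table[i][j + 1], table[i + 1][j])
    pairs, i, j = [], n, m
    while i > 0 and j > 0:                 # trace back the alignment
        if pred[i - 1] == gold[j - 1] and table[i][j] == table[i - 1][j - 1] + 1:
            pairs.append((i - 1, j - 1))
            i -= 1
            j -= 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

def precision_recall_f1(n_correct, n_predicted, n_gold):
    p = n_correct / n_predicted if n_predicted else 0.0
    r = n_correct / n_gold if n_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

For the parsing scores, an aligned pair counts as a true positive only if the heads of the two tokens are aligned to each other as well, and the labels must also match for the labeled variant.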
We furthermore report word accuracy (ACC w ), which is the percentage of words that received the correct segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4.4" }, { "text": "Segmentation and Morphology. Table 1 shows the quality of segmentation and morphological analysis. The baseline for Turkish is the Turkish morphological disambiguator by Sak et al. (2008) , trained on the Turkish treebank. For Hebrew, the baseline is the disambiguated lattices provided by the SPMRL 2014 Shared Task. 11 The bigram model is our own morphological disambiguator. The joint model is the full lattice parser, which has access to syntactic information.", "cite_spans": [ { "start": 170, "end": 187, "text": "Sak et al. (2008)", "ref_id": "BIBREF46" } ], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The results show that the bigram model is clearly outperforming the baselines for both languages. The feature model of the bigram model was developed on the Turkish development set, but the model also works well for Hebrew. Comparing the bigram model to the joint model shows that overall, the joint model performs better than the bigram model. However, the joint model mainly scores in recall rather than in precision, the bigram model is even ahead of the joint model in precision for Hebrew. The joint model outperforms the bigram model and the baseline also in word accuracy. The results demonstrate that syntactic information is relevant to resolve ambiguity in segmentation and morphology for Turkish and Hebrew. Turkish. Table 2 presents the results of the evaluation of the three baseline systems and the lattice parser on the Turkish data. The PIPELINE and the JOINT system give better results than the other two baselines across the board. This shows that the feature set of the lattice parser is better suited to the Turkish treebank than the feature set of Mate parser and Turbo parser. It is not a surprising result though, since the lattice parser was developed for Turkish whereas the other two parsers were developed for other treebanks.", "cite_spans": [], "ref_spans": [ { "start": 728, "end": 735, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The JOINT system outperforms the PIPELINE system with respect to the first three metrics. These metrics evaluate syntax, segmentation, and morphological analysis jointly. Higher scores here mean that these aspects in combination have become better. The differences between the PIPELINE and the JOINT model are consistently statistically significant with respect to recall, but only in some cases with re-spect to precision. The syntactic information that is available to the joint model thus seems to improve recall rather than precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The last two columns in Table 2 show an evaluation using IGeval. The IGeval metric is designed to evaluate the syntactic quality with less attention to morphological analysis and segmentation. Here, both PIPELINE and JOINT achieve very similar results and none of the differences is statistical significant. 
These results suggest that a good part of the improvements in the lattice parser occurs in the morphological analysis/segmentation, whereas the quality of syntactic annotation basically stays the same between the pipeline and the joint model.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Hebrew. The experimental results on the Hebrew data are shown in Table 3 . The three baselines perform very similarly. All three baseline systems are run on the output of the same disambiguator, which means that the feature models of the parsers seem to be equally well suited to the Hebrew treebank. The feature model of the lattice parser that is used in the PIPELINE baseline was not adapted to Hebrew in any way, but was used as it was developed for the Turkish data.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Compared to the three baselines, the joint model outperforms them for both labeled and unlabeled scores. As the only difference between PIPELINE and JOINT is the fact that the latter performs joint decoding, the results support the findings in constituency parsing by Tsarfaty (2006) , Cohen and Smith (2007) , and Goldberg and Tsarfaty (2008) , namely that joint decoding is a better model for Hebrew parsing. Judging from statistical significance, the JOINT model improves recall rather than precision, a picture that we found for Turkish as well. Table 3 : Statistically significant differences between the joint system and the pipeline system are marked with \u2020 (p < 0.01) and * (p < 0.05). Significance testing was performed using the Wilcoxon Signed Rank Test (not for F1).", "cite_spans": [ { "start": 268, "end": 283, "text": "Tsarfaty (2006)", "ref_id": "BIBREF54" }, { "start": 286, "end": 308, "text": "Cohen and Smith (2007)", "ref_id": "BIBREF10" }, { "start": 315, "end": 343, "text": "Goldberg and Tsarfaty (2008)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 550, "end": 557, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "As described in Section 4.4, we cannot evaluate the syntax entirely independently on Hebrew, but we can eliminate the morphological level. Table 4 shows the results of the evaluation when only syntax and surface forms are matched. The overall picture compared to the evaluation shown in Table 3 does not change, however. Also when disregarding the quality of morphology, the JOINT model outperforms the PIPELINE, notably with respect to recall.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 146, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 287, "end": 294, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Graph-based Parsing. Our basic architecture resembles the joint constituency parsing and POS tagging model by needs additional constraints to enforce agreement between the two tasks. Martins et al. (2011a) and Martins et al. (2015) show how such first-order logic constraints can be represented as subproblems in dual decomposition. Similar approaches, where such constraints are used to ensure certain properties in the output structures, have been used e.g. 
in semantic parsing (Das et al., 2012) , compressive summarization , and joint quotation attribution and coreference resolution (Almeida et al., 2014) . Parsers that use dual decomposition are proposed in Koo et al. (2010) and Martins et al. (2010) . From Koo et al. (2010) , we adopted the idea of using the Chu-Liu-Edmonds algorithm to ensure tree properties in the output as well as second-order parsing with head automata. Li et al. (2011) extend several higher-order variants of the Eisner decoder (Eisner, 1997) such that POS tags are predicted jointly with syntax. The complexity of their joint models increases by polynomials of the tag set size. Due to the dual decomposition approach, the complexity of our parser stays equal to the complexity of the most complex subproblem, which is the second-order head automata in our case.", "cite_spans": [ { "start": 183, "end": 205, "text": "Martins et al. (2011a)", "ref_id": "BIBREF35" }, { "start": 210, "end": 231, "text": "Martins et al. (2015)", "ref_id": "BIBREF39" }, { "start": 480, "end": 498, "text": "(Das et al., 2012)", "ref_id": "BIBREF12" }, { "start": 588, "end": 610, "text": "(Almeida et al., 2014)", "ref_id": "BIBREF1" }, { "start": 665, "end": 682, "text": "Koo et al. (2010)", "ref_id": "BIBREF29" }, { "start": 687, "end": 708, "text": "Martins et al. (2010)", "ref_id": "BIBREF34" }, { "start": 716, "end": 733, "text": "Koo et al. (2010)", "ref_id": "BIBREF29" }, { "start": 887, "end": 903, "text": "Li et al. (2011)", "ref_id": "BIBREF31" }, { "start": 963, "end": 977, "text": "(Eisner, 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Transition-based Parsing. Joint models in transition-based parsing usually introduce a variant of the shift transition that performs the additional task, e.g. it additionally predicts the POS tag and possibly morphological features of a token that is being shifted (Hatori et al., 2011; Bohnet and Nivre, 2012; Bohnet et al., 2013) . Optimization over the joint model is achieved by beam search. To also solve the word segmentation task, several models for Chinese were proposed that parse on the level of single characters, forming words from characters with a special append transition (Hatori et al., 2012; Li and Zhou, 2012) or predicting word internal structure along with syntax (Zhang et al., 2014) . To use such a transition-based system for the segmentation task in Turkish or Hebrew, the shift transition would have to be changed to do the opposite of the append transition in the Chinese parsers: segment an incoming token into several ones, for example based on the output of a morphological analyzer.", "cite_spans": [ { "start": 265, "end": 286, "text": "(Hatori et al., 2011;", "ref_id": "BIBREF27" }, { "start": 287, "end": 310, "text": "Bohnet and Nivre, 2012;", "ref_id": "BIBREF2" }, { "start": 311, "end": 331, "text": "Bohnet et al., 2013)", "ref_id": "BIBREF3" }, { "start": 588, "end": 609, "text": "(Hatori et al., 2012;", "ref_id": "BIBREF28" }, { "start": 610, "end": 628, "text": "Li and Zhou, 2012)", "ref_id": "BIBREF30" }, { "start": 685, "end": 705, "text": "(Zhang et al., 2014)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Easy-first Parsing. Ma et al. (2012) introduce a variant of the easy-first parser (Goldberg and Elhadad, 2010a ) that uses an additional operation to POS tag input tokens. 
The operations are ordered such that the parser can only introduce a dependency arc between two tokens that have received a POS tag already. Tratz (2013) presents a similar system for Arabic that defines several more operations to deal with segmentation ambiguity.", "cite_spans": [ { "start": 20, "end": 36, "text": "Ma et al. (2012)", "ref_id": "BIBREF32" }, { "start": 82, "end": 110, "text": "(Goldberg and Elhadad, 2010a", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Sampling-based Parsing. Zhang et al. (2015) present a joint model that relies on sampling and greedy hill-climbing for decoding, but allows for arbitrarily complex scoring functions thus opening access to global and cross-level features. Such features could be simulated in our model by adding additional factors in the form of soft constraints (constraints with output, see Martins et al. (2015) ), but this would introduce a considerable number of additional factors with a notable impact on performance.", "cite_spans": [ { "start": 24, "end": 43, "text": "Zhang et al. (2015)", "ref_id": "BIBREF56" }, { "start": 375, "end": 396, "text": "Martins et al. (2015)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Constituency Parsing. Joint models have also been investigated in constituency parsing, notably for Hebrew. Tsarfaty (2006) already discusses full joint models, but the first full parsers were presented in Cohen and Smith (2007), Goldberg and Tsarfaty (2008) , and later Goldberg and Elhadad (2013) . Green and Manning (2010) present a similar parser for Arabic. Among these, some authors emphasize the importance of including scores from the morphological model into the parsing model, whereas other models do not use them at all. In our parser, the model is trained jointly for both tasks without weighting the two tasks differently.", "cite_spans": [ { "start": 108, "end": 123, "text": "Tsarfaty (2006)", "ref_id": "BIBREF54" }, { "start": 230, "end": 258, "text": "Goldberg and Tsarfaty (2008)", "ref_id": "BIBREF24" }, { "start": 271, "end": 298, "text": "Goldberg and Elhadad (2013)", "ref_id": "BIBREF23" }, { "start": 301, "end": 325, "text": "Green and Manning (2010)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Parsing Hebrew and Turkish. Joint models for Hebrew parsing were mostly investigated for constituency parsing (see above). There has been some work specifically on Hebrew dependency parsing (Goldberg and Elhadad, 2009; Goldberg and Elhadad, 2010b; Goldberg, 2011) , but not in the context of joint models.", "cite_spans": [ { "start": 190, "end": 218, "text": "(Goldberg and Elhadad, 2009;", "ref_id": "BIBREF20" }, { "start": 219, "end": 247, "text": "Goldberg and Elhadad, 2010b;", "ref_id": "BIBREF22" }, { "start": 248, "end": 263, "text": "Goldberg, 2011)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Turkish dependency parsing was pioneered in Eryigit and Oflazer (2006) and Eryigit et al. (2008) . They compare parsing based on inflectional groups to word-based parsing and conclude that the former is more suitable for Turkish. \u00c7 etinoglu and Kuhn (2013) are first to discuss joint models for Turkish and present experiments for joint POS tagging and parsing, but use a pipeline to decide on segmentation and morphological features. 
To the best of our knowledge, there is currently no other work on full lattice parsing for Turkish.", "cite_spans": [ { "start": 44, "end": 70, "text": "Eryigit and Oflazer (2006)", "ref_id": "BIBREF15" }, { "start": 75, "end": 96, "text": "Eryigit et al. (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Morphologically rich languages pose many challenges to standard dependency parsing systems, one of them being that the number of tokens in the output is not always known beforehand. Solving this problem in a pipeline setup leads to efficient systems, but it systematically excludes interaction between the lexical, morphological, and syntactic levels of analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In this work, we have presented a graph-based lattice dependency parser that operates on morphological lattices and simultaneously predicts a dependency tree and a path through the lattice. We tested the joint model on the Turkish treebank and the treebank of Modern Hebrew and demonstrated that the joint model outperforms three state-of-the-art pipeline models. We presented the first results for full lattice parsing on the Turkish treebank. The results on the Hebrew treebank corroborate findings in constituency parsing (Cohen and Smith, 2007; Goldberg and Tsarfaty, 2008).", "cite_spans": [ { "start": 524, "end": 547, "text": "(Cohen and Smith, 2007;", "ref_id": "BIBREF10" }, { "start": 548, "end": 576, "text": "Goldberg and Tsarfaty, 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Transactions of the Association for Computational Linguistics, vol. 3, pp. 359-373, 2015. Action Editor: Joakim Nivre. Submission batch: 4/2015; Published 6/2015. \u00a9 2015 Association for Computational Linguistics. Distributed under a CC-BY-NC-SA 4.0 license.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is a technical definition of word and has no ambition to make claims about the linguistic definition of a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The lattice ensures that at most one of the bigrams with the same token in second position can be part of any path.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ark.cs.cmu.edu/AD3/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The corrected version is available on the second author's webpage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://code.google.com/p/mate-tools 7 http://www.ark.cs.cmu.edu/TurboParser/, version 2.0.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "incorrect parses gave us 5 different scores. The differences are caused by the transformation of dependency trees to constituency trees, because the constituency trees have different edit distances to the gold standard. Consequently, some attachment errors of the dependency parser are penalized more heavily than others in an unpredictable way. 11 A description of how these lattices are produced is given in Seddah et al. 
(2013, page 159)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank our anonymous reviewers for their helpful comments. We also thank Anders Bj\u00f6rkelund for many useful discussions. This work was funded by the Deutsche Forschungsgemeinschaft (DFG) via SFB 732, projects D2 and D8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fast and Robust Compressive Summarization with Dual Decomposition and Multi-Task Learning", "authors": [ { "first": "Miguel", "middle": [], "last": "Almeida", "suffix": "" }, { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "196--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miguel Almeida and Andre Martins. 2013. Fast and Robust Compressive Summarization with Dual De- composition and Multi-Task Learning. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 196-206, Sofia, Bulgaria, August. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A Joint Model for Quotation Attribution and Coreference Resolution", "authors": [ { "first": "S", "middle": [ "C" ], "last": "Mariana", "suffix": "" }, { "first": "Miguel", "middle": [ "B" ], "last": "Almeida", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Almeida", "suffix": "" }, { "first": "", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mariana S. C. Almeida, Miguel B. Almeida, and Andr\u00e9 F. T. Martins. 2014. A Joint Model for Quotation At- tribution and Coreference Resolution. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 39- 48, Gothenburg, Sweden, April. Association for Com- putational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Transition-Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1455--1465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet and Joakim Nivre. 2012. A Transition- Based System for Joint Part-of-Speech Tagging and Labeled Non-Projective Dependency Parsing. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 1455- 1465, Jeju, South Korea. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Joint Morphological and Syntactic Analysis for Richly Inflected Languages", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Boguslavsky", "suffix": "" }, { "first": "Richrd", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "415--428", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet, Joakim Nivre, Igor Boguslavsky, Richrd Farkas, Filip Ginter, and Jan Haji. 2013. Joint Mor- phological and Syntactic Analysis for Richly Inflected Languages. Transactions of the Association for Com- putational Linguistics, 1:415-428.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Efficient Parsing of Syntactic and Semantic Dependency Structures", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet. 2009. Efficient Parsing of Syntactic and Semantic Dependency Structures. In Proceedings of the Thirteenth Conference on Computational Natu- ral Language Learning (CoNLL 2009): Shared Task, pages 67-72, Boulder, Colorado, June. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Very high accuracy and fast dependency parsing is not a contradiction", "authors": [ { "first": "Bernd", "middle": [], "last": "Bohnet", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "89--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernd Bohnet. 2010. Very high accuracy and fast depen- dency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 89-97, Beijing, China. International Committee on Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Experiments with a Higher-Order Projective Dependency Parser", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "957--961", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Carreras. 2007. Experiments with a Higher- Order Projective Dependency Parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 957-961, Prague, Czech Republic, June. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Towards Joint Morphological Analysis and Dependency Parsing of Turkish", "authors": [ { "first": "\u00c7", "middle": [], "last": "Ozlem", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Etinoglu", "suffix": "" }, { "first": "", "middle": [], "last": "Kuhn", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Second International Conference on Dependency Linguistics", "volume": "", "issue": "", "pages": "23--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozlem \u00c7 etinoglu and Jonas Kuhn. 2013. Towards Joint Morphological Analysis and Dependency Pars- ing of Turkish. In Proceedings of the Second In- ternational Conference on Dependency Linguistics (DepLing 2013), pages 23-32, Prague, Czech Repub- lic, August. Charles University in Prague, Matfyz- press, Prague, Czech Republic.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Turkish Treebank as a Gold Standard for Morphological Disambiguation and Its Influence on Parsing", "authors": [ { "first": ";", "middle": [], "last": "Ozlem \u00c7 Etinoglu", "suffix": "" }, { "first": "Khalid", "middle": [], "last": "Choukri", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Declerck", "suffix": "" }, { "first": "Hrafn", "middle": [], "last": "Loftsson", "suffix": "" }, { "first": "Bente", "middle": [], "last": "Maegaard", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Mariani", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ozlem \u00c7 etinoglu. 2014. Turkish Treebank as a Gold Standard for Morphological Disambiguation and Its Influence on Parsing. In Nicoletta Calzolari (Confer- ence Chair), Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asun- cion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland, may. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the Shortest Arborescence of a Directed Graph", "authors": [ { "first": "Yoeng-Jin", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Tseng-Hong", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1965, "venue": "Scientia Sinica", "volume": "14", "issue": "10", "pages": "1396--1400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the Shortest Arborescence of a Directed Graph. Scientia Sinica, 14(10):1396-1400.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Joint morphological and syntactic disambiguation", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "208--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen and Noah A. Smith. 2007. Joint morpho- logical and syntactic disambiguation. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 208-217, Prague, Czech Republic. Association for Computational Lin- guistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Online passive-aggressive algorithms", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Shalev-Shwartz", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 16th Annual Conference on Neural Information Processing Systems", "volume": "7", "issue": "", "pages": "1217--1224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer, Ofer Dekel, Shai Shalev-Shwartz, and Yoram Singer. 2003. Online passive-aggressive algo- rithms. In Proceedings of the 16th Annual Conference on Neural Information Processing Systems, volume 7, pages 1217-1224, Cambridge, Massachusetts, USA. MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An Exact Dual Decomposition Algorithm for Shallow Semantic Parsing with Constraints", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Martins", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "7--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das, Andr\u00e9 F. T. Martins, and Noah A. Smith. 2012. An Exact Dual Decomposition Algorithm for Shallow Semantic Parsing with Constraints. In *SEM 2012: The First Joint Conference on Lexical and Com- putational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 209-217, Montr\u00e9al, Canada, 7-8 June. Association for Compu- tational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Optimum Branchings. Journal of Research of the National Bureau of Standards", "authors": [], "year": 1967, "venue": "", "volume": "71", "issue": "", "pages": "233--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Edmonds. 1967. Optimum Branchings. Jour- nal of Research of the National Bureau of Standards, 71B(4):233-240.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bilexical Grammars and a Cubic-Time Probabilistic Parser", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 5th International Workshop on Parsing Technologies (IWPT)", "volume": "", "issue": "", "pages": "54--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 1997. Bilexical Grammars and a Cubic- Time Probabilistic Parser. 
In Proceedings of the 5th International Workshop on Parsing Technologies (IWPT), pages 54-65, MIT, Cambridge, MA, sep.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Statistical dependency parsing of Turkish", "authors": [ { "first": "G\u00fcl\u015fen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "89--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcl\u015fen Eryigit and Kemal Oflazer. 2006. Statistical de- pendency parsing of Turkish. In Proceedings of the 11th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 89-96, Trento, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Dependency Parsing of Turkish", "authors": [ { "first": "G\u00fcl\u015fen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "3", "pages": "357--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcl\u015fen Eryigit, Joakim Nivre, and Kemal Oflazer. 2008. Dependency Parsing of Turkish. Computational Lin- guistics, 34(3):357-389.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multiword Expressions in Statistical Dependency Parsing", "authors": [ { "first": "G\u00fcl\u015fen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Tugay", "middle": [], "last": "Ilbay", "suffix": "" }, { "first": "Ozan Arkan", "middle": [], "last": "Can", "suffix": "" } ], "year": 2011, "venue": "Proc. of the SPMRL Workshop of IWPT", "volume": "", "issue": "", "pages": "45--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcl\u015fen Eryigit, Tugay Ilbay, and Ozan Arkan Can. 2011. Multiword Expressions in Statistical Depen- dency Parsing. In Proc. of the SPMRL Workshop of IWPT, pages 45-55, Dublin, Ireland.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Impact of Automatic Morphological Analysis & Disambiguation on Dependency Parsing of Turkish", "authors": [ { "first": "G\u00fcl\u015fen", "middle": [], "last": "Eryigit", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)", "volume": "", "issue": "", "pages": "12--1056", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcl\u015fen Eryigit. 2012. The Impact of Automatic Morpho- logical Analysis & Disambiguation on Dependency Parsing of Turkish. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Ugur Dogan, Bente Maegaard, Joseph Mariani, Jan Odijk, and Ste- lios Piperidis, editors, Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC-2012), pages 1960-1965, Istanbul, Turkey, May. European Language Resources Associa- tion (ELRA). ACL Anthology Identifier: L12-1056.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Large margin classification using the perceptron algorithm. 
Machine Learning", "authors": [ { "first": "Yoav", "middle": [], "last": "Freund", "suffix": "" }, { "first": "Robert", "middle": [ "E" ], "last": "Schapire", "suffix": "" } ], "year": 1999, "venue": "", "volume": "37", "issue": "", "pages": "277--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Freund and Robert E. Schapire. 1999. Large mar- gin classification using the perceptron algorithm. Ma- chine Learning, 37(3):277-296.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hebrew Dependency Parsing: Initial Results", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09)", "volume": "", "issue": "", "pages": "129--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Michael Elhadad. 2009. Hebrew Dependency Parsing: Initial Results. In Proceedings of the 11th International Conference on Parsing Tech- nologies (IWPT'09), pages 129-133, Paris, France, October. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "An Efficient Algorithm for Easy-First Non-Directional Dependency Parsing", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "742--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Michael Elhadad. 2010a. An Ef- ficient Algorithm for Easy-First Non-Directional De- pendency Parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguis- tics, pages 742-750, Los Angeles, California, June. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Easy-First Dependency Parsing of Modern Hebrew", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages", "volume": "", "issue": "", "pages": "103--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Michael Elhadad. 2010b. Easy-First Dependency Parsing of Modern Hebrew. In Proceed- ings of the NAACL HLT 2010 First Workshop on Sta- tistical Parsing of Morphologically-Rich Languages, pages 103-107, Los Angeles, CA, USA, June. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Word segmentation, unknown-word resolution, and morphological agreement in a hebrew parsing system", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "", "pages": "121--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Michael Elhadad. 2013. Word seg- mentation, unknown-word resolution, and morpholog- ical agreement in a hebrew parsing system. 
Computa- tional Linguistics, 39(1):121-160.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A single generative model for joint morphological segmentation and syntactic parsing", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "371--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Reut Tsarfaty. 2008. A single gener- ative model for joint morphological segmentation and syntactic parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguis- tics, pages 371-379, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Automatic Syntactic Processing of Modern Hebrew", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2011. Automatic Syntactic Processing of Modern Hebrew. Ph.D. thesis, Ben Gurion University, Beer Sheva, Israel.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Better Arabic Parsing: Baselines, Evaluations, and Analysis", "authors": [ { "first": "Spence", "middle": [], "last": "Green", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "394--402", "other_ids": {}, "num": null, "urls": [], "raw_text": "Spence Green and Christopher D. Manning. 2010. Bet- ter Arabic Parsing: Baselines, Evaluations, and Anal- ysis. In Proceedings of the 23rd International Con- ference on Computational Linguistics (Coling 2010), pages 394-402, Beijing, China, August. Coling 2010 Organizing Committee.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Incremental Joint POS Tagging and Dependency Parsing in Chinese", "authors": [ { "first": "Jun", "middle": [], "last": "Hatori", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2011, "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1216--1224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2011. Incremental Joint POS Tag- ging and Dependency Parsing in Chinese. In Proceed- ings of 5th International Joint Conference on Natu- ral Language Processing, pages 1216-1224, Chiang Mai, Thailand, November. 
Asian Federation of Natu- ral Language Processing.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Incremental Joint Approach to Word Segmentation, POS Tagging, and Dependency Parsing in Chinese", "authors": [ { "first": "Jun", "middle": [], "last": "Hatori", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1045--1053", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2012. Incremental Joint Approach to Word Segmentation, POS Tagging, and Dependency Parsing in Chinese. In Proceedings of the 50th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1045- 1053, Jeju Island, Korea, July. Association for Com- putational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Dual Decomposition for Parsing with Non-Projective Head Automata", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1288--1298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual Decomposi- tion for Parsing with Non-Projective Head Automata. In Proceedings of the 2010 Conference on Empiri- cal Methods in Natural Language Processing, pages 1288-1298, Cambridge, MA, October. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Unified Dependency Parsing of Chinese Morphological and Syntactic Structures", "authors": [ { "first": "Zhongguo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1445--1454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhongguo Li and Guodong Zhou. 2012. Unified Depen- dency Parsing of Chinese Morphological and Syntactic Structures. In Proceedings of the 2012 Joint Confer- ence on Empirical Methods in Natural Language Pro- cessing and Computational Natural Language Learn- ing, pages 1445-1454, Jeju Island, Korea, July. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Joint Models for Chinese POS Tagging and Dependency Parsing", "authors": [ { "first": "Zhenghua", "middle": [], "last": "Li", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wenliang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1180--1191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenghua Li, Min Zhang, Wanxiang Che, Ting Liu, Wen- liang Chen, and Haizhou Li. 2011. Joint Models for Chinese POS Tagging and Dependency Parsing. In Proceedings of the 2011 Conference on Empiri- cal Methods in Natural Language Processing, pages 1180-1191, Edinburgh, Scotland, UK, July. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Easy-First Chinese POS Tagging and Dependency Parsing", "authors": [ { "first": "Ji", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Feiliang", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2012, "venue": "The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "1731--1746", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ji Ma, Tong Xiao, Jingbo Zhu, and Feiliang Ren. 2012. Easy-First Chinese POS Tagging and Dependency Parsing. In Proceedings of COLING 2012, pages 1731-1746, Mumbai, India, December. The COLING 2012 Organizing Committee.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Concise Integer Linear Programming Formulations for Dependency Parsing", "authors": [ { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "342--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andre Martins, Noah Smith, and Eric Xing. 2009. Con- cise Integer Linear Programming Formulations for De- pendency Parsing. In Proceedings of the Joint Con- ference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP, pages 342-350, Sun- tec, Singapore, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Turbo Parsers: Dependency Parsing by Approximate Variational Inference", "authors": [ { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Aguiar", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Figueiredo", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "34--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andre Martins, Noah Smith, Eric Xing, Pedro Aguiar, and Mario Figueiredo. 2010. Turbo Parsers: Depen- dency Parsing by Approximate Variational Inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 34- 44, Cambridge, MA, October. Association for Compu- tational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "An Augmented Lagrangian Approach to Constrained MAP Inference", "authors": [ { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Figueiredo", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Aguiar", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11), ICML '11", "volume": "", "issue": "", "pages": "169--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andre Martins, Mario Figueiredo, Pedro Aguiar, Noah Smith, and Eric Xing. 2011a. An Augmented La- grangian Approach to Constrained MAP Inference. In Lise Getoor and Tobias Scheffer, editors, Proceed- ings of the 28th International Conference on Machine Learning (ICML-11), ICML '11, pages 169-176, New York, NY, USA, June. ACM.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Dual Decomposition with Many Overlapping Components", "authors": [ { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Figueiredo", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Aguiar", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "238--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andre Martins, Noah Smith, Mario Figueiredo, and Pe- dro Aguiar. 2011b. Dual Decomposition with Many Overlapping Components. In Proceedings of the 2011 Conference on Empirical Methods in Natural Lan- guage Processing, pages 238-249, Edinburgh, Scot- land, UK., July. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Turning on the Turbo: Fast Third-Order Non-Projective Turbo Parsers", "authors": [ { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Almeida", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the Turbo: Fast Third-Order Non- Projective Turbo Parsers. In Proceedings of the 51st", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "2", "issue": "", "pages": "617--622", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 617-622, Sofia, Bulgaria, August. Association for Computa- tional Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "AD3: Alternating Directions Dual Decomposition for MAP Inference in Graphical Models", "authors": [ { "first": "F", "middle": [ "T" ], "last": "Andr\u00e9", "suffix": "" }, { "first": "", "middle": [], "last": "Martins", "suffix": "" }, { "first": "A", "middle": [ "T" ], "last": "M\u00e1rio", "suffix": "" }, { "first": "Pedro", "middle": [ "M Q" ], "last": "Figueiredo", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Aguiar", "suffix": "" }, { "first": "Eric", "middle": [ "P" ], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2015, "venue": "Journal of Machine Learning Research", "volume": "16", "issue": "", "pages": "495--545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 F. T. Martins, M\u00e1rio A. T. Figueiredo, Pedro M. Q. Aguiar, Noah A. Smith, and Eric P. Xing. 2015. AD3: Alternating Directions Dual Decomposition for MAP Inference in Graphical Models. Journal of Machine Learning Research, 16:495-545.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald and Fernando Pereira. 2006. On- line learning of approximate dependency parsing al- gorithms. In Proceedings of the 11th Conference of the European Chapter of the Association for Compu- tational Linguistics, pages 81-88, Trento, Italy. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Non-Projective Dependency Parsing using Spanning Tree Algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Kiril", "middle": [], "last": "Ribarov", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-Projective Dependency Parsing using Spanning Tree Algorithms. In Proceedings of Human Language Technology Conference and Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 523-530, Vancouver, British Columbia, Canada, October. Association for Computational Lin- guistics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A general method applicable to the search for similarities in the amino acid sequence of two proteins", "authors": [ { "first": "B", "middle": [], "last": "Saul", "suffix": "" }, { "first": "Christian", "middle": [ "D" ], "last": "Needleman", "suffix": "" }, { "first": "", "middle": [], "last": "Wunsch", "suffix": "" } ], "year": 1970, "venue": "Journal of molecular biology", "volume": "48", "issue": "3", "pages": "443--453", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saul B. Needleman and Christian D. Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology, 48(3):443-453.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Building and Exploiting Syntactically-annotated Corpora", "authors": [ { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "Bilge", "middle": [], "last": "Say", "suffix": "" }, { "first": "G\u00f6khan", "middle": [], "last": "Dilek Zeynep Hakkani-T\u00fcr", "suffix": "" }, { "first": "", "middle": [], "last": "T\u00fcr", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kemal Oflazer, Bilge Say, Dilek Zeynep Hakkani-T\u00fcr, and G\u00f6khan T\u00fcr. 2003. Building a Turkish Tree- bank. In Anne Abeille, editor, Building and Exploiting Syntactically-annotated Corpora. Kluwer Academic Publishers, Dordrecht.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Two-level Description of Turkish Morphology", "authors": [ { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" } ], "year": 1994, "venue": "Literary and Linguistic Computing", "volume": "9", "issue": "2", "pages": "137--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kemal Oflazer. 1994. Two-level Description of Turk- ish Morphology. 
Literary and Linguistic Computing, 9(2):137-148.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing", "authors": [ { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander M. Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On Dual Decomposition and Linear Programming Relaxations for Natural Lan- guage Processing. In Proceedings of the 2010 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 1-11, Cambridge, MA, October. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Turkish Language Resources: Morphological Parser, Morphological Disambiguator and Web Corpus", "authors": [ { "first": "Ha\u015fim", "middle": [], "last": "Sak", "suffix": "" }, { "first": "Tunga", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "" }, { "first": "Murat", "middle": [], "last": "Sara\u00e7lar", "suffix": "" } ], "year": 2008, "venue": "Proc. of GoTAL", "volume": "", "issue": "", "pages": "417--427", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ha\u015fim Sak, Tunga G\u00fcng\u00f6r, and Murat Sara\u00e7lar. 2008. Turkish Language Resources: Morphological Parser, Morphological Disambiguator and Web Corpus. In Proc. of GoTAL 2008, pages 417-427.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologically Rich Languages", "authors": [ { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Iakes", "middle": [], "last": "Goenaga", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Koldo Gojenola Galletebeitia", "suffix": "" }, { "first": "Spence", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Nizar", "middle": [], "last": "Green", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Habash", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Maier", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Przepi\u00f3rkowski", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Seeker", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Versley", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Woli\u0144ski", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wr\u00f3blewska", 
"suffix": "" }, { "first": "Clergerie", "middle": [], "last": "Villemonte De La", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages", "volume": "", "issue": "", "pages": "146--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Djam\u00e9 Seddah, Reut Tsarfaty, Sandra K\u00fcbler, Marie Can- dito, Jinho D. Choi, Rich\u00e1rd Farkas, Jennifer Fos- ter, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi\u00f3rkowski, Ryan Roth, Wolfgang Seeker, Yan- nick Versley, Veronika Vincze, Marcin Woli\u0144ski, Alina Wr\u00f3blewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologi- cally Rich Languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically- Rich Languages, pages 146-182, Seattle, Washington, USA, October. Association for Computational Lin- guistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Introducing the SPMRL 2014 Shared Task on Parsing Morphologically-rich Languages", "authors": [ { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages", "volume": "", "issue": "", "pages": "103--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Djam\u00e9 Seddah, Sandra K\u00fcbler, and Reut Tsarfaty. 2014. Introducing the SPMRL 2014 Shared Task on Parsing Morphologically-rich Languages. In Proceedings of the First Joint Workshop on Statistical Parsing of Mor- phologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 103-109, Dublin, Ireland, August. Dublin City University.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Building a tree-bank of modern Hebrew text", "authors": [ { "first": "Khalil", "middle": [], "last": "Sima'an", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Itai", "suffix": "" }, { "first": "Yoad", "middle": [], "last": "Winter", "suffix": "" } ], "year": 2001, "venue": "Traitement Automatique des Langues", "volume": "42", "issue": "2", "pages": "247--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khalil Sima'an, Alon Itai, Yoad Winter, Alon Altman, and Noa Nativ. 2001. Building a tree-bank of modern Hebrew text. Traitement Automatique des Langues, 42(2):247-380.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Learning Structured Prediction Models: A Large Margin Approach", "authors": [ { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "Vassil", "middle": [], "last": "Chatalbashev", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Koller", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 22th Annual International Conference on Machine Learning", "volume": "", "issue": "", "pages": "896--903", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning Structured Predic- tion Models: A Large Margin Approach. 
In Proceed- ings of the 22th Annual International Conference on Machine Learning, pages 896-903, Bonn, Germany. ACM.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix Detection, Affix Labeling, POS Tagging, and Dependency Parsing", "authors": [ { "first": "Stephen", "middle": [], "last": "Tratz", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages", "volume": "", "issue": "", "pages": "34--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Tratz. 2013. A Cross-Task Flexible Transi- tion Model for Arabic Tokenization, Affix Detection, Affix Labeling, POS Tagging, and Dependency Pars- ing. In Proceedings of the Fourth Workshop on Sta- tistical Parsing of Morphologically-Rich Languages, pages 34-45, Seattle, Washington, USA, October. As- sociation for Computational Linguistics.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Statistical Parsing of Morphologically Rich Languages (SPMRL) What, How and Whither", "authors": [ { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "Kuebler", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Versley", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Ines", "middle": [], "last": "Rehbein", "suffix": "" }, { "first": "Lamia", "middle": [], "last": "Tounsi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reut Tsarfaty, Djam\u00e9 Seddah, Yoav Goldberg, Sandra Kuebler, Yannick Versley, Marie Candito, Jennifer Foster, Ines Rehbein, and Lamia Tounsi. 2010. Sta- tistical Parsing of Morphologically Rich Languages (SPMRL) What, How and Whither. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 1- 12, Los Angeles, CA, USA, June. Association for Computational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Joint Evaluation of Morphological Segmentation and Syntactic Parsing", "authors": [ { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Evelina", "middle": [], "last": "Andersson", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "6--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2012. Joint Evaluation of Morphological Segmen- tation and Syntactic Parsing. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 6-10, Jeju Island, Korea, July. 
Association for Com- putational Linguistics.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Integrated Morphological and Syntactic Disambiguation for Modern Hebrew", "authors": [ { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL 2006 Student Research Workshop", "volume": "", "issue": "", "pages": "49--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reut Tsarfaty. 2006. Integrated Morphological and Syn- tactic Disambiguation for Modern Hebrew. In Pro- ceedings of the COLING/ACL 2006 Student Research Workshop, pages 49-54, Sydney, Australia, July. As- sociation for Computational Linguistics.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Character-Level Chinese Dependency Parsing", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1326--1336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Character-Level Chinese Dependency Parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1326-1336, Baltimore, Maryland, June. Association for Computational Lin- guistics.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Randomized Greedy Inference for Joint Segmentation, POS Tagging and Dependency Parsing", "authors": [ { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chengtao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kareem", "middle": [], "last": "Darwish", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "42--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Zhang, Chengtao Li, Regina Barzilay, and Kareem Darwish. 2015. Randomized Greedy Inference for Joint Segmentation, POS Tagging and Dependency Parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 42-52, Denver, Colorado, May-June. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Dependency representation for ekmek ald\u0131m." }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "A morphological lattice for ekmek ald\u0131m." }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "The lattice for hneim" }, "TABREF0": { "text": "89.52 90.10 89.45 86.84 86.30 86.57 83.46 JOINT MODEL 90.80 90.22 90.51 89.94 86.68 87.49 87.08 84.67 test BASELINE 89.46 88.51 88.99 87.95 81.79 79.83 80.80 74.85", "type_str": "table", "num": null, "html": null, "content": "
                      Turkish                          Hebrew
data  system          PREC   REC    F1     ACCw        PREC   REC    F1     ACCw
dev   BASELINE        89.59  88.14  88.86  87.97       85.99  84.07  85.02  80.30
      BIGRAM MODEL    90.69  89.52  90.10  89.45       86.84  86.30  86.57  83.46
      JOINT MODEL     90.80  90.22  90.51  89.94       86.68  87.49  87.08  84.67
test  BASELINE        89.46  88.51  88.99  87.95       81.79  79.83  80.80  74.85
      BIGRAM MODEL    89.96  89.23  89.59  88.71       84.44  83.22  83.83  79.60
      JOINT MODEL     90.19  89.74  89.97  89.25       83.88  83.99  83.94  80.28
" }, "TABREF1": { "text": "Path selection quality.", "type_str": "table", "num": null, "html": null, "content": "
LABELEDUNLABELEDIGeval STRICTIGeval
datasystemPRECRECF1PRECRECF1UASIG LASIG UASIG LASIG
devMATE62.5461.7362.1469.4468.5468.9870.6060.1074.8863.46
TURBO PIPELINE JOINT63.54 63.86 64.21 63.79 * 64.00 62.71 63.12 63.03 63.4470.68 70.65 70.9669.76 69.73 70.50 * 70.73 72.66 * 70.22 72.22 70.19 72.2661.24 61.82 62.4076.58 76.64 76.6164.73 65.49 65.59
10cv MATE63.2862.4962.8870.3769.5069.9471.7561.2675.8464.42
TURBO63.8263.0363.4271.1270.2470.6872.7761.8976.9365.09
PIPELINE64.9764.1764.5771.7170.8371.2773.6663.5277.6866.82
JOINT65.27 64.84 \u2020 65.06 72.05 * 71.58 \u2020 71.8273.9363.8577.7466.83
testMATE64.6464.1264.3870.6270.0470.3371.9961.8477.0865.98
TURBO PIPELINE JOINT65.36 66.40 67.33 66.99 * 67.16 72.94 * 72.58 * 72.76 64.83 65.09 71.66 71.08 71.37 65.86 66.13 72.30 71.72 72.0173.16 74.33 75.0262.76 64.40 65.3278.37 79.61 79.4567.02 69.02 68.99
" }, "TABREF2": { "text": "Parsing results for Turkish. Statistically significant differences between the joint system and the pipeline system are marked with \u2020 (p < 0.01) and * (p < 0.05). Significance testing was performed using the Wilcoxon Signed Rank Test (not for F1).", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF3": { "text": "65.12 64.72 64.92 70.44 70.00 70.22 PIPELINE 65.64 65.23 65.44 70.65 70.21 70.43 JOINT10 66.82 67.44 \u2020 67.13 71.47 72.13 * 71.80 test MATE 63.16 62.25 62.70 67.52 66.55 67.03 TURBO 63.06 62.16 62.61 67.27 66.31 66.79 PIPELINE 63.63 62.72 63.17 67.62 66.65 67.14 JOINT10 63.81 63.89 \u2020 63.85 67.79 67.88 \u2020 67.84", "type_str": "table", "num": null, "html": null, "content": "
                   LABELED                    UNLABELED
data  system       PREC   REC      F1         PREC   REC      F1
dev   MATE         65.41  65.00    65.20      70.65  70.21    70.43
      TURBO        65.12  64.72    64.92      70.44  70.00    70.22
      PIPELINE     65.64  65.23    65.44      70.65  70.21    70.43
      JOINT10      66.82  67.44 \u2020  67.13      71.47  72.13 *  71.80
test  MATE         63.16  62.25    62.70      67.52  66.55    67.03
      TURBO        63.06  62.16    62.61      67.27  66.31    66.79
      PIPELINE     63.63  62.72    63.17      67.62  66.65    67.14
      JOINT10      63.81  63.89 \u2020  63.85      67.79  67.88 \u2020  67.84
" }, "TABREF4": { "text": "67.62 67.83 74.70 74.24 74.47 TURBO 67.97 67.54 67.75 74.58 74.12 74.35 PIPELINE 68.56 68.14 68.35 74.84 74.37 74.60 JOINT10 69.23 69.87 \u2020 69.55 74.88 75.58 \u2020 75.23 test MATE 66.17 65.22 65.69 71.62 70.60 71.11 TURBO 66.14 65.19 65.66 71.38 70.35 70.86 PIPELINE 66.81 65.85 66.33 71.82 70.79 71.30 JOINT10 66.63 66.72 \u2020 66.68 71.48 71.57 \u2020 71.52", "type_str": "table", "num": null, "html": null, "content": "
                   LABELED                    UNLABELED
data  system       PREC   REC      F1         PREC   REC      F1
dev   MATE         68.05  67.62    67.83      74.70  74.24    74.47
      TURBO        67.97  67.54    67.75      74.58  74.12    74.35
      PIPELINE     68.56  68.14    68.35      74.84  74.37    74.60
      JOINT10      69.23  69.87 \u2020  69.55      74.88  75.58 \u2020  75.23
test  MATE         66.17  65.22    65.69      71.62  70.60    71.11
      TURBO        66.14  65.19    65.66      71.38  70.35    70.86
      PIPELINE     66.81  65.85    66.33      71.82  70.79    71.30
      JOINT10      66.63  66.72 \u2020  66.68      71.48  71.57 \u2020  71.52
" }, "TABREF5": { "text": "Parsing results for Hebrew, evaluated without morphology. Statistically significant differences between the joint system and the pipeline system are marked with \u2020. Significance testing was performed using the Wilcoxon Signed Rank Test with p < 0.01 (not for F1).", "type_str": "table", "num": null, "html": null, "content": "" } } } }