{ "paper_id": "P13-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:34:18.201974Z" }, "title": "Stop-probability estimates computed on a large corpus improve Unsupervised Dependency Parsing", "authors": [ { "first": "David", "middle": [], "last": "Mare\u010dek", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "postCode": "11800", "settlement": "Prague, Prague", "country": "Czech Republic" } }, "email": "marecek@ufal.mff.cuni.cz" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "postCode": "11800", "settlement": "Prague, Prague", "country": "Czech Republic" } }, "email": "straka@ufal.mff.cuni.cz" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Even though the quality of unsupervised dependency parsers grows, they often fail in recognition of very basic dependencies. In this paper, we exploit a prior knowledge of STOP-probabilities (whether a given word has any children in a given direction), which is obtained from a large raw corpus using the reducibility principle. By incorporating this knowledge into Dependency Model with Valence, we managed to considerably outperform the state-of-theart results in terms of average attachment score over 20 treebanks from CoNLL 2006 and 2007 shared tasks.", "pdf_parse": { "paper_id": "P13-1028", "_pdf_hash": "", "abstract": [ { "text": "Even though the quality of unsupervised dependency parsers grows, they often fail in recognition of very basic dependencies. In this paper, we exploit a prior knowledge of STOP-probabilities (whether a given word has any children in a given direction), which is obtained from a large raw corpus using the reducibility principle. By incorporating this knowledge into Dependency Model with Valence, we managed to considerably outperform the state-of-theart results in terms of average attachment score over 20 treebanks from CoNLL 2006 and 2007 shared tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The task of unsupervised dependency parsing (which strongly relates to the grammar induction task) has become popular in the last decade, and its quality has been greatly increasing during this period.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The first implementation of Dependency Model with Valence (DMV) (Klein and Manning, 2004) with a simple inside-outside inference algorithm (Baker, 1979) achieved 36% attachment score on English and was the first system outperforming the adjacent-word baseline. 1 Current attachment scores of state-of-the-art unsupervised parsers are higher than 50% for many languages (Spitkovsky et al., 2012; Blunsom and Cohn, 2010) . This is still far below the supervised approaches, but their indisputable advantage is the fact that no annotated treebanks are needed and the induced structures are not burdened by any linguistic conventions. 
Moreover, supervised parsers always only simulate the treebanks they were trained on, whereas unsupervised parsers have an ability to be fitted to different particular applications.", "cite_spans": [ { "start": 64, "end": 89, "text": "(Klein and Manning, 2004)", "ref_id": "BIBREF14" }, { "start": 139, "end": 152, "text": "(Baker, 1979)", "ref_id": "BIBREF0" }, { "start": 261, "end": 262, "text": "1", "ref_id": null }, { "start": 369, "end": 394, "text": "(Spitkovsky et al., 2012;", "ref_id": "BIBREF29" }, { "start": 395, "end": 418, "text": "Blunsom and Cohn, 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Some of the current approaches are based on the DMV, a generative model where the grammar is expressed by two probability distributions: P choose (c d |c h , dir ), which generates a new child c d attached to the head c h in the direction dir (left or right), and P stop (STOP |c h , dir , \u2022 \u2022 \u2022 ), which makes a decision whether to generate another child of c h in the direction dir or not. 2 Such a grammar is then inferred using sampling or variational methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Unfortunately, there are still cases where the inferred grammar is very different from the grammar we would expect, e.g. verbs become leaves instead of governing the sentences. Rasooli and Faili (2012) and Bisk and Hockenmaier (2012) made some efforts to boost the verbocentricity of the inferred structures; however, both of the approaches require manual identification of the POS tags marking the verbs, which renders them useless when unsupervised POS tags are employed.", "cite_spans": [ { "start": 177, "end": 201, "text": "Rasooli and Faili (2012)", "ref_id": "BIBREF23" }, { "start": 206, "end": 233, "text": "Bisk and Hockenmaier (2012)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contribution of this paper is a considerable improvement of unsupervised parsing quality by estimating the P stop probabilities externally using a very large corpus, and employing this prior knowledge in the standard inference of DMV. The estimation is done using the reducibility principle introduced in (Mare\u010dek and\u017dabokrtsk\u00fd, 2012) . The reducibility principle postulates that if a word (or a sequence of words) can be removed from a sentence without violating its grammatical correctness, it is a leaf (or a subtree) in its dependency structure. For the purposes of this paper, we assume the following hypothesis:", "cite_spans": [ { "start": 314, "end": 343, "text": "(Mare\u010dek and\u017dabokrtsk\u00fd, 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "If a sequence of words can be removed from a sentence without violating its grammatical correctness, no word outside the sequence depends on any word in the sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our hypothesis is a generalization of the original hypothesis since it allows a reducible sequence to form several adjacent subtrees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let's outline the connection between the P stop probabilities and the property of reducibility. Figure 1 shows an example of a dependency tree. 
Sequences of reducible words are marked by thick lines below the sentence. Consider for example the word \"further\". It can be removed and thus, according to our hypothesis, no other word depends on it. Therefore, we can deduce that the P stop probability for such word is high both for the left and for the right direction. The phrase \"for further discussions\" is reducible as well and we can deduce that the P stop of its first word (\"for\") in the left direction is high since it cannot have any left children. We do not know anything about its right children, because they can be located within the sequence (and there is really one in Figure 1 ). Similarly, the word \"discussions\", which is the last word in this sequence, cannot have any right children and we can estimate that its right P stop probability is high. On the other hand, non-reducible words such, as the verb \"asked\" in our example, can have children, and therefore their P stop can be estimated as low for both directions.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 782, "end": 790, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The most difficult task in this approach is to automatically recognize reducible sequences. This problem, together with the estimation of the stopprobabilities, is described in Section 3. Our model, not much different from the classic DMV, is introduced in Section 4. Section 5 describes the inference algorithm based on Gibbs sampling. Experiments and results are discussed in Section 6. Section 7 concludes the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Reducibility: The notion of reducibility belongs to the traditional linguistic criteria for recogniz-ing dependency relations. As mentioned e.g. by K\u00fcbler et al. (2009) , the head h of a construction c determines the syntactic category of c and can often replace c. In other words, the descendants of h can be often removed without making the sentence incorrect. Similarly, in the Dependency Analysis by Reduction (Lopatkov\u00e1 et al., 2005) , the authors assume that stepwise deletions of dependent elements within a sentence preserve its syntactic correctness. A similar idea of dependency analysis by splitting a sentence into all possible acceptable fragments is used by Gerdes and Kahane (2011) .", "cite_spans": [ { "start": 148, "end": 168, "text": "K\u00fcbler et al. (2009)", "ref_id": "BIBREF16" }, { "start": 414, "end": 438, "text": "(Lopatkov\u00e1 et al., 2005)", "ref_id": "BIBREF17" }, { "start": 672, "end": 696, "text": "Gerdes and Kahane (2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We have directly utilized the aforementioned criteria for dependency relations in unsupervised dependency parsing in our previous paper (Mare\u010dek and\u017dabokrtsk\u00fd, 2012) . Our dependency model contained a submodel which directly prioritized subtrees that form reducible sequences of POS tags. Reducibility scores of given POS tag sequences were estimated using a large corpus of Wikipedia articles. The weakness of this approach was the fact that longer sequences of POS tags are very sparse and no reducibility scores could be estimated for them. 
In this paper, we avoid this shortcoming by estimating the STOP probabilities for individual POS tags only.", "cite_spans": [ { "start": 136, "end": 165, "text": "(Mare\u010dek and\u017dabokrtsk\u00fd, 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another task related to reducibility is sentence compression (Knight and Marcu, 2002; Cohn and Lapata, 2008) , which was used for text summarization. The task is to shorten the sentences while retaining the most important pieces of information, using the knowledge of the grammar. Conversely, our task is to induce the grammar using the sentences and their shortened versions.", "cite_spans": [ { "start": 61, "end": 85, "text": "(Knight and Marcu, 2002;", "ref_id": "BIBREF15" }, { "start": 86, "end": 108, "text": "Cohn and Lapata, 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Dependency Model with Valence (DMV) has been the most popular approach to unsupervised dependency parsing in the recent years. It was introduced by Klein and Manning (2004) and further improved by Smith (2007) and Cohen et al. (2008) . Headden III et al. (2009) introduce the Extended Valence Grammar and add lexicalization and smoothing. Blunsom and Cohn (2010) use tree substitution grammars, which allow learning of larger dependency fragments by employing the Pitman-Yor process. Spitkovsky et al. (2010) improve the inference using iterated learning of increasingly longer sentences. Further improvements were achieved by better dealing with punctuation (Spitkovsky et al., 2011b) and new \"boundary\" models (Spitkovsky et al., 2012) .", "cite_spans": [ { "start": 148, "end": 172, "text": "Klein and Manning (2004)", "ref_id": "BIBREF14" }, { "start": 197, "end": 209, "text": "Smith (2007)", "ref_id": "BIBREF24" }, { "start": 214, "end": 233, "text": "Cohen et al. (2008)", "ref_id": "BIBREF7" }, { "start": 236, "end": 261, "text": "Headden III et al. (2009)", "ref_id": "BIBREF13" }, { "start": 339, "end": 362, "text": "Blunsom and Cohn (2010)", "ref_id": "BIBREF2" }, { "start": 712, "end": 737, "text": "(Spitkovsky et al., 2012)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Other approaches to unsupervised dependency parsing were described e.g. in (S\u00f8gaard, 2011) , (Cohen et al., 2011) , and (Bisk and Hockenmaier, 2012) . There also exist \"less unsupervised\" approaches that utilize an external knowledge of the POS tagset. For example, Rasooli and Faili (2012) identify the last verb in the sentence, minimize its probability of reduction and thus push it to the root position. Naseem et al. (2010) make use of manually-specified universal dependency rules such as Verb\u2192Noun, Noun\u2192Adjective. McDonald et al. (2011) identify the POS tags by a crosslingual transfer. Such approaches achieve better results; however, they are useless for grammar induction from plain text.", "cite_spans": [ { "start": 75, "end": 90, "text": "(S\u00f8gaard, 2011)", "ref_id": "BIBREF25" }, { "start": 93, "end": 113, "text": "(Cohen et al., 2011)", "ref_id": "BIBREF8" }, { "start": 120, "end": 148, "text": "(Bisk and Hockenmaier, 2012)", "ref_id": "BIBREF1" }, { "start": 408, "end": 428, "text": "Naseem et al. (2010)", "ref_id": "BIBREF21" }, { "start": 522, "end": 544, "text": "McDonald et al. 
(2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 STOP-probability estimation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We introduced a simple procedure for recognition of reducible sequences in (Mare\u010dek and \u017dabokrtsk\u00fd, 2012): The particular sequence of words is removed from the sentence and if the remainder of the sentence exists elsewhere in the corpus, the sequence is considered reducible. We provide an example in Figure 2 . The bigram \"this weekend\" in the sentence \"The next competition is this weekend at Lillehammer in Norway.\" is reducible since the same sentence without this bigram, i.e., \"The next competition is at Lillehammer in Norway.\", is in the corpus as well. Similarly, the prepositional phrase \"of Switzerland\" is also reducible.", "cite_spans": [ { "start": 75, "end": 105, "text": "(Mare\u010dek and \u017dabokrtsk\u00fd, 2012)", "ref_id": null } ], "ref_spans": [ { "start": 302, "end": 310, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Recognition of reducible sequences", "sec_num": "3.1" }, { "text": "It is apparent that only very few reducible sequences can be found by this procedure. If we use a corpus containing about 10,000 sentences, it is possible that we find no reducible sequences at all. However, we managed to find a sufficient number of reducible sequences in corpora containing millions of sentences, see Section 6.1 and Table 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recognition of reducible sequences", "sec_num": "3.1" }, { "text": "Recall our hypothesis from Section 1: If a sequence of words is reducible, no word outside the sequence can depend on any word in the sequence. Or, in terms of dependency structure: A reducible sequence consists of one or more adjacent subtrees. This means that the first word of a reducible sequence does not have any left children and, similarly, the last word in a reducible sequence does not have any right children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "We make use of this property directly for estimating P stop probabilities. 
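To make the recognition procedure concrete, the following Python sketch scans a tokenized corpus for reducible unigrams, bigrams and trigrams (longer sequences are not used, as noted below). It is only an illustration under our own assumptions about the data layout; the function and variable names are ours and do not come from the authors' implementation.

```python
def find_reducible_sequences(sentences, max_len=3):
    # sentences: list of token lists (one list per corpus sentence).
    # Returns (sentence_id, start, end) spans whose removal yields a word
    # sequence that is attested as a full sentence elsewhere in the corpus.
    seen = set(tuple(s) for s in sentences)
    reducible = []
    for sid, tokens in enumerate(sentences):
        for length in range(1, max_len + 1):
            for start in range(len(tokens) - length + 1):
                reduced = tuple(tokens[:start] + tokens[start + length:])
                if reduced and reduced in seen:
                    reducible.append((sid, start, start + length))
    return reducible
```

The first and last positions of the spans collected this way are what the raw S stop scores below are counted over.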
Hereinafter, P est stop (c h , dir ) denotes the STOPprobability we want to estimate from a large corpus; c h is the head's POS tag and dir is the direction in which the STOP probability is estimated. If c h is very often in the first position of reducible sequences, P est stop (c h , left) will be high. Similarly, if c h is often in the last position of reducible sequences, P est stop (c h , right) will be high. For each POS tag c h in the given corpus, we first compute its left and right \"raw\" score S stop (c h , left) and S stop (c h , right) as the relative number of times a word with POS tag c h was in the first (or last) position in a reducible sequence found in the corpus. We do not deal with sequences longer than a trigram since they are highly biased.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "S stop (c h , left) = # red.seq. [c h , . . . ] + \u03bb # c h in the corpus S stop (c h , right) = # red.seq. [. . . , c h ] + \u03bb # c h in the corpus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "Note that the S stop scores are not probabilities. Their main purpose is to sort the POS tags according to their \"reducibility\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "It may happen that for many POS tags there are no reducible sequences found. To avoid zero scores, we use a simple smoothing by adding \u03bb to each count:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "\u03bb = # all reducible sequences W ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "where W denotes the number of words in the given corpus. Such smoothing ensures that more frequent irreducible POS tags get a lower S stop score than the less frequent ones. Since reducible sequences found are very sparse, the values of S stop (c h , dir ) scores are very small. To convert them to estimated probabilities P est stop (c h , dir ), we need a smoothing that fulfills the following properties:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "(1) P est stop is a probability and therefore its value must be between 0 and 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "(2) The number of no-stop decisions (no matter in which direction) equals to W (number of words) since such decision is made before each word is generated. The number of stop decisions is 2W since they come after generating the last children in both the directions. 
Therefore, the average P est stop (h, dir ) over all words in the treebank should be 2/3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "After some experimenting, we chose the following normalization formula", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "P est stop (c h , dir ) = S stop (c h , dir ) S stop (c h , dir ) + \u03bd", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "with a normalization constant \u03bd. The condition (1) is fulfilled for any positive value of \u03bd. Its exact value is set in accordance with the requirement (2) so that the average value of P est stop is 2/3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "dir \u2208{l,r} c\u2208C count(c)P est stop (c, dir ) = 2 3 \u2022 2W,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "where count(c) is the number of words with POS tag c in the corpus. We find the unique value of \u03bd that fulfills the previous equation numerically using a binary search algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing the STOP-probability estimations", "sec_num": "3.2" }, { "text": "We use the standard generative Dependency Model with Valence (Klein and Manning, 2004) . The generative story is the following: First, the head of the sentence is generated. Then, for each head, all its left children are generated, then the left STOP, then all its right children, and then the right STOP. When a child is generated, the algorithm immediately recurses to generate its subtree. When deciding whether to generate another child in the direction dir or the STOP symbol, we use the", "cite_spans": [ { "start": 61, "end": 86, "text": "(Klein and Manning, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "P dmv stop (STOP |c h , dir , adj , c f ) model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The new child c d in the direction dir is generated according to the P choose (c d |c h , dir ) model. The probability of the whole dependency tree T is the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "P tree (T ) = P choose (head (T )|ROOT , right) \u2022 P tree (D(head (T ))) P tree (D(c h )) = dir \u2208{l,r} c d \u2208 deps(dir ,h) P dmv stop (\u00acSTOP |c h , dir , adj , c f ) P choose (c d |c h , dir )P tree (D(c d )) P dmv stop (STOP |c h , dir , adj , c f ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "where P tree (D(c h )) is probability of the subtree governed by h in the tree T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The set of features on which the P dmv stop and P choose probabilities are conditioned varies among the previous works. Our P dmv stop depends on the head POS tag c h , direction dir , adjacency adj , and fringe POS tag c f (described below). 
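Looking back at the estimation step of Section 3.2, the sketch below is our own illustration of how the normalization constant \u03bd can be found; the data layout and names are assumptions, not the authors' code. It relies on the fact that S/(S + \u03bd) decreases monotonically in \u03bd, so a simple bisection finds the value at which the corpus-weighted average of the estimated stop probabilities equals 2/3.

```python
def estimate_stop_probabilities(s_stop, tag_counts, tol=1e-9):
    # s_stop: dict (tag, direction) -> raw (smoothed) S_stop score
    # tag_counts: dict tag -> number of occurrences of the tag in the corpus
    total_words = sum(tag_counts.values())

    def weighted_average(nu):
        # average of S/(S + nu) over the 2W stop decisions, weighted by tag frequency
        total = sum(count * s_stop[(tag, d)] / (s_stop[(tag, d)] + nu)
                    for tag, count in tag_counts.items() for d in ('left', 'right'))
        return total / (2.0 * total_words)

    lo, hi = 0.0, 1.0
    while weighted_average(hi) > 2.0 / 3.0:   # widen the bracket until the average falls below 2/3
        hi *= 2.0
    while hi - lo > tol:                      # bisection; the average is decreasing in nu
        mid = (lo + hi) / 2.0
        if weighted_average(mid) > 2.0 / 3.0:
            lo = mid
        else:
            hi = mid
    nu = (lo + hi) / 2.0
    return {key: s / (s + nu) for key, s in s_stop.items()}
```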
The use of adjacency is standard in DMV and enables us to have different P dmv stop for situations when no child was generated so far (adj = 1). That is, P dmv stop (c h , dir , adj = 1, c f ) decides whether the word c h has any children in the direction dir at all, whereas P dmv stop (h, dir , adj = 0, c f ) decides whether another child will be generated next to the already generated one. This distinction is of crucial importance for us: although we know how to estimate the STOP probabilities for adj = 1 from large data, we do not know anything about the STOP probabilities for adj = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The last factor c f , called fringe, is the POS tag of the previously generated sibling in the current direction dir . If there is no such sibling (in case adj = 1), the head c h is used as the fringe c f . This is a relatively novel idea in DMV, introduced by Spitkovsky et al. (2012). We decided to use the fringe word in our model since it gives slightly better results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "We assume that the distributions of P choose and P dmv stop are good if the majority of the probability mass is concentrated on few factors; therefore, we apply a Chinese Restaurant process (CRP) on them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The probability of generating a new child node c d attached to c h in the direction dir given the history (all the nodes we have generated so far) is computed using the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "P choose (c d |c h , dir ) = = \u03b1 c 1 |C| + count \u2212 (c d , c h , dir ) \u03b1 c + count \u2212 (c h , dir ) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "count \u2212 (c d , c h , dir )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "denotes the number of times a child node c d has been attached to c h in the direction dir in the history. Similarly, count \u2212 (c h , dir ) is the number of times something has been attached to c h in the direction dir . The \u03b1 c is a hyperparameter and |C| is the number of distinct POS tags in the corpus. 3 The STOP probability is computed in a similar way:", "cite_spans": [ { "start": 306, "end": 307, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "P dmv stop (STOP |c h , dir , adj , c f ) = = \u03b1 s 2 3 + count \u2212 (STOP , c h , dir , adj , c f ) \u03b1 s + count \u2212 (c h , dir , adj , c f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "count \u2212 (STOP , c h , dir , adj , c f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "is the number of times a head c h had the last child c f in the direction dir in the history. The contribution of this paper is the inclusion of the stop-probability estimates into the DMV. 
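As a concrete illustration of these two count-based formulas, the sketch below keeps the counts in plain counters; the class and attribute names are ours, and the hyperparameter defaults follow the values reported later in Section 6.3, so this is an assumed layout rather than the authors' implementation. The counts are understood to exclude the tree that is currently being resampled.

```python
from collections import Counter

class DMVCounts:
    def __init__(self, num_tags, alpha_c=50.0, alpha_s=1.0):
        self.num_tags = num_tags        # |C|, number of distinct POS tags
        self.alpha_c = alpha_c
        self.alpha_s = alpha_s
        self.child_counts = Counter()   # (c_d, c_h, dir) -> times c_d was attached to c_h in dir
        self.head_counts = Counter()    # (c_h, dir) -> times anything was attached to c_h in dir
        self.stop_counts = Counter()    # (c_h, dir, adj, c_f) -> times STOP was generated here
        self.ctx_counts = Counter()     # (c_h, dir, adj, c_f) -> times the decision was made

    def p_choose(self, c_d, c_h, direction):
        # (alpha_c * 1/|C| + count(c_d, c_h, dir)) / (alpha_c + count(c_h, dir))
        return ((self.alpha_c / self.num_tags + self.child_counts[(c_d, c_h, direction)]) /
                (self.alpha_c + self.head_counts[(c_h, direction)]))

    def p_stop(self, c_h, direction, adj, c_f):
        # (alpha_s * 2/3 + count(STOP, c_h, dir, adj, c_f)) / (alpha_s + count(c_h, dir, adj, c_f))
        return ((self.alpha_s * 2.0 / 3.0 + self.stop_counts[(c_h, direction, adj, c_f)]) /
                (self.alpha_s + self.ctx_counts[(c_h, direction, adj, c_f)]))
```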
Therefore, we introduce a new model P dmv +est stop , in which the probability based on the previously generated data is linearly combined with the probability estimates based on large corpora (Section 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "P dmv +est stop (STOP |c h , dir , 1, c f ) = = (1 \u2212 \u03b2) \u2022 \u03b1 s 2 3 + count \u2212 (STOP , c h , dir , 1, c f ) \u03b1 s + count \u2212 (c h , dir , 1, c f ) + \u03b2 \u2022 P est stop (c h , dir ) P dmv +est stop (STOP |c h , dir , 0, c f ) = = P dmv stop (STOP |c h , dir , 0, c f )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "The hyperparameter \u03b2 defines the ratio between the CRP-based and estimation-based probability. The definition of the P dmv +est stop for adj = 0 equals the basic P dmv stop since we are able to estimate only the probability whether a particular head POS tag c h can or cannot have children in a particular direction, i.e if adj = 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "3 The number of classes |C| is often used in the denominator. We decided to put its reverse value into the numerator since we observed such model to perform better for a constant value of \u03b1c over different languages and tagsets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "Finally, we obtain the probability of the whole generated treebank as a product over the trees:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "P treebank = T \u2208treebank P tree (T ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "An important property of the CRP is the fact that the factors are exchangeable -i.e. no matter how the trees are ordered in the treebank, the P treebank is always the same.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4" }, { "text": "We employ the Gibbs sampling algorithm (Gilks et al., 1996) . Unlike in (Mare\u010dek and\u017dabokrtsk\u00fd, 2012) , where edges were sampled individually, we sample whole trees from all possibilities on a given sentence using dynamic programming. The algorithm works as follows:", "cite_spans": [ { "start": 39, "end": 59, "text": "(Gilks et al., 1996)", "ref_id": "BIBREF11" }, { "start": 72, "end": 101, "text": "(Mare\u010dek and\u017dabokrtsk\u00fd, 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5" }, { "text": "1. A random projective dependency tree is assigned to each sentence in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5" }, { "text": "We go through the sentences in a random order. For each sentence, we sample a new dependency tree based on all other trees that are currently in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling:", "sec_num": "2." }, { "text": "Step 2 is repeated in many iterations. In this work, the number of iterations was set to 1000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "4. 
After the burn-in period (which was set to the first 500 iterations), we start collecting counts of edges between particular words that appeared during the sampling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Parsing: Based on the collected counts, we compute the final dependency trees using the Chu-Liu/Edmonds' algorithm (1965) for finding maximum directed spanning trees.", "cite_spans": [ { "start": 88, "end": 121, "text": "Chu-Liu/Edmonds' algorithm (1965)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "5.", "sec_num": null }, { "text": "Our goal is to sample a new projective dependency tree T with probability proportional to P tree (T ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "Since the factors are exchangeable, we can deal with any tree as if it was the last one in the corpus. We use dynamic programming to sample a tree with N nodes in O(N 4 ) time. Nevertheless, we sample trees using a modified probability P tree (T ). In P tree (T ), the probability of an edge depends on counts of all other edges, including the edges in the same tree. We instead use P tree (T ), where the counts are computed using only the other trees in the corpus, i.e., probabilities of edges of T are independent. There is a standard way to sample using the real P tree (T ) -we can use P tree (T ) as a proposal distribution in the Metropolis-Hastings algorithm (Hastings, 1970) , which then produces trees with probabilities proportional to P tree (T ) using acceptance-rejection scheme. We do not take this approach and we sample proportionally to P tree (T ) only, because we believe that for large enough corpora, the two distributions are nearly identical.", "cite_spans": [ { "start": 668, "end": 684, "text": "(Hastings, 1970)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "To sample a tree containing words w 1 , . . . , w N with probability proportional to P tree (T ), we first compute three tables:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "\u2022 t i (g, i, j) for g < i or g > j is the sum of probabilities of any tree on words w i , . . . , w j whose root is a child of w g , but not an outermost child in its direction; \u2022 t o (g, i, j) is the same, but the tree is the outermost child of w g ; \u2022 f o (g, i, j) for g < i or g > j is the sum of probabilities of any forest on words w i , . . . , w j , such that all the trees are children of w g and are the outermost children of w g in their direction. All the probabilities are computed using the P tree . If we compute the tables inductively from the smallest trees to the largest trees, we can precompute all the O(N 3 ) values in O(N 4 ) time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "Using these tables, we sample the tree recursively, starting from the root. At first, we sample the root r proportionally to the probability of a tree with the root r, which is a product of the probability of left children of r and right children of r. 
The probability of left children of r is either P stop (STOP |r, left) if r has no children, or P stop (\u00acSTOP |r, left)f o (r, 1, r \u2212 1) otherwise; the probability of right children is analogous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "After sampling the root, we sample the ranges of its left children, if any. We sample the first left child range l 1 proportionally either to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "t o (r, 1, r\u22121) if l 1 = 1, or to t i (r, l 1 , r \u2212 1)f o (r, 1, l 1 \u2212 1) if l 1 > 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "Then we sample the second left child range l 2 proportionally either to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "t o (r, 1, l 1 \u2212 1) if l 2 = 1, or to t i (r, l 2 , l 1 \u2212 1)f o (r, 1, l 2 \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "if l 2 > 1, and so on, while there are any left children. The right children ranges are sampled similarly. Finally, we recursively sample the children, i.e., their roots, their children and so on. It is simple to verify using the definition of P tree that the described method indeed samples trees proportionally to P tree .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sampling", "sec_num": "5.1" }, { "text": "Beginning the 500th iteration, we start collecting counts of individual dependency edges during the remaining iterations. After each iteration is finished (all the trees in the corpus are re-sampled), we increment the counter of all directed pairs of nodes which are connected by a dependency edge in the current trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "5.2" }, { "text": "After the last iteration, we use these collected counts as weights and compute maximum directed spanning trees using the Chu-Liu/Edmonds' algorithm (Chu and Liu, 1965) . Therefore, the resulting trees consist of edges maximizing the sum of individual counts:", "cite_spans": [ { "start": 148, "end": 167, "text": "(Chu and Liu, 1965)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "5.2" }, { "text": "T M ST = arg max T e\u2208T count(e)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "5.2" }, { "text": "It is important to note that the MST algorithm may produce non-projective trees. Even if we average the strictly projective dependency trees, some non-projective edges may appear in the result. This might be an advantage since correct non-projective edges can be predicted; however, this relaxation may introduce mistakes as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing", "sec_num": "5.2" }, { "text": "We use two types of resources in our experiments. The first type are CoNLL treebanks from the year 2006 (Buchholz and Marsi, 2006) and 2007 (Nivre et al., 2007) , which we use for inference and for evaluation. As is the standard practice in unsupervised parsing evaluation, we removed all punctuation marks from the trees. In case a punctuation node was not a leaf, its children are attached to the parent of the removed node. 
For estimating the STOP probabilities (Section 3), we use the Wikipedia articles from W2C corpus (Majli\u0161 and\u017dabokrtsk\u00fd, 2012) , which provide sufficient amount of data for our purposes. Statistics across languages are shown in Table 1 .", "cite_spans": [ { "start": 104, "end": 130, "text": "(Buchholz and Marsi, 2006)", "ref_id": "BIBREF4" }, { "start": 140, "end": 160, "text": "(Nivre et al., 2007)", "ref_id": "BIBREF22" }, { "start": 524, "end": 552, "text": "(Majli\u0161 and\u017dabokrtsk\u00fd, 2012)", "ref_id": null } ], "ref_spans": [ { "start": 654, "end": 661, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "The Wikipedia texts were automatically tokenized and segmented to sentences so that their tokenization was similar to the one in the CoNLL evaluation treebanks. Unfortunately, we were not able to find any segmenter for Chinese that would produce a desired segmentation; therefore, we removed Chinese from evaluation. The next step was to provide the Wikipedia texts with POS tags. We employed the TnT tagger (Brants, 2000) which was trained on the re- spective CoNLL training data. The quality of such tagging is not very high since we do not use any lexicons or pretrained models. However, it is sufficient for obtaining usable stop probability estimates.", "cite_spans": [ { "start": 408, "end": 422, "text": "(Brants, 2000)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "6.1" }, { "text": "We applied the algorithm described in Section 3 on the prepared Wikipedia corpora and obtained the stop-probabilities P est stop in both directions for all the languages and their POS tags. To evaluate the quality of our estimations, we compare them with P tb stop , the stop probabilities computed directly on the evaluation treebanks. The comparisons on five selected languages are shown in Figure 3 . The individual points represent the individual POS tags, their size (area) shows their frequency in the particular treebank. The y-axis shows the stop probabilities estimated on Wikipedia by our algorithm, while the x-axis shows the stop probabilities computed on the evaluation CoNLL data. Ideally, the computed and estimated stop probabilities should be the same, i.e. all the points should be on the diagonal.", "cite_spans": [], "ref_spans": [ { "start": 393, "end": 401, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Estimated STOP probabilities", "sec_num": "6.2" }, { "text": "Let's focus on the graphs for English. Our method correctly recognizes that adverbs RB and adjectives JJ are often leaves (their stop probabilities in both directions are very high). Moreover, the estimates for RB are even higher than JJ, which will contribute to attaching adverbs to adjectives and not reversely. Nouns (NN, NNS) are somewhere in the middle, the stop probabilities for proper nouns (NNP) are estimated higher, which is correct since they have much less modifiers then the common nouns NN. The determiners are more problematic. Their estimated stop probability is not very high (about 0.65), while in the real treebank they are almost always leaves. This is caused by the fact that determiners are often obligatory in English and cannot be simply removed as, e.g., adjectives. The stop probabilities of prepositions (IN) are also very well recognized. While their left-stop is very probable (prepositions always start prepositional phrases), their right-stop probability is very low. 
The verbs (the most frequent verbal tag is VBD) have very low both right and left-stop probabilities. Our estimation assigns them the stop probability about 0.3 in both directions. This is quite high, but still, it is one of the lowest among other more frequent tags, and thus verbs tend to be the roots of the dependency trees. We could make similar analyses for other languages, but due to space reasons we only provide graphs for Czech, German, Spanish, and Hungarian in Figure 3 .", "cite_spans": [ { "start": 321, "end": 330, "text": "(NN, NNS)", "ref_id": null } ], "ref_spans": [ { "start": 1475, "end": 1483, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Estimated STOP probabilities", "sec_num": "6.2" }, { "text": "After a manual tuning, we have set our hyperparameters to the following values:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "6.3" }, { "text": "\u03b1 c = 50, \u03b1 s = 1, \u03b2 = 1/3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "6.3" }, { "text": "We have also found that the Gibbs sampler does not always converge to a similar grammar. For a couple of languages, the individual runs end up with very different trees. To prevent such differences, we run each inference 50 times and take the run with the highest final P treebank (see Section 4) for the evaluation. Table 2 shows the results of our unsupervised parser and compares them with results previously reported in other works. In order to see the impact of using the estimated stop probabilities (using model P dmv +est stop ), we provide results for classical DMV (using model P dmv stop ) as well. We do not provide results for Chinese since we do not have any appropriate tokenizer at our disposal (see Section 3), and also for Turkish from CoNLL 2006 since the data is not available to us.", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 324, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Settings", "sec_num": "6.3" }, { "text": "We now focus on the third and fourth column of Table 2 . The addition of estimated stop probabilities based on large corpora improves the parsing accuracy on 15 out of 20 treebanks. In many cases, the improvement is substantial, which means that the estimated stop probabilities forced the model to completely rebuild the structures. For example, in Bulgarian, if the P dmv stop model is used, all the prepositions are leaves and the verbs seldom govern sentences. If the P dmv +est stop model is used, prepositions correctly govern nouns and verbs move to roots. We observe similar changes on Swedish as well. Unfortunately, there are also negative examples, such as Hungarian, where the addition of the estimated stop probabilities decreases the attachment score from 60.1% to 34%. This is probably caused by not very good estimates of the right-stop probability (see the last graph in Figure 3 ). Nevertheless, the estimated stop probabilities increase the average score over all the treebanks by more than 12% and therefore prove its usefulness.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 54, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 888, "end": 896, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "6.4" }, { "text": "In the last two columns of Table 2 , we provide results of two other works reported in the last year. The first one (spi12) is the DMV-based grammar inducer by Spitkovsky et al. 
2012, 4 the second one (mar12) is our previous work (Mare\u010dek an\u010f Zabokrtsk\u00fd, 2012) . Comparing with (Spitkovsky et al., 2012), our parser reached better accuracy on 12 out of 20 treebanks. Although this might not seem as a big improvement, if we compare the average scores over the treebanks, our system significantly wins by more than 6%. The second system (mar12) outperforms our parser only on one treebank (on Italian by less than 3%) and its average score over all the treebanks is only 40%, i.e., more than 8% lower than the average score of our parser.", "cite_spans": [ { "start": 230, "end": 260, "text": "(Mare\u010dek an\u010f Zabokrtsk\u00fd, 2012)", "ref_id": null } ], "ref_spans": [ { "start": 27, "end": 34, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "6.4" }, { "text": "To see the theoretical upper bound of our model performance, we replaced the P est stop estimates by the P tb stop estimates computed from the evaluation treebanks and run the same inference algorithm with the same setting. The average attachment score of such reference DMV is almost 65%. This shows a huge space in which the estimation of STOP probabilities could be further improved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.4" }, { "text": "In this work, we studied the possibility of estimating the DMV stop-probabilities from a large raw corpus. We proved that such prior knowledge about stop-probabilities incorporated into the standard DMV model significantly improves the unsupervised dependency parsing and, since we are not aware of any other fully unsupervised dependency parser with higher average attachment score over CoNLL data, we state that we reached a new stateof-the-art result. 5 , which incorporates STOP estimations based on reducibility principle. The reference DMV uses P tb stop , which are computed directly on the treebanks. The results reported in previous works by Spitkovsky et al. 2012, and Mare\u010dek and\u017dabokrtsk\u00fd (2012) follows.", "cite_spans": [ { "start": 675, "end": 707, "text": "and Mare\u010dek and\u017dabokrtsk\u00fd (2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "7" }, { "text": "In future work, we would like to focus on unsupervised parsing without gold POS tags (see e.g. Spitkovsky et al. (2011a) and Christodoulopoulos et al. (2012) ). We suppose that many of the current works on unsupervised dependency parsers use gold POS tags only as a simplification of this task, and that the ultimate purpose of this effort is to develop a fully unsupervised induction of linguistic structure from raw texts that would be useful across many languages, domains, and applications.", "cite_spans": [ { "start": 125, "end": 157, "text": "Christodoulopoulos et al. (2012)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "CoNLL", "sec_num": null }, { "text": "The software which implements the algorithms described in this paper, together with P est stop estimations computed on Wikipedia texts, can be downloaded at http://ufal.mff.cuni.cz/\u02dcmarecek/udp/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CoNLL", "sec_num": null }, { "text": "The adjacent-word baseline is a dependency tree in which each word is attached to the previous (or the following) word. 
The attachment score of 35.9% on all the WSJ test sentences was taken from(Blunsom and Cohn, 2010).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Pstop probability may be conditioned by additional parameters, such as adjacency adj or fringe word c f , which will be described in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Possibly the current state-of-the-art results. They were compared with many previous works.5 A possible competitive work may be the work by Blunsom and Cohn (2010), who reached 55% accuracy on English as well. However, they do not provide scores measured on other CoNLL treebanks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work has been supported by the AMALACH grant (DF12P01OVV02) of the Ministry of Culture of the Czech Republic.Data and some tools used as a prerequisite for the research described herein have been provided by the LINDAT/CLARIN Large Infrastructural project, No. LM2010013 of the Ministry of Education, Youth and Sports of the Czech Republic.We would like to thank Martin Popel, Zden\u011bk\u017dabokrtsk\u00fd, Rudolf Rosa, and three anonymous reviewers for many useful comments on the manuscript of this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Trainable grammars for speech recognition", "authors": [ { "first": "References", "middle": [], "last": "James", "suffix": "" }, { "first": "K", "middle": [], "last": "Baker", "suffix": "" } ], "year": 1979, "venue": "Speech communication papers presented at the 97th Meeting of the Acoustical Society", "volume": "", "issue": "", "pages": "547--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "References James K. Baker. 1979. Trainable grammars for speech recognition. In Speech communication papers presented at the 97th Meeting of the Acoustical Society, pages 547- 550.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Induction of linguistic structure with combinatory categorial grammars", "authors": [ { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2012, "venue": "The NAACL-HLT Workshop on the Induction of Linguistic Structure", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonatan Bisk and Julia Hockenmaier. 2012. Induction of lin- guistic structure with combinatory categorial grammars. The NAACL-HLT Workshop on the Induction of Linguistic Structure, page 90.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised induction of tree substitution grammars for dependency parsing", "authors": [ { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10", "volume": "", "issue": "", "pages": "1204--1213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phil Blunsom and Trevor Cohn. 2010. Unsupervised induc- tion of tree substitution grammars for dependency pars- ing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1204-1213, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TnT -A Statistical Part-of-Speech Tagger", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the sixth conference on Applied natural language processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants. 2000. TnT -A Statistical Part-of-Speech Tagger. Proceedings of the sixth conference on Applied natural language processing, page 8.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "CoNLL-X shared task on multilingual dependency parsing", "authors": [ { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X '06", "volume": "", "issue": "", "pages": "149--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Lan- guage Learning, CoNLL-X '06, pages 149-164, Strouds- burg, PA, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction", "authors": [ { "first": "Christos", "middle": [], "last": "Christodoulopoulos", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure", "volume": "", "issue": "", "pages": "96--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Christodoulopoulos, Sharon Goldwater, and Mark Steedman. 2012. Turning the pipeline into a loop: Iter- ated unsupervised dependency parsing and PoS induction. In Proceedings of the NAACL-HLT Workshop on the In- duction of Linguistic Structure, pages 96-99, June.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "On the Shortest Arborescence of a Directed Graph", "authors": [ { "first": "Y", "middle": [ "J" ], "last": "Chu", "suffix": "" }, { "first": "T", "middle": [ "H" ], "last": "Liu", "suffix": "" } ], "year": 1965, "venue": "Science Sinica", "volume": "14", "issue": "", "pages": "1396--1400", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. J. Chu and T. H. Liu. 1965. On the Shortest Arborescence of a Directed Graph. Science Sinica, 14:1396-1400.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Logistic normal priors for unsupervised probabilistic grammar induction", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Gimpel", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2008, "venue": "Neural Information Processing Systems", "volume": "", "issue": "", "pages": "321--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen, Kevin Gimpel, and Noah A. Smith. 2008. Logistic normal priors for unsupervised probabilistic grammar induction. 
In Neural Information Processing Systems, pages 321-328.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised structure prediction with non-parallel multilingual guidance", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Das", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "50--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised structure prediction with non-parallel mul- tilingual guidance. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 50-61, Stroudsburg, PA, USA. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Sentence compression beyond word deletion", "authors": [ { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "137--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trevor Cohn and Mirella Lapata. 2008. Sentence compres- sion beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 137-144, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Defining dependencies (and constituents)", "authors": [ { "first": "Kim", "middle": [], "last": "Gerdes", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Kahane", "suffix": "" } ], "year": 2011, "venue": "Proceedings of Dependency Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim Gerdes and Sylvain Kahane. 2011. Defining depen- dencies (and constituents). In Proceedings of Dependency Linguistics 2011, Barcelona.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Markov chain Monte Carlo in practice. Interdisciplinary statistics", "authors": [ { "first": "R", "middle": [], "last": "Walter", "suffix": "" }, { "first": "S", "middle": [], "last": "Gilks", "suffix": "" }, { "first": "David", "middle": [ "J" ], "last": "Richardson", "suffix": "" }, { "first": "", "middle": [], "last": "Spiegelhalter", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Walter R. Gilks, S. Richardson, and David J. Spiegelhalter. 1996. Markov chain Monte Carlo in practice. Interdisci- plinary statistics. Chapman & Hall.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Monte carlo sampling methods using markov chains and their applications", "authors": [ { "first": "W", "middle": [], "last": "", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hastings", "suffix": "" } ], "year": 1970, "venue": "Biometrika", "volume": "57", "issue": "1", "pages": "97--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Keith Hastings. 1970. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):pp. 
97-109.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improving unsupervised dependency parsing with richer contexts and smoothing", "authors": [ { "first": "P", "middle": [], "last": "William", "suffix": "" }, { "first": "Iii", "middle": [], "last": "Headden", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09", "volume": "", "issue": "", "pages": "101--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "William P. Headden III, Mark Johnson, and David McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proceedings of Hu- man Language Technologies: The 2009 Annual Confer- ence of the North American Chapter of the Association for Computational Linguistics, NAACL '09, pages 101- 109, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Corpusbased induction of syntactic structure: models of dependency and constituency", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2004. Corpus- based induction of syntactic structure: models of depen- dency and constituency. In Proceedings of the 42nd An- nual Meeting on Association for Computational Linguis- tics, ACL '04, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Summarization beyond sentence extraction: a probabilistic approach to sentence compression", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2002, "venue": "Artif. Intell", "volume": "139", "issue": "1", "pages": "91--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Daniel Marcu. 2002. Summarization be- yond sentence extraction: a probabilistic approach to sen- tence compression. Artif. Intell., 139(1):91-107, July.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Dependency Parsing. Synthesis Lectures on Human Language Technologies", "authors": [ { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Ryan", "middle": [ "T" ], "last": "Mcdonald", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandra K\u00fcbler, Ryan T. McDonald, and Joakim Nivre. 2009. Dependency Parsing. Synthesis Lectures on Human Lan- guage Technologies. 
Morgan & Claypool Publishers.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Modeling syntax of free word-order languages: Dependency analysis by reduction", "authors": [ { "first": "Mark\u00e9ta", "middle": [], "last": "Lopatkov\u00e1", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Pl\u00e1tek", "suffix": "" }, { "first": "Vladislav", "middle": [], "last": "Kubo\u0148", "suffix": "" } ], "year": 2005, "venue": "Lecture Notes in Artificial Intelligence, Proceedings of the 8th International Conference, TSD 2005", "volume": "3658", "issue": "", "pages": "140--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark\u00e9ta Lopatkov\u00e1, Martin Pl\u00e1tek, and Vladislav Kubo\u0148. 2005. Modeling syntax of free word-order languages: Dependency analysis by reduction. In V\u00e1clav Matou\u0161ek, Pavel Mautner, and Tom\u00e1\u0161 Pavelka, editors, Lecture Notes in Artificial Intelligence, Proceedings of the 8th Interna- tional Conference, TSD 2005, volume 3658 of Lecture Notes in Computer Science, pages 140-147, Berlin / Hei- delberg. Springer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Language richness of the web", "authors": [ { "first": "Martin", "middle": [], "last": "Majli\u0161", "suffix": "" }, { "first": "", "middle": [], "last": "Zden\u011bk\u017eabokrtsk\u00fd", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC 2012)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Majli\u0161 and Zden\u011bk\u017dabokrtsk\u00fd. 2012. Language richness of the web. In Proceedings of the Eight Interna- tional Conference on Language Resources and Evaluation (LREC 2012), Istanbul, Turkey, May. European Language Resources Association (ELRA).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Exploiting reducibility in unsupervised dependency parsing", "authors": [ { "first": "David", "middle": [], "last": "Mare\u010dek", "suffix": "" }, { "first": "", "middle": [], "last": "Zden\u011bk\u017eabokrtsk\u00fd", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "12", "issue": "", "pages": "297--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mare\u010dek and Zden\u011bk\u017dabokrtsk\u00fd. 2012. Exploiting reducibility in unsupervised dependency parsing. In Pro- ceedings of the 2012 Joint Conference on Empirical Meth- ods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 297-307, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Multisource transfer of delexicalized dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "62--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi- source transfer of delexicalized dependency parsers. 
In Proceedings of the 2011 Conference on Empirical Meth- ods in Natural Language Processing, pages 62-72, Edin- burgh, Scotland, UK., July. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Using universal linguistic knowledge to guide grammar induction", "authors": [ { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Harr", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10", "volume": "", "issue": "", "pages": "1234--1244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark John- son. 2010. Using universal linguistic knowledge to guide grammar induction. In Proceedings of the 2010 Con- ference on Empirical Methods in Natural Language Pro- cessing, EMNLP '10, pages 1234-1244, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Shared Task on Dependency Parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "915--932", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Sandra K\u00fcbler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 Shared Task on Dependency Parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915-932, Prague, Czech Re- public, June. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Fast unsupervised dependency parsing with arc-standard transitions", "authors": [ { "first": "Mohammad", "middle": [], "last": "Sadegh Rasooli", "suffix": "" }, { "first": "Heshaam", "middle": [], "last": "Faili", "suffix": "" } ], "year": 2012, "venue": "Proceedings of ROBUS-UNSUP", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Sadegh Rasooli and Heshaam Faili. 2012. Fast unsupervised dependency parsing with arc-standard tran- sitions. In Proceedings of ROBUS-UNSUP, pages 1-9.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Novel estimation methods for unsupervised discovery of latent structure in natural language text", "authors": [ { "first": "Noah Ashton", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah Ashton Smith. 2007. Novel estimation methods for unsupervised discovery of latent structure in natu- ral language text. Ph.D. thesis, Baltimore, MD, USA. 
AAI3240799.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "From ranked words to dependency trees: two-stage unsupervised non-projective dependency parsing", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2011, "venue": "Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing", "volume": "", "issue": "", "pages": "60--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard. 2011. From ranked words to dependency trees: two-stage unsupervised non-projective dependency parsing. In Proceedings of TextGraphs-6: Graph-based Methods for Natural Language Processing, TextGraphs- 6, pages 60-68, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "From baby steps to leapfrog: how \"less is more\" in unsupervised dependency parsing", "authors": [ { "first": "I", "middle": [], "last": "Valentin", "suffix": "" }, { "first": "Hiyan", "middle": [], "last": "Spitkovsky", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10", "volume": "", "issue": "", "pages": "751--759", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010. From baby steps to leapfrog: how \"less is more\" in unsupervised dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 751-759, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Unsupervised dependency parsing without gold part-of-speech tags", "authors": [ { "first": "I", "middle": [], "last": "Valentin", "suffix": "" }, { "first": "Hiyan", "middle": [], "last": "Spitkovsky", "suffix": "" }, { "first": "Angel", "middle": [ "X" ], "last": "Alshawi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, Angel X. Chang, and Daniel Jurafsky. 2011a. Unsupervised dependency pars- ing without gold part-of-speech tags. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Punctuation: Making a point in unsupervised dependency parsing", "authors": [ { "first": "I", "middle": [], "last": "Valentin", "suffix": "" }, { "first": "Hiyan", "middle": [], "last": "Spitkovsky", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Juraf- sky. 2011b. 
Punctuation: Making a point in unsupervised dependency parsing. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning (CoNLL-2011).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Three Dependency-and-Boundary Models for Grammar Induction", "authors": [ { "first": "I", "middle": [], "last": "Valentin", "suffix": "" }, { "first": "Hiyan", "middle": [], "last": "Spitkovsky", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2012. Three Dependency-and-Boundary Models for Grammar Induction. In Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012).", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Example of a dependency tree. Sequences of words that can be reduced are underlined.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Example of reducible sequences of words found in a large corpus.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Comparison of P^{est}_{stop} probabilities estimated from raw Wikipedia corpora (y-axis) and of P^{tb}_{stop} probabilities computed from CoNLL treebanks (x-axis). The area of each point shows the relative frequency of an individual tag.", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "text": "", "html": null, "type_str": "table", "num": null, "content": "
Wikipedia texts statistics: total number of tokens and number of reducible sequences found in them.
" }, "TABREF3": { "text": "Attachment scores on CoNLL 2006 and 2007 data. Standard deviations are provided in brackets. DMV model using standard P dmv stop probability is compared with DMV with P dmv +est stop", "html": null, "type_str": "table", "num": null, "content": "" } } } }