{ "paper_id": "Q19-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:09:36.540626Z" }, "title": "A Generative Model for Punctuation in Dependency Trees", "authors": [ { "first": "Lisa", "middle": [], "last": "Xiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "\u21e4", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "Dingquan", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Treebanks traditionally treat punctuation marks as ordinary words, but linguists have suggested that a tree's \"true\" punctuation marks are not observed (Nunberg, 1990). These latent \"underlying\" marks serve to delimit or separate constituents in the syntax tree. When the tree's yield is rendered as a written sentence, a string rewriting mechanism transduces the underlying marks into \"surface\" marks, which are part of the observed (surface) string but should not be regarded as part of the tree. We formalize this idea in a generative model of punctuation that admits efficient dynamic programming. We train it without observing the underlying marks, by locally maximizing the incomplete data likelihood (similarly to the EM algorithm). When we use the trained model to reconstruct the tree's underlying punctuation, the results appear plausible across 5 languages, and in particular are consistent with Nunberg's analysis of English. We show that our generative model can be used to beat baselines on punctuation restoration. Also, our reconstruction of a sentence's underlying punctuation lets us appropriately render the surface punctuation (via our trained underlying-tosurface mechanism) when we syntactically transform the sentence.", "pdf_parse": { "paper_id": "Q19-1023", "_pdf_hash": "", "abstract": [ { "text": "Treebanks traditionally treat punctuation marks as ordinary words, but linguists have suggested that a tree's \"true\" punctuation marks are not observed (Nunberg, 1990). These latent \"underlying\" marks serve to delimit or separate constituents in the syntax tree. When the tree's yield is rendered as a written sentence, a string rewriting mechanism transduces the underlying marks into \"surface\" marks, which are part of the observed (surface) string but should not be regarded as part of the tree. We formalize this idea in a generative model of punctuation that admits efficient dynamic programming. We train it without observing the underlying marks, by locally maximizing the incomplete data likelihood (similarly to the EM algorithm). When we use the trained model to reconstruct the tree's underlying punctuation, the results appear plausible across 5 languages, and in particular are consistent with Nunberg's analysis of English. We show that our generative model can be used to beat baselines on punctuation restoration. 
Also, our reconstruction of a sentence's underlying punctuation lets us appropriately render the surface punctuation (via our trained underlying-tosurface mechanism) when we syntactically transform the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Punctuation enriches the expressiveness of written language. When converting from spoken to written language, punctuation indicates pauses or pitches; expresses propositional attitude; and is conventionally associated with certain syntactic constructions such as apposition, parenthesis, quotation, and conjunction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a latent-variable model of punctuation usage, inspired by the rulebased approach to English punctuation of Nunberg (1990) . Training our model on English data \u21e4 Equal contribution. learns rules that are consistent with Nunberg's hand-crafted rules. Our system is automatic, so we use it to obtain rules for Arabic, Chinese, Spanish, and Hindi as well.", "cite_spans": [ { "start": 133, "end": 147, "text": "Nunberg (1990)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Moreover, our rules are stochastic, which allows us to reason probabilistically about ambiguous or missing punctuation. Across the 5 languages, our model predicts surface punctuation better than baselines, as measured both by perplexity ( \u00a74) and by accuracy on a punctuation restoration task ( \u00a7 6.1). We also use our model to correct the punctuation of non-native writers of English ( \u00a7 6.2), and to maintain natural punctuation style when syntactically transforming English sentences ( \u00a7 6.3) . In principle, our model could also be used within a generative parser, allowing the parser to evaluate whether a candidate tree truly explains the punctuation observed in the input sentence ( \u00a78).", "cite_spans": [ { "start": 487, "end": 495, "text": "( \u00a7 6.3)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Punctuation is interesting In The Linguistics of Punctuation, Nunberg (1990) argues that punctuation (in English) is more than a visual counterpart of spoken-language prosody, but forms a linguistic system that involves \"interactions of point indicators (i.e. 
commas, semicolons, colons, periods and dashes).\" He proposes that much as in phonology (Chomsky and Halle, 1968 ), a grammar generates underlying punctuation which then transforms into the observed surface punctuation.", "cite_spans": [ { "start": 62, "end": 76, "text": "Nunberg (1990)", "ref_id": "BIBREF49" }, { "start": 348, "end": 372, "text": "(Chomsky and Halle, 1968", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider generating a sentence from a syntactic grammar as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hail the king [, Arthur Pendragon ,] [, who wields [ \" Excalibur \" ] ,] .", "cite_spans": [ { "start": 14, "end": 36, "text": "[, Arthur Pendragon ,]", "ref_id": null }, { "start": 51, "end": 68, "text": "[ \" Excalibur \" ]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the full tree is not depicted here, some of the constituents are indicated with brackets. In this underlying generated tree, each appositive NP is surrounded by commas. On the surface, however, the two adjacent commas after Pendragon will now be collapsed into one, and the final comma will be absorbed into the adjacent period. Furthermore, in American English, the typographic convention is to move the final punctuation inside the quotation marks. Thus a reader sees only this modified surface form of the sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hail the king, Arthur Pendragon, who wields \"Excalibur.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Note that these modifications are string transformations that do not see or change the tree. The resulting surface punctuation marks may be clues to the parse tree, but (contrary to NLP convention) they should not be included as nodes in the parse tree. Only the underlying marks play that role.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Punctuation is meaningful Pang et al. (2002) use question and exclamation marks as clues to sentiment. Similarly, quotation marks may be used to mark titles, quotations, reported speech, or dubious terminology (University of Chicago, 2010) . Because of examples like this, methods for determining the similarity or meaning of syntax trees, such as a tree kernel (Agarwal et al., 2011) or a recursive neural network (Tai et al., 2015) , should ideally be able to consider where the underlying punctuation marks attach.", "cite_spans": [ { "start": 26, "end": 44, "text": "Pang et al. (2002)", "ref_id": "BIBREF50" }, { "start": 225, "end": 239, "text": "Chicago, 2010)", "ref_id": null }, { "start": 362, "end": 384, "text": "(Agarwal et al., 2011)", "ref_id": "BIBREF0" }, { "start": 415, "end": 433, "text": "(Tai et al., 2015)", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Punctuation is helpful Surface punctuation remains correlated with syntactic phrase structure. NLP systems for generating or editing text must be able to deploy surface punctuation as human writers do. Parsers and grammar induction systems benefit from the presence of surface punctuation marks (Jones, 1994; Spitkovsky et al., 2011) . 
It is plausible that they could do better with a linguistically informed model that explains exactly why the surface punctuation appears where it does. Patterns of punctuation usage can also help identify the writer's native language (Markov et al., 2018) .", "cite_spans": [ { "start": 295, "end": 308, "text": "(Jones, 1994;", "ref_id": "BIBREF24" }, { "start": 309, "end": 333, "text": "Spitkovsky et al., 2011)", "ref_id": "BIBREF56" }, { "start": 570, "end": 591, "text": "(Markov et al., 2018)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Punctuation is neglected Work on syntax and parsing tends to treat punctuation as an afterthought rather than a phenomenon governed by its own linguistic principles. Treebank annotation guidelines for punctuation tend to adopt simple heuristics like \"attach to the highest possible node that preserves projectivity\" (Bies et al., 1995; Nivre et al., 2018) . 1 Many dependency parsing works exclude punctuation from evaluation (Nivre et al., 2007b; Koo and Collins, 2010; Chen and Manning, 2014; Lei et al., 2014; Kiperwasser and Goldberg, 2016) , although some others retain punctuation (Nivre et al., 2007a; Goldberg and Elhadad, 2010; Dozat and Manning, 2017 ", "cite_spans": [ { "start": 316, "end": 335, "text": "(Bies et al., 1995;", "ref_id": "BIBREF4" }, { "start": 336, "end": 355, "text": "Nivre et al., 2018)", "ref_id": "BIBREF48" }, { "start": 358, "end": 359, "text": "1", "ref_id": null }, { "start": 426, "end": 447, "text": "(Nivre et al., 2007b;", "ref_id": "BIBREF47" }, { "start": 448, "end": 470, "text": "Koo and Collins, 2010;", "ref_id": "BIBREF29" }, { "start": 471, "end": 494, "text": "Chen and Manning, 2014;", "ref_id": "BIBREF6" }, { "start": 495, "end": 512, "text": "Lei et al., 2014;", "ref_id": "BIBREF31" }, { "start": 513, "end": 544, "text": "Kiperwasser and Goldberg, 2016)", "ref_id": "BIBREF26" }, { "start": 587, "end": 608, "text": "(Nivre et al., 2007a;", "ref_id": "BIBREF46" }, { "start": 609, "end": 636, "text": "Goldberg and Elhadad, 2010;", "ref_id": "BIBREF20" }, { "start": 637, "end": 660, "text": "Dozat and Manning, 2017", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "NOISYCHANNEL u 0 u 1 u 2 u 3 u 4 x 0 x 1 x 2 x 3 x 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\" Dale \" means \" river valley \" .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\" Dale \" means \" river valley . \" root. nsubj \" \" dobj \" \"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: The generative story of a sentence. Given an unpunctuated tree T at top, at each node w 2 T , the ATTACH process stochastically attaches a left puncteme l and a right puncteme r, which may be empty. The resulting tree T 0 has underlying punctuation u. 
Each slot's punctuation u i 2 u is rewritten to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "x i 2 x by NOISYCHANNEL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In tasks such as word embedding induction (Mikolov et al., 2013; Pennington et al., 2014) and machine translation (Zens et al., 2002) , punctuation marks are usually either removed or treated as ordinary words (\u0158eh\u016f\u0159ek and Sojka, 2010 ).", "cite_spans": [ { "start": 42, "end": 64, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF41" }, { "start": 65, "end": 89, "text": "Pennington et al., 2014)", "ref_id": "BIBREF52" }, { "start": 114, "end": 133, "text": "(Zens et al., 2002)", "ref_id": "BIBREF64" }, { "start": 210, "end": 234, "text": "(\u0158eh\u016f\u0159ek and Sojka, 2010", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Yet to us, building a parse tree on a surface sentence seems as inappropriate as morphologically segmenting a surface word. In both cases, one should instead analyze the latent underlying form, jointly with recovering that form. For example, the proper segmentation of English hoping is not hop-ing but hope-ing (with underlying e), and the proper segmentation of stopping is neither stopp-ing nor stop-ping but stop-ing (with only one underlying p). Cotterell et al. (2015 Cotterell et al. ( , 2016 get this right for morphology. We attempt to do the same for punctuation.", "cite_spans": [ { "start": 451, "end": 473, "text": "Cotterell et al. (2015", "ref_id": "BIBREF9" }, { "start": 474, "end": 499, "text": "Cotterell et al. ( , 2016", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a probabilistic generative model of sentences ( Figure 1) :", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 68, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Formal Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(x) = P T,T 0 p syn (T ) \u2022 p \u2713 (T 0 | T ) \u2022 p (x |\u016b(T 0 ))", "eq_num": "(1)" } ], "section": "Formal Model", "sec_num": "2" }, { "text": "First, an unpunctuated dependency tree T is stochastically generated by some recursive process p syn (e.g., Eisner, 1996, Model C) . 2 Second, each constituent (i.e., dependency subtree) sprouts optional underlying punctuation at its left and right edges, according to a probability distribution p \u2713 that depends on the constituent's syntactic role (e.g., dobj for \"direct object\"). This punctuated tree T 0 yields the underlying string\u016b =\u016b(T 0 ), which is edited by a finite-state noisy channel p to arrive at the surface sentencex.", "cite_spans": [ { "start": 108, "end": 130, "text": "Eisner, 1996, Model C)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Formal Model", "sec_num": "2" }, { "text": "This third step may alter the sequence of punctuation tokens at each slot between words-for example, in \u00a71, collapsing the double comma , , between Pendragon and who. u and x denote just the punctuation at the slots of\u016b andx respectively, with u i and x i denoting the punctuation token sequences at the i th slot. Thus, the transformation at the i th slot is u i 7 ! 
x i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Model", "sec_num": "2" }, { "text": "Since this model is generative, we could train it without any supervision to explain the observed surface stringx: maximize the likelihood p(x) in (1), marginalizing out the possible T, T 0 values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Model", "sec_num": "2" }, { "text": "In the present paper, however, we exploit known T values (as observed in the \"depunctuated\" version of a treebank). Because T is observed, we can jointly train \u2713, to maximize just", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Model", "sec_num": "2" }, { "text": "p(x | T ) = X T 0 p \u2713 (T 0 | T ) \u2022 p (x | u(T 0 )) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Model", "sec_num": "2" }, { "text": "That is, the p syn model that generated T becomes irrelevant, but we still try to predict what surface punctuation will be added to T . We still marginalize over the underlying punctuation marks u. These are never observed, but they must explain the surface punctuation marks x ( \u00a7 2.2), and they must be explained in turn by the syntax tree T ( \u00a7 2.1). The trained generative model then lets us restore or correct punctuation in new trees T ( \u00a76).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formal Model", "sec_num": "2" }, { "text": "The ATTACH model characterizes the probability of an underlying punctuated tree T 0 given its corresponding unpunctuated tree T , which is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p \u2713 (T 0 | T ) = Y w2T p \u2713 (l w , r w | w)", "eq_num": "(3)" } ], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "where l w , r w 2 V are the left and right punctemes that T 0 attaches to the tree node w. Each puncteme (Krahn, 2014) in the finite set V is a string of 0 or more underlying punctuation tokens. 3 The probability p \u2713 (l, r | w) is given by a log-linear model 3 Multi-token punctemes are occasionally useful. For example, the puncteme ... might consist of either 1 or 3 tokens, depending on how the tokenizer works; similarly, the puncteme ?! might consist of 1 or 2 tokens. Also, if a single constituent of T gets surrounded by both parentheses and quotation marks, this gives rise to punctemes (\" and \"). (A better treatment would add the parentheses as a separate puncteme pair at a unary node above the quotation marks, but that would have required T 0 to introduce this extra node.) 1. Point Absorption 3. Period Absorption \" 7 !, ,. 7 !. -, 7 !-.? 7 !? .! 7 !! -; 7 !; ;. 7 !.", "cite_spans": [ { "start": 105, "end": 118, "text": "(Krahn, 2014)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "abbv. 7 !abbv 2. Quote Transposition 4. Bracket Absorptions \", 7 !,\" \". 7 !.\" ,) 7 !) -) 7 !) (, 7 !( ,\" 7 !\" \", 7 !\" Table 1 : Some of Nunberg's punctuation interaction rules in English, in priority order. The absorption rules ensure that when there are two adjacent tokens, the \"weaker\" one is deleted (where the strength ordering is {?, !, (, ), \", \"} > . 
> {;, :} > -> ,), except that bracketing tokens such as () and \"\" do not absorb tokens outside the material they bracket.", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "p \u2713 (l, r|w) / ( exp \u2713 > f (l, r, w) if (l, r) 2 W d(w) 0 otherwise (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "where V is the finite set of possible punctemes and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "W d \u2713 V 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "gives the possible puncteme pairs for a node w that has dependency relation d = d(w) to its parent. V and W d are estimated heuristically from the tokenized surface data ( \u00a74). f (l, r, w) is a sparse binary feature vector, and \u2713 is the corresponding parameter vector of feature weights. The feature templates in Appendix A 4 consider the symmetry between l and r, and their compatibility with (a) the POS tag of w's head word, (b) the dependency paths connecting w to its children and the root of T , (c) the POS tags of the words flanking the slots containing l and r, (d) surface punctuation already added to w's subconstituents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Underlying Punctuation", "sec_num": "2.1" }, { "text": "From the tree T 0 , we can read off the sequence of underlying punctuation tokens u i at each slot i between words. Namely, u i concatenates the right punctemes of all constituents ending at i with the left punctemes of all constituents starting at i (as illustrated by the examples in \u00a71 and Figure 1 ). The NOISYCHANNEL model then transduces u i to a surface token sequence x i , for each i = 0, . . . , n independently (where n is the sentence length).", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 301, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "Nunberg's formalism Much like Chomsky and Halle's (1968) phonological grammar of English, Nunberg's (1990) descriptive English punctuation grammar (Table 1 ) can be viewed computationally as a priority string rewriting system, or Markov algorithm (Markov, 1960; Caracciolo di Forino, 1968) . The system begins with a token string u.", "cite_spans": [ { "start": 30, "end": 56, "text": "Chomsky and Halle's (1968)", "ref_id": "BIBREF7" }, { "start": 90, "end": 106, "text": "Nunberg's (1990)", "ref_id": "BIBREF49" }, { "start": 247, "end": 261, "text": "(Markov, 1960;", "ref_id": "BIBREF39" }, { "start": 262, "end": 289, "text": "Caracciolo di Forino, 1968)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 147, "end": 155, "text": "(Table 1", "ref_id": null } ], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "abcde . ab 7 ! ab abcde . bc 7 ! b a bde . bd 7 ! db a dbe . be 7 ! e a d e Figure 2 : Editing abcde 7 ! ade with a sliding window. (When an absorption rule maps 2 tokens to 1, our diagram leaves blank space that is not part of the output string.) 
At each step, the left-to-right process has already committed to the green tokens as output; has not yet looked at the blue input tokens; and is currently considering how to (further) rewrite the black tokens. The right column shows the chosen edit.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "At each step it selects the highest-priority local rewrite rule that can apply, and applies it as far left as possible. When no more rules can apply, the final state of the string is returned as x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "Simplifying the formalism Markov algorithms are Turing complete. Fortunately, Johnson (1972) noted that in practice, phonological u 7 ! x maps described in this formalism can usually be implemented with finite-state transducers (FSTs).", "cite_spans": [ { "start": 78, "end": 92, "text": "Johnson (1972)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "For computational simplicity, we will formulate our punctuation model as a probabilistic FST (PFST)-a locally normalized left-to-right rewrite model (Cotterell et al., 2014) . The probabilities for each language must be learned, using gradient descent. Normally we expect most probabilities to be near 0 or 1, making the PFST nearly deterministic (i.e., close to a subsequential FST). However, permitting low-probability choices remains useful to account for typographical errors, dialectal differences, and free variation in the training corpus.", "cite_spans": [ { "start": 149, "end": 173, "text": "(Cotterell et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "Our PFST generates a surface string, but the invertibility of FSTs will allow us to work backwards when analyzing a surface string ( \u00a73).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "A sliding-window model Instead of having rule priorities, we apply Nunberg-style rules within a 2-token window that slides over u in a single leftto-right pass (Figure 2 ). Conditioned on the current window contents ab, a single edit is selected stochastically: either ab 7 ! ab (no change), ab 7 ! b (left absorption), ab 7 ! a (right absorption), or ab 7 ! ba (transposition). Then the window slides rightward to cover the next input token, together with the token that is (now) to its left. a and b are always real tokens, never boundary symbols. specifies the conditional edit probabilities. 5 These specific edit rules (like Nunberg's) cannot insert new symbols, nor can they delete all of the underlying symbols. 
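To make the sliding-window process concrete, the following minimal sketch (in Python; rewrite_slot and phi are our own illustrative names, not the paper's implementation, which compiles the same process into a PFST) applies a trained edit distribution to one slot's underlying token sequence:

    import random

    def rewrite_slot(u, phi):
        # u: list of underlying punctuation tokens at one slot
        # phi[(a, b)]: dict of probabilities for the four edits:
        #   'keep' (ab -> ab), 'absorb_left' (ab -> b),
        #   'absorb_right' (ab -> a), 'swap' (ab -> ba)
        if not u:
            return []                # an empty slot stays empty: no insertions are possible
        out, pending = [], u[0]      # 'pending' is the token still under the window
        for b in u[1:]:
            edits, probs = zip(*phi[(pending, b)].items())
            e = random.choices(edits, weights=probs)[0]
            if e == 'keep':              # commit a; window moves on to b
                out.append(pending); pending = b
            elif e == 'absorb_left':     # a is deleted by b
                pending = b
            elif e == 'absorb_right':    # b is deleted by a
                pass
            else:                        # 'swap': b surfaces first, a stays under the window
                out.append(b)
        out.append(pending)
        return out

Tracing this procedure on the abcde example reproduces Figure 2; note that it never inserts a token and can never delete the last remaining token.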
Thus, surface x i is a good clue to u i : all of its tokens must appear underlyingly, and if x i = \u270f (the empty string) then u i = \u270f.", "cite_spans": [ { "start": 596, "end": 597, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 160, "end": 169, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "The model can be directly implemented as a PFST (Appendix D 4 ) using Cotterell et al.'s (2014) more general PFST construction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "Our single-pass formalism is less expressive than Nunberg's. It greedily makes decisions based on at most one token of right context (\"label bias\"). It cannot rewrite '\".7 !.'\" or \",.7 !.\" because the . is encountered too late to percolate leftward; luckily, though, we can handle such English examples by sliding the window right-to-left instead of left-to-right. We treat the sliding direction as a language-specific parameter. 6", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Underlying to Surface", "sec_num": "2.2" }, { "text": "Building on equation 2, we train \u2713, to locally maximize the regularized conditional log-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Objective", "sec_num": "2.3" }, { "text": "likelihood \u21e3 X x,T log p(x | T ) \u21e0 \u2022 E T 0 [c(T 0 )] 2 \u2318 & \u2022 ||\u2713|| 2 (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Objective", "sec_num": "2.3" }, { "text": "where the sum is over a training treebank. 7 The expectation", "cite_spans": [ { "start": 43, "end": 44, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training Objective", "sec_num": "2.3" }, { "text": "E [\u2022 \u2022 \u2022 ] is over T 0 \u21e0 p(\u2022 | T, x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Objective", "sec_num": "2.3" }, { "text": ". This generalized expectation term provides posterior regularization (Mann and McCallum, 2010; Ganchev et al., 2010) , by encouraging parameters that reconstruct trees T 0 that use symmetric punctuation marks in a \"typical\" way. The function c(T 0 ) counts the nodes in T 0 whose punctemes contain \"unmatched\" symmetric punctuation tokens: for example, ) is \"matched\" only when it appears in a right puncteme with ( at the comparable position in the same constituent's left puncteme. The precise definition is given in Appendix B. 4 porated this insight would not have to learn O(|\u2303| 2 ) separate absorption probabilities (two per bigram ab), but only O(|\u2303|) strengths (one per unigram a, which may be regarded as a 1-dimensional embedding of the punctuation token a). We figured that the punctuation vocabulary \u2303 was small enough (Table 2 ) that we could manage without the additional complexity of embeddings or other featurization, although this does presumably hurt our generalization to rare bigrams. 6 We could have handled all languages uniformly by making 2 passes of the sliding window (via a composition of 2 PFSTs), with at least one pass in each direction. 7 In retrospect, there was no good reason to square the ET 0 [c(T 0 )] term. 
However, when we started redoing the experiments, we found the results essentially unchanged.", "cite_spans": [ { "start": 70, "end": 95, "text": "(Mann and McCallum, 2010;", "ref_id": "BIBREF38" }, { "start": 96, "end": 117, "text": "Ganchev et al., 2010)", "ref_id": "BIBREF18" }, { "start": 1007, "end": 1008, "text": "6", "ref_id": null }, { "start": 1170, "end": 1171, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 832, "end": 840, "text": "(Table 2", "ref_id": null } ], "eq_spans": [], "section": "Training Objective", "sec_num": "2.3" }, { "text": "In our development experiments on English, the posterior regularization term was necessary to discover an aesthetically appealing theory of underlying punctuation. When we dropped this term (\u21e0 = 0) and simply maximized the ordinary regularized likelihood, we found that the optimization problem was underconstrained: different training runs would arrive at different, rather arbitrary underlying punctemes. For example, one training run learned an ATTACH model that used underlying \". to terminate sentences, along with a NOISY-CHANNEL model that absorbed the left quotation mark into the period. By encouraging the underlying punctuation to be symmetric, we broke the ties. We also tried making this a hard constraint (\u21e0 = 1), but then the model was unable to explain some of the training sentences at all, giving them probability of 0. For example, I went to the \" special place \" cannot be explained, because special place is not a constituent. 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Objective", "sec_num": "2.3" }, { "text": "In principle, working with the model (1) is straightforward, thanks to the closure properties of formal languages. Provided that p syn can be encoded as a weighted CFG, it can be composed with the weighted tree transducer p \u2713 and the weighted FST p to yield a new weighted CFG (similarly to Bar-Hillel et al., 1961; Nederhof and Satta, 2003) . Under this new grammar, one can recover the optimal T, T 0 forx by dynamic programming, or sum over T, T 0 by the inside algorithm to get the likelihood p(x). A similar approach was used by Levy (2008) with a different FST noisy channel.", "cite_spans": [ { "start": 291, "end": 315, "text": "Bar-Hillel et al., 1961;", "ref_id": "BIBREF2" }, { "start": 316, "end": 341, "text": "Nederhof and Satta, 2003)", "ref_id": "BIBREF42" }, { "start": 534, "end": 545, "text": "Levy (2008)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "In this paper we assume that T is observed, allowing us to work with equation 2. This cuts the computation time from O(n 3 ) to O(n). 9 Whereas the inside algorithm for (1) must consider O(n 2 ) possible constituents ofx and O(n) ways of building each, our algorithm for (2) only needs to iterate over the O(n) true constituents of T and the 1 true way of building each. However, it must still consider the |W d | puncteme pairs for each constituent. 8 Recall that the NOISYCHANNEL model family ( \u00a7 2.2) requires the surface \" before special to appear underlyingly, and also requires the surface \u270f after special to be empty underlyingly. These hard constraints clash with the \u21e0 = 1 hard constraint that the punctuation around special must be balanced. The surface \" after place causes a similar problem: no edge can generate the matching underlying \". 
9 We do O(n) multiplications of N \u21e5 N matrices where Algorithm 1 The algorithm for scoring a given (T, x) pair. The code in blue is used during training to get the posterior regularization term in (5).", "cite_spans": [ { "start": 451, "end": 452, "text": "8", "ref_id": null }, { "start": 852, "end": 853, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "Input: T , x . Training pair (omits T 0 , u) Output: p(x | T ), E [c(T 0 )] 1: procedure TOTALSCORE(T , x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "2: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "for i = 1 to n do 3: compute WFSA (M i , i , \u21e2 i ) 4: E 0 . exp.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "M left ( Q w 0 2leftkids(w) IN(w 0 ))\u21e2 j 1 9: M right > j ( Q w 0 2rightkids(w) IN(w 0 ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "M 0 M left \u2022 1 \u2022 M right . R N j \u21e51 , R 1\u21e5N j 11: M 0 . R N i \u21e5N k 12: for (l, r) 2 W d(w) do 13: p p \u2713 (l, r | w) 14: M M + p \u2022 M i (l)M 0 M k (r)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "E E + p \u2022 l,r have unmatched punc 16: return M . R N i \u21e5N k 17: M root IN(root(T ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "18:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "return > 0 M root \u21e2 n , E . R, R", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "3" }, { "text": "Given an input sentencex of length n, our job is to sum over possible trees T 0 that are consistent with T andx, or to find the best such T 0 . This is roughly a lattice parsing problem-made easier by knowing T . However, the possible\u016b values are characterized not by a lattice but by a cyclic WFSA (as |u i | is unbounded whenever |x i | > 0). For each slot 0 \uf8ff i \uf8ff n, transduce the surface punctuation string x i by the inverted PFST for p to obtain a weighted finite-state automaton (WFSA) that describes all possible underlying strings u i . 10 This WFSA accepts each possible u i with weight p (x i | u i ). If it has N i states, we can represent it (Berstel and Reutenauer, 1988 ) with a family of sparse weight matrices M i ( ) 2 R N i \u21e5N i , whose element at row s and column t is the weight of the s ! t arc labeled with , or 0 if there is no such arc. Additional vectors i , \u21e2 i 2 R N i specify the initial and final weights. 
( i is one-hot if the PFST has a single initial state, of weight 1.)", "cite_spans": [ { "start": 655, "end": 684, "text": "(Berstel and Reutenauer, 1988", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "For any puncteme l (or r) in V, we define", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "M i (l) = M i (l 1 )M i (l 2 ) \u2022 \u2022 \u2022 M i (l |l| )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": ", a product over the 0 or more tokens in l. This gives the total weight of all s ! \u21e4 t WFSA paths labeled with l. The subprocedure in Algorithm 1 essentially extends this to obtain a new matrix IN(w) 2 R N i \u21e5N k , where the subtree rooted at w stretches from slot i to slot k. Its element IN(w) st gives the total weight of all extended paths in the\u016b WFSA from state s at slot i to state t at slot k. An extended path is defined by a choice of underlying punctemes at w and all its descendants. These punctemes determine an s-to-final path at i, then initial-to-final paths at i + 1 through k 1, then an initial-to-t path at k. The weight of the extended path is the product of all the WFSA weights on these paths (which correspond to transition probabilities in p PFST) times the probability of the choice of punctemes (from p \u2713 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "This inside algorithm computes quantities needed for training ( \u00a7 2.3). Useful variants arise via well-known methods for weighted derivation forests (Berstel and Reutenauer, 1988; Goodman, 1999; Li and Eisner, 2009; Eisner, 2016) .", "cite_spans": [ { "start": 149, "end": 179, "text": "(Berstel and Reutenauer, 1988;", "ref_id": "BIBREF3" }, { "start": 180, "end": 194, "text": "Goodman, 1999;", "ref_id": "BIBREF21" }, { "start": 195, "end": 215, "text": "Li and Eisner, 2009;", "ref_id": "BIBREF34" }, { "start": 216, "end": 229, "text": "Eisner, 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "Specifically, to modify Algorithm 1 to maximize over T 0 values ( \u00a7 \u00a7 6.2-6.3) instead of summing over them, we switch to the derivation semiring (Goodman, 1999) , as follows. Whereas IN(w) st used to store the total weight of all extended paths from state s at slot i to state t at slot j, now it will store the weight of the best such extended path. It will also store that extended path's choice of underlying punctemes, in the form of a punctemeannotated version of the subtree of T that is rooted at w. This is a potential subtree of T 0 .", "cite_spans": [ { "start": 146, "end": 161, "text": "(Goodman, 1999)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "Thus, each element of IN(w) has the form (r, D) where r 2 R and D is a tree. 
We define addition and multiplication over such pairs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(r, D) + (r 0 , D 0 ) = ( (r, D) if r > r 0 (r 0 , D 0 ) otherwise (6) (r, D) \u2022 (r 0 , D 0 ) = (rr 0 , DD 0 )", "eq_num": "(7)" } ], "section": "Algorithms", "sec_num": "3.1" }, { "text": "where DD 0 denotes an ordered combination of two trees. Matrix products UV and scalar-matrix products p \u2022 V are defined in terms of element addition and multiplication as usual:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(UV) st = P r U sr \u2022 V rt (8) (p \u2022 V) st = p \u2022 V st", "eq_num": "(9)" } ], "section": "Algorithms", "sec_num": "3.1" }, { "text": "What is DD 0 ? For presentational purposes, it is convenient to represent a punctuated dependency tree as a bracketed string. For example, the underlying tree T 0 in Figure 1 would be [ [\" Dale \"] means [\" [ river ] valley \"] ] where the words correspond to nodes of T . In this case, we can represent every D as a partial bracketed string and define DD 0 by string concatenation. This presentation ensures that multiplication (7) is a complete and associative (though not commutative) operation, as in any semiring. As base cases, each real-valued element of M i (l) or M k (r) is now paired with the string [l or r] respectively, 11 and the real number 1 at line 10 is paired with the string w. The real-valued elements of the i and \u21e2 i vectors and the 0 matrix at line 11 are paired with the empty string \u270f, as is the real number p at line 13.", "cite_spans": [ { "start": 184, "end": 196, "text": "[ [\" Dale \"]", "ref_id": null }, { "start": 203, "end": 215, "text": "[\" [ river ]", "ref_id": null } ], "ref_spans": [ { "start": 166, "end": 174, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "In practice, the D strings that appear within the matrix M of Algorithm 1 will always represent complete punctuated trees. Thus, they can actually be represented in memory as such, and different trees may share subtrees for efficiency (using pointers). 
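As an illustration of equations (6)-(9) (a sketch only, with our own names; the real implementation stores shared subtrees rather than strings), the pair operations can be coded directly and reused by an ordinary matrix product:

    class Item:
        # an element of the derivation semiring: a score r paired with the
        # best derivation D (here a bracketed string) that achieves it
        def __init__(self, r, d=''):
            self.r, self.d = r, d
        def __add__(self, other):      # equation (6): keep the higher-scoring derivation
            return self if self.r > other.r else other
        def __mul__(self, other):      # equation (7): multiply scores, concatenate derivations
            return Item(self.r * other.r, self.d + other.d)

    def matmul(U, V):
        # equation (8), over Item-valued matrices (lists of lists)
        n, m, k = len(U), len(V), len(V[0])
        return [[sum((U[s][r] * V[r][t] for r in range(m)), Item(0.0))
                 for t in range(k)] for s in range(n)]

    def scale(p, V):
        # equation (9): the scalar p is paired with the empty string
        return [[Item(p) * v for v in row] for row in V]

With plain real-valued scores and ordinary addition in place of equation (6), the same matrix products give back the summing (inside) version of Algorithm 1.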
The product in line 10 constructs a matrix of trees with root w and differing sequences of left/right children, while the product in line 14 annotates those trees with punctemes l, r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "To sample a possible T 0 from the derivation forest in proportion to its probability ( \u00a7 6.1), we use the same algorithm but replace equation 6with", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "(r, D) + (r 0 , D 0 ) = ( (r + r 0 , D) if u < r r+r 0 (r + r 0 , D 0 ) otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "with u \u21e0 Uniform(0, 1) being a random number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms", "sec_num": "3.1" }, { "text": "Having computed the objective (5), we find the gradient via automatic differentiation, and optimize \u2713, via Adam (Kingma and Ba, 2014)-a variant of stochastic gradient decent-with learning rate 0.07, batchsize 5, sentence per epoch 400, and L2 regularization. (These hyperparameters, along with the regularization coefficients & and \u21e0 from equation (5), were tuned on dev data ( \u00a74) for each language respectively.) We train 11 We still construct the real matrix Mi(l) by ordinary matrix multiplication before pairing its elements with strings. This involves summation of real numbers: each element of the resulting real matrix is a marginal probability, which sums over possible PFST paths (edit sequences) that could map the underlying puncteme l to a certain substring of the surface slot xi. Similarly for M k (r).", "cite_spans": [ { "start": 424, "end": 426, "text": "11", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "3.2" }, { "text": "the punctuation model for 30 epochs. The initial NOISYCHANNEL parameters ( ) are drawn from N (0, 1), and the initial ATTACH parameters (\u2713) are drawn from N (0, 1) (with one minor exception described in Appendix A).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "3.2" }, { "text": "Data. Throughout \u00a7 \u00a74-6, we will examine the punctuation model on a subset of the Universal Dependencies (UD) version 1.4 (Nivre et al., 2016 )-a collection of dependency treebanks across 47 languages with unified POS-tag and dependency label sets. Each treebank has designated training, development, and test portions. We experiment on Arabic, English, Chinese, Hindi, and Spanish (Table 2) -languages with diverse punctuation vocabularies and punctuation interaction rules, not to mention script directionality. For each treebank, we use the tokenization provided by UD, and take the punctuation tokens (which may be multi-character, such as ...) to be the tokens with the PUNCT tag. We replace each straight double quotation mark \" with either \" or \" as appropriate, and similarly for single quotation marks. 12 We split each non-punctuation token that ends in . (such as etc.) into a shorter non-punctuation token (etc) followed by a special punctuation token called the \"abbreviation dot\" (which is distinct from a period). We prepend a special punctuation mark\u02c6to every sentencex, which can serve to absorb an initial comma, for example. 13 We then replace each token with the special symbol UNK if its type appeared fewer than 5 times in the training portion. 
This gives the surface sentences.", "cite_spans": [ { "start": 122, "end": 141, "text": "(Nivre et al., 2016", "ref_id": null }, { "start": 812, "end": 814, "text": "12", "ref_id": null }, { "start": 1144, "end": 1146, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 382, "end": 391, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Intrinsic Evaluation of the Model", "sec_num": "4" }, { "text": "To estimate the vocabulary V of underlying punctemes, we simply collect all surface token sequences x i that appear at any slot in the training portion of the processed treebank. This is a generous estimate. Similarly, we estimate W d ( \u00a7 2.1) as all pairs (l, r) 2 V 2 that flank any d constituent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intrinsic Evaluation of the Model", "sec_num": "4" }, { "text": "Recall that our model generates surface punctuation given an unpunctuated dependency tree. We train it on each of the 5 languages independently. We evaluate on conditional perplexity, which will be low if the trained model successfully assigns a high probability to the actual surface punctuation in a held-out corpus of the same language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intrinsic Evaluation of the Model", "sec_num": "4" }, { "text": "12 For en and en_esl, \" and \" are distinguished by language-specific part-of-speech tags. For the other 4 languages, we identify two \" dependents of the same head word, Table 2 : Statistics of our datasets. \"Treebank\" is the UD treebank identifier, \"#Token\" is the number of tokens, \"%Punct\" is the percentage of punctuation tokens, \"#Omit\" is the small number of sentences containing non-leaf punctuation tokens (see footnote 19), and \"#Type\" is the number of punctuation types after preprocessing. (Recall from \u00a74 that preprocessing distinguishes between left and right quotation mark types, and between abbreviation dot and period dot types.)", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 176, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Intrinsic Evaluation of the Model", "sec_num": "4" }, { "text": "Baselines. We compare our model against three baselines to show that its complexity is necessary. Our first baseline is an ablation study that does not use latent underlying punctuation, but generates the surface punctuation directly from the tree. (To implement this, we fix the parameters of the noisy channel so that the surface punctuation equals the underlying with probability 1.) If our full model performs significantly better, it will demonstrate the importance of a distinct underlying layer. Our other two baselines ignore the tree structure, so if our full model performs significantly better, it will demonstrate that conditioning on explicit syntactic structure is useful. These baselines are based on previously published approaches that reduce the problem to tagging: Xu et al. (2016) use a BiLSTM-CRF tagger with bigram topology; Tilk and Alum\u00e4e (2016) use a BiGRU tagger with attention. In both approaches, the model is trained to tag each slot i with the correct string x i 2 V \u21e4 (possibly \u270f or\u02c6). These are discriminative probabilistic models (in contrast to our generative one). Each gives a probability distribution over the taggings (conditioned on the unpunctuated sentence), so we can evaluate their perplexity. 14 Table 3 , our full model beats the baselines in perplexity in all 5 languages. 
Also, in 4 of 5 languages, allowing a trained NOISYCHANNEL (rather than the identity map) replacing the left one with \" and the right one with \".", "cite_spans": [], "ref_spans": [ { "start": 1240, "end": 1247, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Intrinsic Evaluation of the Model", "sec_num": "4" }, { "text": "13 For symmetry, we should also have added a final mark.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results. As shown in", "sec_num": null }, { "text": "14 These methods learn word embeddings that optimize conditional log-likelihood on the punctuation restoration training data. They might do better if these embeddings were shared with other tasks, as multi-task learning might lead them to discover syntactic categories of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results. As shown in", "sec_num": null }, { "text": "Attn. CRF ATTACH +NC DIR Arabic 1.4676 1.3016 1.2230 1.1526 L Chinese 1.6850 1.4436 1.1921 1.1464 L English 1.5737 1.5247 1.5636 1.4276 R Hindi 1.1201 1.1032 1.0630 1.0598 L Spanish 1.4397 1.3198 1.2364 1.2103 R Table 3 : Results of the conditional perplexity experiment ( \u00a74), reported as perplexity per punctuation slot, where an unpunctuated sentence of n words has n + 1 slots. Column \"Attn.\" is the BiGRU tagger with attention, and \"CRF\" stands for the BiLSTM-CRF tagger. \"ATTACH\" is the ablated version of our model where surface punctuation is directly attached to the nodes. Our full model \"+NC\" adds NOISYCHANNEL to transduce the attached punctuation into surface punctuation. DIR is the learned direction ( \u00a7 2.2) of our full model's noisy channel PFST: Left-to-right or Right-to-left. Our models are given oracle parse trees T . The best perplexity is boldfaced, along with all results that are not significantly worse (paired permutation test, p < 0.05).", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results. As shown in", "sec_num": null }, { "text": "significantly improves the perplexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results. As shown in", "sec_num": null }, { "text": "We study our learned probability distribution over noisy channel rules (ab 7 ! b, ab 7 ! a, ab 7 ! ab, ab7 !ba) for English. The probability distributions corresponding to six of Nunberg's English rules are shown in Figure 3 . By comparing the orange and blue bars, observe that the model trained on the en_cesl treebank learned different quotation rules from the one trained on the en treebank. This is because en_cesl follows British style, whereas en has American-style quote transposition. 15 We now focus on the model learned from the en treebank. Nunberg's rules are deterministic, and our noisy channel indeed learned low-entropy rules, in the sense that for an input ab with underlying count 25, 16 at least one of the possible outputs (a, b, ab or ba) always has probability > 0.75. The one exception is \". 7 ! .\" for which the argmax output has probability \u21e1 0.5, because writers do not apply this quote transposition rule consistently. 
As shown by the blue bars in Figure 3, the high-probability transduction rules are consistent with Nunberg's hand-crafted deterministic grammar in Table 1 .", "cite_spans": [ { "start": 494, "end": 496, "text": "15", "ref_id": null } ], "ref_spans": [ { "start": 216, "end": 224, "text": "Figure 3", "ref_id": "FIGREF0" }, { "start": 976, "end": 982, "text": "Figure", "ref_id": null }, { "start": 1094, "end": 1101, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Rules Learned from the Noisy Channel", "sec_num": "5.1" }, { "text": "Our system has high precision when we look at the confident rules. Of the 24 learned edits with conditional probability > 0.75, Nunberg lists 20.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rules Learned from the Noisy Channel", "sec_num": "5.1" }, { "text": "Our system also has good recall. Nunberg's hand-crafted schemata consider 16 punctuation types and generate a total of 192 edit rules, including the specimens in Table 1 . That is, of the 16 2 = 256 possible underlying punctuation bigrams ab, 3 4 are supposed to undergo absorption or transposition. Our method achieves fairly high recall, in the sense that when Nunberg proposes ab7 ! , our learned p( | ab) usually ranks highly among all probabilities of the form p( 0 | ab). 75 of Nunberg's rules got rank 1, 48 got rank 2, and the remaining 69 got rank > 2. The mean reciprocal rank was 0.621. Recall is quite high when we restrict to those Nunberg rules ab 7 ! for which our model is confident how to rewrite ab, in the sense that some p( 0 | ab) > 0.5. (This tends to eliminate rare ab: see footnote 5.) Of these 55 Nunberg rules, 38 rules got rank 1, 15 got rank 2, and only 2 got rank worse than 2. The mean reciprocal rank was 0.836. \u00bfWhat about Spanish? Spanish uses inverted question marks \u00bf and exclamation marks \u00a1, which form symmetric pairs with the regular question marks and exclamation marks. If we try to extrapolate to Spanish from Nunberg's English for-malization, the English mark most analogous to \u00bf is (. Our learned noisy channel for Spanish (not graphed here) includes the high-probability rules ,\u00bf 7 ! ,\u00bf and :\u00bf 7 ! :\u00bf and \u00bf, 7 ! \u00bf which match Nunberg's treatment of ( in English.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 169, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Rules Learned from the Noisy Channel", "sec_num": "5.1" }, { "text": "What does our model learn about how dependency relations are marked by underlying punctuation?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment Model", "sec_num": "5.2" }, { "text": ",Earlier, Kerry said ,\" ... ,in fact, answer the question\". Earlier, Kerry said ,\" ... ,in fact, answer the question.\" root. ,advmod, ,\"ccomp\" ,nmod,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attachment Model", "sec_num": "5.2" }, { "text": "The above example 17 illustrates the use of specific puncteme pairs to set off the advmod, ccomp, and nmod relations. Notice that said takes a complement (ccomp) that is symmetrically quoted but also left delimited by a comma, which is indeed how direct speech is punctuated in English. This example also illustrates quotation transposition. The top five relations that are most likely to generate symmetric punctemes and their top (l, r) pairs are shown in Table 4 . The above example 18 shows how our model handles commas in conjunctions of 2 or more phrases. 
UD format dictates that each conjunct after the first is attached by the conj relation. As shown above, each such conjunct is surrounded by underlying commas (via the N.,.,.conj feature from Appendix A), except for the one that bears the conjunction and (via an even stronger weight on the C.\u270f.\u270f. ! conj.cc feature). Our learned feature weights indeed yield p(`= \u270f, r = \u270f) > 0.5 for the final conjunct in this example. Some writers omit the \"Oxford comma\" before the conjunction: this style can be achieved simply by changing \"surrounded\" to \"preceded\" (that is, changing the N feature to N.,.\u270f.conj).", "cite_spans": [], "ref_spans": [ { "start": 458, "end": 465, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Attachment Model", "sec_num": "5.2" }, { "text": "We evaluate the trained punctuation model by using it in the following three tasks. \u270f 18.1 , , 22. 3 , , 21.2 \" \" 2.4 ( ) 13.0 -\u270f 15.9 , \u270f 5.3 \u270f , 3.1 , , 2.4 -\u270f 9.7 \u270f \u270f 14.4 < > 3.0 ( ) 0.74 :\" \" 0.9 : \u270f 8.1 ( ) 13.1 ( ) 3.0 \u270f -0.21 \" ,\" 0.8 Table 4 : The top 5 relations that are most likely to generate symmetric punctemes, the entropy of their puncteme pair (row 2), and their top 5 puncteme pairs (rows 3-7) with their probabilities shown as percentages. The symmetric punctemes are in boldface.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 98, "text": "\u270f 18.1 , , 22.", "ref_id": null }, { "start": 243, "end": 250, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Performance on Extrinsic Tasks", "sec_num": "6" }, { "text": "In this task, we are given a depunctuated sentenc\u0113 d 19 and must restore its (surface) punctuation. Our model supposes that the observed punctuated sentencex would have arisen via the generative process (1). Thus, we try to find T , T 0 , andx that are consistent withd (a partial observation ofx). The first step is to reconstruct T fromd. This initial parsing step is intended to choose the T that maximizes p syn (T |d). 20 This step depends only on p syn and not on our punctuation model (p \u2713 , p ). In practice, we choose T via a dependency parser that has been trained on an unpunctuated treebank with examples of the form (d, T ). 21 Equation 2now defines a distribution over (T 0 , x) given this T . To obtain a single prediction for x, we adopt the minimum Bayes risk (MBR) approach of choosing surface punctuationx that minimizes the expected loss with respect to the unknown truth x \u21e4 . Our loss function is the total edit distance over all slots (where edits operate on punctuation tokens). Findingx exactly would be intractable, so we use a sampling-based approximation and draw m = 1000 samples from the posterior distribution over (T 0 , x). We then defin\u00ea", "cite_spans": [ { "start": 638, "end": 640, "text": "21", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Punctuation Restoration", "sec_num": "6.1" }, { "text": "x = argmin x2S(T ) X x \u21e4 2S(T )p (x \u21e4 |T ) \u2022 loss(x, x \u21e4 ) (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Restoration", "sec_num": "6.1" }, { "text": "where S(T ) is the set of unique x values in the sample andp is the empirical distribution given by the sample. This can be evaluated in O(m 2 ) time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Restoration", "sec_num": "6.1" }, { "text": "We evaluate on Arabic, English, Chinese, Hindi, and Spanish. 
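Concretely, the MBR selection in equation (10) amounts to the following simplified sketch (mbr_decode and its arguments are our names; samples are the m = 1000 posterior draws described above, each a tuple of slot strings, and loss is the total edit distance over slots):

    from collections import Counter

    def mbr_decode(samples, loss):
        # empirical distribution p-hat over the unique surface punctuations S(T)
        p_hat = Counter(samples)
        m = float(len(samples))
        best, best_risk = None, float('inf')
        for x in p_hat:                      # at most O(m^2) loss evaluations in total
            risk = sum(count / m * loss(x, x_star) for x_star, count in p_hat.items())
            if risk < best_risk:
                best, best_risk = x, risk
        return best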
For each language, we train both the parser and the punctuation model on the training split of that UD treebank ( \u00a74), and evaluate on held-out data. We compare to the BiLSTM-CRF baseline in \u00a74 (Xu et al., 2016) . 22 We also compare to a \"trivial\" deterministic baseline, which merely places a period at the end of the sentence (or a \"|\" in the case of Hindi) and adds no other punctuation. Because most slots do not in fact have punctuation, the trivial baseline already does very well; to improve on it, we must fix its errors without introducing new ones.", "cite_spans": [ { "start": 255, "end": 272, "text": "(Xu et al., 2016)", "ref_id": "BIBREF63" }, { "start": 275, "end": 277, "text": "22", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Punctuation Restoration", "sec_num": "6.1" }, { "text": "Our final comparison on test data is shown in the table in Figure 4 . On all 5 languages, our method beats (usually significantly) its 3 competitors: the trivial deterministic baseline, the BiLSTM-CRF, and the ablated version of our model (ATTACH) that omits the noisy channel.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 67, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Punctuation Restoration", "sec_num": "6.1" }, { "text": "Of course, the success of our method depends on the quality of the parse trees T (which is particularly low for Chinese and Arabic). The graph in Figure 4 explores this relationship, by evaluating (on dev data) with noisier trees obtained from parsers that were variously trained on only the first 10%, 20%, . . . of the training data. On all 5 languages, provided that the trees are at least 75% correct, our punctuation model beats both the trivial baseline and the BiLSTM-CRF (which do not use trees). It also beats the ATTACH ablation baseline at all levels of tree accuracy (these curves are omitted from the graph to avoid clutter). In all languages, better parses give better performance, and gold trees yield the best results.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 154, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Punctuation Restoration", "sec_num": "6.1" }, { "text": "Our next goal is to correct punctuation errors in a learner corpus. Each sentence is drawn from the Cambridge Learner Corpus treebanks, which provide original (en_esl) and corrected (en_cesl) sentences. All kinds of errors are corrected, such 22 We copied their architecture exactly but re-tuned the hyperparameters on our data. We also tried tripling the amount of training data by adding unannotated sentences (provided along with the original annotated sentences by Ginter et al. (2017) ), taking advantage of the fact that the BiLSTM-CRF does not require its training sentences to be annotated with trees. However, this actually hurt performance slightly, perhaps because the additional sentences were out-of-domain. We also tried the BiGRU-with-attention architecture of Tilk and Alum\u00e4e (2016) , but it was also weaker than the BiLSTM-CRF (just as in Table 3 ). We omit all these results from Figure 4 : Edit distance per slot (which we call average edit distance, or AED) for each of the 5 corpora. Lower is better. The table gives the final AED on the test data. Its first 3 columns show the baseline methods just as in Table 3 : the trivial deterministic method, the BiLSTM-CRF, and the ATTACH ablation baseline that attaches the surface punctuation directly to the tree. 
Column 4 is our method that incorporates a noisy channel, and column 5 (in gray) is our method using oracle (gold) trees. We boldface the best non-oracle result as well as all that are not significantly worse (paired permutation test, p < 0.05). The curves show how our method's AED (on dev data) varies with the labeled attachment score (LAS) of the trees, where --a at x = 100 uses the oracle (gold) trees, a--at x < 100 uses trees from our parser trained on 100% of the training data, and the #--points at x \u2327 100 use increasingly worse parsers. The p and 8 at the right of the graph show the AED of the trivial deterministic baseline and the BiLSTM-CRF baseline, which do not use trees.", "cite_spans": [ { "start": 243, "end": 245, "text": "22", "ref_id": null }, { "start": 469, "end": 489, "text": "Ginter et al. (2017)", "ref_id": "BIBREF19" }, { "start": 776, "end": 798, "text": "Tilk and Alum\u00e4e (2016)", "ref_id": "BIBREF58" } ], "ref_spans": [ { "start": 856, "end": 863, "text": "Table 3", "ref_id": null }, { "start": 898, "end": 906, "text": "Figure 4", "ref_id": null }, { "start": 1127, "end": 1134, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "as syntax errors, but we use only the 30% of sentences whose depunctuated trees T are isomorphic between en_esl and en_cesl. These en_cesl trees may correct word and/or punctuation errors in en_esl, as we wish to do automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "We assume that an English learner can make mistakes in both the attachment and the noisy channel steps. A common attachment mistake is the failure to surround a non-restrictive relative clause with commas. In the noisy channel step, mistakes in quote transposition are common.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "Correction model. Based on the assumption about the two error sources, we develop a discriminative model for this task. Letx e denote the full input sentence, and let x e and x c denote the input (possibly errorful) and output (corrected) punctuation sequences. We model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "p(x c |x e ) = P T P T 0 c p syn (T |x e ) \u2022 p \u2713 (T 0 c | T, x e ) \u2022 p (x c | T 0 c ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "Here T is the depunctuated parse tree, T 0 c is the corrected underlying tree, T 0 e is the error underlying tree, and we assume p \u2713 (T 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "c | T, x e ) = P T 0 e p(T 0 e | T, x e ) \u2022 p \u2713 (T 0 c | T 0 e ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "In practice we use a 1-best pipeline rather than summing. Our first step is to reconstruct T from the error sentencex e . We choose T that maximizes p syn (T |x e ) from a dependency parser trained on en_esl treebank examples (x e , T ). The second step is to reconstruct T 0 e based on our punctuation model trained on en_esl. We choose T 0 e that maximizes p(T 0 e | T, x e ). 
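The factorization of the correction model stated a few sentences above is mangled by PDF extraction; a cleaned-up reading is given below. The parameter subscripts (θ for the attachment model, φ for the noisy channel) are reconstructed from the paper's notation elsewhere and should be treated as an assumption.

```latex
% Reconstructed from the garbled text; \theta indexes the attachment model
% and \phi the noisy channel (the subscripts are partly illegible in the
% extracted text).
p(x_c \mid \bar{x}_e)
  = \sum_{T} \sum_{T'_c}
      p_{\mathrm{syn}}(T \mid \bar{x}_e)\;
      p_{\theta}(T'_c \mid T, x_e)\;
      p_{\phi}(x_c \mid T'_c),
\qquad
p_{\theta}(T'_c \mid T, x_e)
  = \sum_{T'_e} p(T'_e \mid T, x_e)\; p_{\theta}(T'_c \mid T'_e).
```

The 1-best pipeline described here then replaces each of these sums with its argmax.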
We then reconstruct T 0 c by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(T 0 c | T 0 e ) = Q we2T 0 e p(l, r | w e )", "eq_num": "(11)" } ], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "where w e is the node in T 0 e , and p(l, r | w e ) is a similar log-linear model to equation (4) with additional features (Appendix C 4 ) which look at w e .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "Finally, we reconstruct x c based on the noisy channel p (x c | T 0 c ) in \u00a7 2.2. During training, is regularized to be close to the noisy channel parameters in the punctuation model trained on en_cesl.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "We use the same MBR decoder as in \u00a7 6.1 to choose the best action. We evaluate using AED as in \u00a7 6.1. As a second metric, we use the script from the CoNLL 2014 Shared Task on Grammatical Error Correction (Ng et al., 2014) : it computes the F 0.5 -measure of the set of edits found by the system, relative to the true set of edits.", "cite_spans": [ { "start": 175, "end": 221, "text": "Grammatical Error Correction (Ng et al., 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "As shown in Table 5 , our method achieves better performance than the punctuation restoration baselines (which ignore input punctuation). On the other hand, it is soundly beaten by a new BiLSTM-CRF that we trained specifically for the task of punctuation correction. This is the same as the BiLSTM-CRF in the previous section, except that the BiLSTM now reads a punctuated input sentence (with possibly erroneous punctuation). To be precise, at step 0 \uf8ff i \uf8ff n, the BiL-STM reads a concatenation of the embedding of word i (or BOS if i = 0) with an embedding of the punctuation token sequence x i . The BiLSTM-CRF wins because it is a discriminative model tailored for this task: the BiLSTM can extract arbitrary contextual features of slot i that are correlated with whether x i is correct in context.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Punctuation Correction", "sec_num": "6.2" }, { "text": "We suspect that syntactic transformations on a sentence should often preserve the underlying punctuation attached to its tree. The surface punctuation can then be regenerated from the transformed tree. Such transformations include edits that are suggested by a writing assistance tool (Heidorn, 2000) , or subtree deletions in compressive summarization (Knight and Marcu, 2002) . p 8 a--parsed gold 8 -corr AED 0.052 0.051 0.047 0.034 0.033 0.005 F 0.5 0.779 0.787 0.827 0.876 0.881 0.984 Table 5 : AED and F 0.5 results on the test split of English-ESL data. Lower AED is better; higher F 0.5 is better. The first three columns (markers correspond to Figure 4) are the punctuation restoration baselines, which ignore the input punctuation. The fourth and fifth columns are our correction models, which use parsed and gold trees. 
The final column is the BiLSTM-CRF model tailored for the punctuation correction task.", "cite_spans": [ { "start": 285, "end": 300, "text": "(Heidorn, 2000)", "ref_id": "BIBREF22" }, { "start": 353, "end": 377, "text": "(Knight and Marcu, 2002)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 489, "end": 496, "text": "Table 5", "ref_id": null }, { "start": 652, "end": 661, "text": "Figure 4)", "ref_id": null } ], "eq_spans": [], "section": "Sentential Rephrasing", "sec_num": "6.3" }, { "text": "For our experiment, we evaluate an interesting case of syntactic transformation. Wang and Eisner (2016) consider a systematic rephrasing procedure by rearranging the order of dependent subtrees within a UD treebank, in order to synthesize new languages with different word order that can then be used to help train multi-lingual systems (i.e., data augmentation with synthetic data).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentential Rephrasing", "sec_num": "6.3" }, { "text": "As Wang and Eisner acknowledge (2016, footnote 9), their permutations treat surface punctuation tokens like ordinary words, which can result in synthetic sentences whose punctuation is quite unlike that of real languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentential Rephrasing", "sec_num": "6.3" }, { "text": "In our experiment, we use Wang and Eisner's (2016) \"self-permutation\" setting, where the dependents of each noun and verb are stochastically reordered, but according to a dependent ordering model that has been trained on the same language. which is still grammatical except that , and . are wrongly swapped (after all, they have the same POS tag and relation type). Worse, permutation may yield bizarre punctuation such as , , at the start of a sentence. Our punctuation model gives a straightforward remedy-instead of permuting the tree directly, we first discover its most likely underlying tre\u00ea , If true, the caper failed . by the maximizing variant of Algorithm 1 ( \u00a7 3.1). Then, we permute the underlying tree and sample the surface punctuation from the distribution modeled by the trained PFST, yielding Table 6 : Perplexity (evaluated on the train split to avoid evaluating generalization) of a trigram language model trained (with add-0.001 smoothing) on different versions of rephrased training sentences. \"Punctuation\" only evaluates perplexity on the trigrams that have punctuation. \"All\" evaluates on all the trigrams. \"Base\" permutes all surface dependents including punctuation (Wang and Eisner, 2016) . \"Full\" is our full approach: recover underlying punctuation, permute remaining dependents, regenerate surface punctuation. \"Half\" is like \"Full\" but it permutes the nonpunctuation tokens identically to \"Base.\" The permutation model is trained on surface trees or recovered underlying trees T 0 , respectively. In each 3-way comparison, we boldface the best result (always significant under a paired permutation test over per-sentence logprobabilities, p < 0.05).", "cite_spans": [ { "start": 1193, "end": 1216, "text": "(Wang and Eisner, 2016)", "ref_id": "BIBREF61" } ], "ref_spans": [ { "start": 811, "end": 818, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Sentential Rephrasing", "sec_num": "6.3" }, { "text": "the caper failed ,If true ,. the caper failed ,If true . We leave the handling of capitalization to future work. 
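To make the three-step "Full" rephrasing procedure concrete, here is a minimal sketch. All component names are hypothetical stand-ins for pieces described in the paper: the maximizing variant of Algorithm 1, Wang and Eisner's (2016) dependent-ordering step, and the trained PFST noisy channel.

```python
# Sketch of the "Full" rephrasing procedure of this section, under assumptions:
#   viterbi_underlying(tree)            -> most likely underlying punctemes for
#                                          a surface tree (maximizing Algorithm 1);
#   permute_dependents(tree, order_model) -> reorders the dependents of each noun
#                                          and verb, leaving punctemes attached;
#   sample_surface(tree, pfst)          -> rewrites each slot's underlying
#                                          punctemes into surface punctuation
#                                          with the trained PFST.
# All three names are hypothetical, not the paper's actual API.

def rephrase_full(surface_tree, order_model, pfst,
                  viterbi_underlying, permute_dependents, sample_surface):
    underlying = viterbi_underlying(surface_tree)            # recover underlying punctemes
    permuted = permute_dependents(underlying, order_model)   # reorder words, not punctuation
    return sample_surface(permuted, pfst)                    # regenerate surface punctuation
```

The "Base" variant instead permutes the surface tree directly, treating punctuation tokens as ordinary dependents, which is what produces artifacts such as a swapped "," and "." or a sentence-initial ", ,".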
We test the naturalness of the permuted sentences by asking how well a word trigram language model trained on them could predict the original sentences. 23 As shown in Table 6 , our permutation approach reduces the perplexity over the baseline on 4 of the 5 languages, often dramatically.", "cite_spans": [ { "start": 266, "end": 268, "text": "23", "ref_id": null } ], "ref_spans": [ { "start": 281, "end": 288, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Sentential Rephrasing", "sec_num": "6.3" }, { "text": "Punctuation can aid syntactic analysis, since it signals phrase boundaries and sentence structure. Briscoe (1994) and White and Rajkumar (2008) parse punctuated sentences using hand-crafted constraint-based grammars that implement Nunberg's approach in a declarative way. These grammars treat surface punctuation symbols as ordinary words, but annotate the nonterminal categories so as to effectively keep track of the underlying punctuation. This is tantamount to crafting a grammar for underlyingly punctuated sentences and composing it with a finite-state noisy channel.", "cite_spans": [ { "start": 99, "end": 113, "text": "Briscoe (1994)", "ref_id": "BIBREF5" }, { "start": 118, "end": 143, "text": "White and Rajkumar (2008)", "ref_id": "BIBREF62" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "The parser of Ma et al. (2014) takes a different approach and treats punctuation marks as features of their neighboring words. Zhang et al. (2013) use a generative model for punctuated sentences, leting them restore punctuation marks during transition-based parsing of unpunctuated sentences. Li et al. (2005) use punctuation marks to segment a sentence: this \"divide and rule\" strategy reduces ambiguity in parsing of long Chinese sentences. Punctuation can similarly be used to constrain syntactic structure during grammar induction (Spitkovsky et al., 2011) .", "cite_spans": [ { "start": 14, "end": 30, "text": "Ma et al. (2014)", "ref_id": "BIBREF37" }, { "start": 127, "end": 146, "text": "Zhang et al. (2013)", "ref_id": "BIBREF65" }, { "start": 293, "end": 309, "text": "Li et al. (2005)", "ref_id": "BIBREF33" }, { "start": 535, "end": 560, "text": "(Spitkovsky et al., 2011)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "Punctuation restoration ( \u00a7 6.1) is useful for transcribing text from unpunctuated speech. The task is usually treated by tagging each slot with zero or more punctuation tokens, using a traditional sequence labeling method: conditional random fields (Lui and Wang, 2013; Lu and Ng, 2010) , recurrent neural networks (Tilk and Alum\u00e4e, 2016) , or transition-based systems (Ballesteros and Wanner, 2016) .", "cite_spans": [ { "start": 250, "end": 270, "text": "(Lui and Wang, 2013;", "ref_id": "BIBREF36" }, { "start": 271, "end": 287, "text": "Lu and Ng, 2010)", "ref_id": "BIBREF35" }, { "start": 316, "end": 339, "text": "(Tilk and Alum\u00e4e, 2016)", "ref_id": "BIBREF58" }, { "start": 370, "end": 400, "text": "(Ballesteros and Wanner, 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "We have provided a new computational approach to modeling punctuation. In our model, syntactic constituents stochastically generate latent underlying left and right punctemes. 
Surface punctuation marks are not directly attached to the syntax tree, but are generated from sequences of adjacent punctemes by a (stochastic) finite-state string rewriting process . Our model is inspired by Nunberg's (1990) formal grammar for English punctuation, but is probabilistic and trainable. We give exact algorithms for training and inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "8" }, { "text": "We trained Nunberg-like models for 5 languages and L2 English. We compared the English model to Nunberg's, and showed how the trained models can be used across languages for punctuation restoration, correction, and adjustment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "8" }, { "text": "In the future, we would like to study the usefulness of the recovered underlying trees on tasks such as syntactically sensitive sentiment analysis (Tai et al., 2015) , machine translation (Cowan et al., 2006) , relation extraction (Culotta and Sorensen, 2004) , and coreference resolution (Kong et al., 2010) . We would also like to investigate how underlying punctuation could aid parsing. For discriminative parsing, features for scoring the tree could refer to the underlying punctuation, not just the surface punctuation. For generative parsing ( \u00a73), we could follow the scheme in equation (1). For example, the p syn factor in equation (1) might be a standard recurrent neural network grammar (RNNG) (Dyer et al., 2016) ; when a subtree of T is completed by the REDUCE operation of p syn , the punctuationaugmented RNNG (1) would stochastically attach subtree-external left and right punctemes with p \u2713 and transduce the subtree-internal slots with p .", "cite_spans": [ { "start": 147, "end": 165, "text": "(Tai et al., 2015)", "ref_id": "BIBREF57" }, { "start": 188, "end": 208, "text": "(Cowan et al., 2006)", "ref_id": "BIBREF11" }, { "start": 231, "end": 259, "text": "(Culotta and Sorensen, 2004)", "ref_id": "BIBREF12" }, { "start": 289, "end": 308, "text": "(Kong et al., 2010)", "ref_id": "BIBREF28" }, { "start": 706, "end": 725, "text": "(Dyer et al., 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "8" }, { "text": "In the future, we are also interested in enriching the T 0 representation and making it more different from T , to underlyingly account for other phenomena in T such as capitalization, spacing, morphology, and non-projectivity (via reordering).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "8" }, { "text": "Our model could be easily adapted to work on constituency trees instead.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The appendices (supplementary material) are available at https://arxiv.org/abs/1906.11298.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Rather than learn a separate edit probability distribution for each bigram ab, one could share parameters across bigrams. For example, Table 1's caption says that \"stronger\" tokens tend to absorb \"weaker\" ones. 
A model that incor-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "N = O(# of punc types \u2022 max # of punc tokens per slot).10 Constructively, compose the u-to-x PFST (from the end of \u00a7 2.2) with a straight-line FSA accepting only xi, and project the resulting WFST to its input tape(Pereira and Riley, 1996), as explained at the end of Appendix D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "American style places commas and periods inside the quotation marks, even if they are not logically in the quote. British style (more sensibly) places unquoted periods and commas in their logical place, sometimes outside the quotation marks if they are not part of the quote.16 For rarer underlying pairs ab, the estimated distributions sometimes have higher entropy due to undertraining.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To depunctuate a treebank sentence, we remove all tokens with POS-tag PUNCT or dependency relation punct. These are almost always leaves; else we omit the sentence.20 Ideally, rather than maximize, one would integrate over possible trees T , in practice by sampling many values T k from psyn(\u2022 |\u016b) and replacing S(T ) in (10) with S k S(T k ). 21 Specifically, the Yara parser (Rasooli and Tetreault, 2015), a fast non-probabilistic transition-based parser that uses rich non-local features(Zhang and Nivre, 2011).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "So the two approaches to permutation yield different training data, but are compared fairly on the same test data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This material is based upon work supported by the National Science Foundation under Grant Nos. 1423276 and 1718846, including a REU supplement to the first author. We are grateful to the state of Maryland for the Maryland Advanced Research Computing Center, a crucial resource. We thank Xiaochen Li for early discussion, Argo lab members for further discussion, and the three reviewers for quality comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sentiment analysis of Twitter data", "authors": [ { "first": "Apoorv", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Boyi", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Ilia", "middle": [], "last": "Vovsha", "suffix": "" }, { "first": "Owen", "middle": [], "last": "Rambow", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Language in Social Media (LSM 2011)", "volume": "", "issue": "", "pages": "30--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau. 2011. Sen- timent analysis of Twitter data. 
In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 30-38.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A neural network architecture for multilingual punctuation generation", "authors": [ { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Wanner", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1048--1053", "other_ids": { "DOI": [ "10.18653/v1/D16-1111" ] }, "num": null, "urls": [], "raw_text": "Miguel Ballesteros and Leo Wanner. 2016. A neu- ral network architecture for multilingual punc- tuation generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1048-1053.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On formal properties of simple phrase structure grammars", "authors": [ { "first": "Yehoshua", "middle": [], "last": "Bar-Hillel", "suffix": "" }, { "first": "M", "middle": [], "last": "Perles", "suffix": "" }, { "first": "E", "middle": [], "last": "Shamir", "suffix": "" } ], "year": 1961, "venue": "Language and Information: Selected Essays on their Theory and Application", "volume": "14", "issue": "", "pages": "116--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yehoshua Bar-Hillel, M. Perles, and E. Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrift f\u00fcr Phonetik, Sprachwissenschaft und Kommunika- tionsforschung, 14:143-172. Reprinted in Y. Bar-Hillel (1964), Language and Information: Selected Essays on their Theory and Applica- tion, Addison-Wesley 1964, pages 116-150.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Rational Series and their Languages", "authors": [ { "first": "Jean", "middle": [], "last": "Berstel", "suffix": "" }, { "first": "Christophe", "middle": [], "last": "Reutenauer", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Berstel and Christophe Reutenauer. 1988. Rational Series and their Languages. Springer- Verlag.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bracketing guidelines for Treebank II style: Penn Treebank project", "authors": [ { "first": "Ann", "middle": [], "last": "Bies", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Ferguson", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Katz", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Macintyre", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Tredinnick", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" }, { "first": "Britta", "middle": [], "last": "Schasberger", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Bies, Mark Ferguson, Karen Katz, Robert MacIntyre, Victoria Tredinnick, Grace Kim, Mary Ann Marcinkiewicz, and Britta Schas- berger. 1995. Bracketing guidelines for Tree- bank II style: Penn Treebank project. 
Technical Report MS-CIS-95-06, University of Pennsyl- vania.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parsing (with) punctuation, etc", "authors": [ { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted Briscoe. 1994. Parsing (with) punctuation, etc. Technical report, Xerox European Re- search Laboratory.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A fast and accurate dependency parser using neural networks", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "740--750", "other_ids": { "DOI": [ "10.3115/v1/D14-1082" ] }, "num": null, "urls": [], "raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neu- ral networks. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 740-750.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Sound Pattern of English", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" }, { "first": "Morris", "middle": [], "last": "Halle", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper and Row, New York.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Stochastic contextual edit distance and probabilistic FSTs", "authors": [ { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "625--630", "other_ids": { "DOI": [ "10.3115/v1/P14-2102" ] }, "num": null, "urls": [], "raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic FSTs. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 625-630.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Modeling word forms using latent underlying morphs and phonology", "authors": [ { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Nanyun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "3", "issue": "", "pages": "433--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2015. Modeling word forms using latent under- lying morphs and phonology. 
Transactions of the Association for Computational Linguistics (TACL), 3:433-447.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A joint model of orthography and morphological segmentation", "authors": [ { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Vieira", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "664--669", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Cotterell, Tim Vieira, and Hinrich Sch\u00fctze. 2016. A joint model of orthography and mor- phological segmentation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 664-669.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A discriminative model for treeto-tree translation", "authors": [ { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Ivona", "middle": [], "last": "Ku\u010derov\u00e1", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "232--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brooke Cowan, Ivona Ku\u010derov\u00e1, and Michael Collins. 2006. A discriminative model for tree- to-tree translation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 232- 241.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dependency tree kernels for relation extraction", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta and Jeffrey Sorensen. 2004. De- pendency tree kernels for relation extraction. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Efficient third-order dependency parsers", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 5th International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher Manning. 2017. Efficient third-order dependency parsers. 
In Proceedings of the 5th International Confer- ence on Learning Representations (ICLR).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Recurrent neural network grammars", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", "volume": "", "issue": "", "pages": "199--209", "other_ids": { "DOI": [ "10.18653/v1/N16-1024" ] }, "num": null, "urls": [], "raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Balles- teros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 199-209.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Three new probabilistic models for dependency parsing: An exploration", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "340--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 1996. Three new probabilistic mod- els for dependency parsing: An exploration. In Proceedings of the 16th International Confer- ence on Computational Linguistics (COLING), pages 340-345.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Inside-outside and forwardbackward algorithms are just backprop", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the EMNLP Workshop on Structured Prediction for NLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2016. Inside-outside and forward- backward algorithms are just backprop. In Pro- ceedings of the EMNLP Workshop on Struc- tured Prediction for NLP.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "String processing languages and generalized Markov algorithms", "authors": [ { "first": "A", "middle": [], "last": "Caracciolo Di Forino", "suffix": "" } ], "year": 1968, "venue": "editor, Symbol Manipulation Languages and Techniques", "volume": "", "issue": "", "pages": "191--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Caracciolo di Forino. 1968. String process- ing languages and generalized Markov algo- rithms. In D. G. Bobrow, editor, Symbol Manip- ulation Languages and Techniques, pages 191- 206. North-Holland Publishing Company, Am- sterdam.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Posterior regularization for structured latent variable models", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "11", "issue": "", "pages": "2001--2049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2010. 
Posterior regularization for struc- tured latent variable models. Journal of Ma- chine Learning Research, 11:2001-2049.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", "authors": [ { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Juhani", "middle": [], "last": "Luotolahti", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filip Ginter, Jan Haji\u010d, Juhani Luotolahti, Milan Straka, and Daniel Zeman. 2017. CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. LINDAT/CLARIN dig- ital library at the Institute of Formal and Ap- plied Linguistics (\u00daFAL), Faculty of Mathe- matics and Physics, Charles University.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An efficient algorithm for easy-first non-directional dependency parsing", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)", "volume": "", "issue": "", "pages": "742--750", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 742-750.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Semiring parsing. Computational Linguistics", "authors": [ { "first": "Joshua", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1999, "venue": "", "volume": "25", "issue": "", "pages": "573--605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshua Goodman. 1999. Semiring parsing. Com- putational Linguistics, 25(4):573-605.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Intelligent writing assistance", "authors": [ { "first": "George", "middle": [], "last": "Heidorn", "suffix": "" } ], "year": 2000, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "181--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Heidorn. 2000. Intelligent writing assis- tance. In Robert Dale, Herman Moisl, and Harold Somers, editors, Handbook of Natural Language Processing, pages 181-207. Marcel Dekker, New York.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Formal Aspects of Phonological Description", "authors": [ { "first": "C", "middle": [], "last": "", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. 
Mouton.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Exploring the role of punctuation in parsing natural text", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Bernard", "suffix": "" }, { "first": "", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1994, "venue": "The 15th International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernard E. M. Jones. 1994. Exploring the role of punctuation in parsing natural text. In COLING 1994 Volume 1: The 15th International Confer- ence on Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceed- ings of the International Conference on Learn- ing Representations (ICLR).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations", "authors": [ { "first": "Eliyahu", "middle": [], "last": "Kiperwasser", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "4", "issue": "", "pages": "313--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing us- ing bidirectional LSTM feature representations. Transactions of the Association for Computa- tional Linguistics (TACL), 4:313-327.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Summarization beyond sentence extraction: A probabilistic approach to sentence compression", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2002, "venue": "Artificial Intelligence", "volume": "139", "issue": "1", "pages": "91--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight and Daniel Marcu. 2002. Summa- rization beyond sentence extraction: A proba- bilistic approach to sentence compression. Ar- tificial Intelligence, 139(1):91-107.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Dependency-driven anaphoricity determination for coreference resolution", "authors": [ { "first": "Fang", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Longhua", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "599--607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fang Kong, Guodong Zhou, Longhua Qian, and Qiaoming Zhu. 2010. Dependency-driven anaphoricity determination for coreference res- olution. 
In Proceedings of the 23rd Interna- tional Conference on Computational Linguis- tics (COLING), pages 599-607.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Efficient third-order dependency parsers", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1- 11.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A New Paradigm for Punctuation", "authors": [ { "first": "Albert", "middle": [ "E" ], "last": "Krahn", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albert E. Krahn. 2014. A New Paradigm for Punctuation. Ph.D. thesis, The University of Wisconsin-Milwaukee.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Low-rank tensors for scoring dependency structures", "authors": [ { "first": "Tao", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Xin", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "1381--1391", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceed- ings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (ACL), pages 1381-1391.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A noisy-channel model of human sentence comprehension under uncertain input", "authors": [ { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "234--243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Levy. 2008. A noisy-channel model of human sentence comprehension under uncer- tain input. In Proceedings of the 2008 Con- ference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 234-243.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A hierarchical parsing approach with punctuation processing for long Chinese sentences", "authors": [ { "first": "Xing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" }, { "first": "Rile", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xing Li, Chengqing Zong, and Rile Hu. 2005. A hierarchical parsing approach with punctuation processing for long Chinese sentences. 
In Pro- ceedings of the International Joint Conference on Natural Language Processing (IJCNLP).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "First-and second-order expectation semirings with applications to minimum-risk training on translation forests", "authors": [ { "first": "Zhifei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "40--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhifei Li and Jason Eisner. 2009. First-and second-order expectation semirings with appli- cations to minimum-risk training on translation forests. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 40-51.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Better punctuation prediction with dynamic conditional random fields", "authors": [ { "first": "Wei", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "177--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Lu and Hwee Tou Ng. 2010. Better punctu- ation prediction with dynamic conditional ran- dom fields. In Proceedings of the 2010 Con- ference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 177-186.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Recovering casing and punctuation using conditional random fields", "authors": [ { "first": "Marco", "middle": [], "last": "Lui", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Australasian Language Technology Association Workshop (ALTA)", "volume": "", "issue": "", "pages": "137--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Lui and Li Wang. 2013. Recovering casing and punctuation using conditional ran- dom fields. In Proceedings of the Australasian Language Technology Association Workshop (ALTA), pages 137-141.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Punctuation processing for projective dependency parsing", "authors": [ { "first": "Ji", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "791--796", "other_ids": { "DOI": [ "10.3115/v1/P14-2128" ] }, "num": null, "urls": [], "raw_text": "Ji Ma, Yue Zhang, and Jingbo Zhu. 2014. Punc- tuation processing for projective dependency parsing. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 791-796.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Generalized expectation criteria for semisupervised learning with weakly labeled data", "authors": [ { "first": "Gideon", "middle": [ "S" ], "last": "Mann", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "11", "issue": "", "pages": "955--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon S. Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi- supervised learning with weakly labeled data. Journal of Machine Learning Research, 11:955-984.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The theory of algorithms", "authors": [ { "first": "Andrey Andreevich", "middle": [], "last": "Markov", "suffix": "" } ], "year": 1960, "venue": "", "volume": "2", "issue": "", "pages": "1--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrey Andreevich Markov. 1960. The theory of algorithms. American Mathematical Society Translations, series 2(15):1-14.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Punctuation as native language interference", "authors": [ { "first": "Ilia", "middle": [], "last": "Markov", "suffix": "" }, { "first": "Vivi", "middle": [], "last": "Nastase", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "3456--3466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilia Markov, Vivi Nastase, and Carlo Strappar- ava. 2018. Punctuation as native language in- terference. In Proceedings of the 27th Inter- national Conference on Computational Linguis- tics (COLING), pages 3456-3466.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Computing Research Repository (CoRR)", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. Computing Re- search Repository (CoRR), arXiv:1301.3781.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Probabilistic parsing as intersection", "authors": [ { "first": "Jan", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Nederhof", "suffix": "" }, { "first": "", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2003, "venue": "8th International Workshop on Parsing Technologies (IWPT)", "volume": "", "issue": "", "pages": "137--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark-Jan Nederhof and Giorgio Satta. 2003. Probabilistic parsing as intersection. 
In 8th International Workshop on Parsing Technologies (IWPT), pages 137-148.", "links": null } },
"ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Rewrite probabilities learned for English, averaged over the last 4 epochs on the en treebank (blue bars) or the en_esl treebank (orange bars). The header above each panel is the underlying punctuation string (the input to NOISYCHANNEL); the two counts in the header are the number of occurrences of that underlying string in the 1-best reconstructions of underlying punctuation sequences (by Algorithm 1) in the en and en_esl treebanks, respectively. Each bar represents one surface punctuation string (an output of NOISYCHANNEL), its height giving its probability.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "Figure 4 to reduce clutter.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Arabic 0.064 0.064 0.063 0.059 0.053
Chinese 0.110 0.109 0.104 0.102 0.048
English 0.100 0.108 0.092 0.090 0.079
Hindi 0.025 0.023 0.019 0.018 0.013
Spanish 0.093 0.092 0.085 0.078 0.068", "uris": null }, "TABREF4": { "type_str": "table", "html": null, "num": null, "content": "
parataxis  appos  list  advcl  ccomp
2.38       2.29   1.33  0.77   0.53
, ,
17 [en] Earlier, Kerry said, \"Just because you get an honorable discharge does not, in fact, answer that question.\"
18 [en] Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License.
", "text": "26.8 , , 18.8 \u270f \u270f 60.0 \u270f \u270f 73.8 \u270f \u270f 90.8 \u270f \u270f 20.1 :" }, "TABREF6": { "type_str": "table", "html": null, "num": null, "content": "", "text": "Punctuation All Base Half Full Base Half Full Arabic 156.0 231.3 186.1 540.8 590.3 553.4 Chinese 165.2 110.0 61.4 205.0 174.4 78.7 English 98.4 74.5 51.0 140.9 131.4 75.4 Hindi 10.8 11.0 9.7 118.4 118.8 91.8 Spanish 266.2 259.2 194.5 346.3 343.4 239.3" } } } }