{ "paper_id": "N01-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:48:02.537352Z" }, "title": "Edit Detection and Parsing for Transcribed Speech", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "", "affiliation": { "laboratory": "Linguistic Sciences Brown Laboratory for Linguistic Information Processing (BLLIP", "institution": "Brown University", "location": { "postCode": "02912", "settlement": "Providence", "region": "RI" } }, "email": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "", "affiliation": { "laboratory": "Linguistic Sciences Brown Laboratory for Linguistic Information Processing (BLLIP", "institution": "Brown University", "location": { "postCode": "02912", "settlement": "Providence", "region": "RI" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a simple architecture for parsing transcribed speech in which an edited-word detector first removes such words from the sentence string, and then a standard statistical parser trained on transcribed speech parses the remaining words. The edit detector achieves a misclassification rate on edited words of 2.2%. (The NULL-model, which marks everything as not edited, has an error rate of 5.9%.) To evaluate our parsing results we introduce a new evaluation metric, the purpose of which is to make evaluation of a parse tree relatively indifferent to the exact tree position of EDITED nodes. By this metric the parser achieves 85.3% precision and 86.5% recall.", "pdf_parse": { "paper_id": "N01-1016", "_pdf_hash": "", "abstract": [ { "text": "We present a simple architecture for parsing transcribed speech in which an edited-word detector first removes such words from the sentence string, and then a standard statistical parser trained on transcribed speech parses the remaining words. The edit detector achieves a misclassification rate on edited words of 2.2%. (The NULL-model, which marks everything as not edited, has an error rate of 5.9%.) To evaluate our parsing results we introduce a new evaluation metric, the purpose of which is to make evaluation of a parse tree relatively indifferent to the exact tree position of EDITED nodes. By this metric the parser achieves 85.3% precision and 86.5% recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "While significant effort has been expended on the parsing of written text, parsing speech has received relatively little attention. The comparative neglect of speech (or transcribed speech) is understandable, since parsing transcribed speech presents several problems absent in regular text: \"um\"s and \"ah\"s (or more formally, filled pauses), frequent use of parentheticals (e.g., \"you know\"), ungrammatical constructions, and speech repairs (e.g., \"Why didn't he, why didn't she stay home?\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we present and evaluate a simple two-pass architecture for handling the problems of parsing transcribed speech. The first pass tries to identify which of the words in the string are edited (\"why didn't he,\" in the above example). These words are removed from the string given to the second pass, an already existing statistical parser trained on a transcribed speech corpus. 
(In particular, all of the research in this paper was performed on the parsed \"Switchboard\" corpus as provided by the Linguistic Data Consortium.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This architecture is based upon a fundamental assumption: that the semantic and pragmatic content of an utterance is based solely on the unedited words in the word sequence. This assumption is not completely true. For example, Core and Schubert [8] point to counterexamples such as \"have the engine take the oranges to Elmira, um, I mean, take them to Corning\" where the antecedent of \"them\" is found in the EDITED words. However, we believe that the assumption is so close to true that the number of errors introduced by this assumption is small compared to the total number of errors made by the system.", "cite_spans": [ { "start": 245, "end": 248, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to evaluate the parser's output we compare it with the gold-standard parse trees. For this purpose a very simple third pass is added to the architecture: the hypothesized edited words are inserted into the parser output (see Section 3 for details). To the degree that our fundamental assumption holds, a \"real\" application would ignore this last step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This architecture has several things to recommend it. First, it allows us to treat the editing problem as a pre-process, keeping the parser unchanged. Second, the major clues in detecting edited words in transcribed speech seem to be relatively shallow phenomena, such as repeated word and part-of-speech sequences. The kind of information that a parser would add, e.g., the node dominating the EDITED node, seems much less critical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Note that of the major problems associated with transcribed speech, we choose to deal with only one of them, speech repairs, in a special fashion. Our reasoning here is based upon what one might and might not expect from a secondpass statistical parser. For example, ungrammaticality in some sense is relative, so if the training corpus contains the same kind of ungrammatical examples as the testing corpus, one would not expect ungrammaticality itself to be a show stopper. Furthermore, the best statistical parsers [3, 5] do not use grammatical rules, but rather define probability distributions over all possible rules.", "cite_spans": [ { "start": 518, "end": 521, "text": "[3,", "ref_id": "BIBREF2" }, { "start": 522, "end": 524, "text": "5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Similarly, parentheticals and filled pauses exist in the newspaper text these parsers currently handle, albeit at a much lower rate. Thus there is no particular reason to expect these constructions to have a major impact. 1 This leaves speech repairs as the one major phenomenon not present in written text that might pose a major problem for our parser. 
It is for that reason that we have chosen to handle it separately.", "cite_spans": [ { "start": 222, "end": 223, "text": "1", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The organization of this paper follows the architecture just described. Section 2 describes the first pass. We present therein a boosting model for learning to detect edited nodes (Sections 2.1 -2.2) and an evaluation of the model as a stand-alone edit detector (Section 2.3). Section 3 describes the parser. Since the parser is that already reported in [3] , this section simply describes the parsing metrics used (Section 3.1), the details of the experimental setup (Section 3.2), and the results (Section 3.3).", "cite_spans": [ { "start": 354, "end": 357, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Switchboard corpus annotates disfluencies such as restarts and repairs using the terminology of Shriberg [15] . The disfluencies include repetitions and substitutions, italicized in (1a) and (1b) respectively.", "cite_spans": [ { "start": 109, "end": 113, "text": "[15]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Identifying EDITED words", "sec_num": "2" }, { "text": "(1) a. I really, I really like pizza. b. Why didn't he, why didn't she stay home?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying EDITED words", "sec_num": "2" }, { "text": "Restarts and repairs are indicated by disfluency tags '[', '+' and ']' in the disfluency POS-tagged Switchboard corpus, and by EDITED nodes in the tree-tagged corpus. This section describes a procedure for automatically identifying words corrected by a restart or repair, i.e., words that are dominated by an EDITED node in the treetagged corpus. This method treats the problem of identifying EDITED nodes as a word-token classification problem, where each word token is classified as either edited or not. The classifier applies to words only; punctuation inherits the classification of the preceding word. A linear classifier trained by a greedy boosting algorithm [16] is used to predict whether a word token is edited. Our boosting classifier is directly based on the greedy boosting algorithm described by Collins [7] . This paper contains important implementation details that are not repeated here. We chose Collins' algorithm because it offers good performance and scales to hundreds of thousands of possible feature combinations.", "cite_spans": [ { "start": 667, "end": 671, "text": "[16]", "ref_id": "BIBREF15" }, { "start": 819, "end": 822, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Identifying EDITED words", "sec_num": "2" }, { "text": "This section describes the kinds of linear classifiers that the boosting algorithm infers. Abstractly, we regard each word token as an event characterized by a finite tuple of random variables", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "(Y, X 1 , . . . , X m ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "Y is the the conditioned variable and ranges over {\u22121, +1}, with Y = +1 indicating that the word is not edited. X 1 , . . . , X m are the conditioning variables; each X j ranges over a finite set X j . 
For example, X 1 is the orthographic form of the word and X 1 is the set of all words observed in the training section of the corpus. Our classifiers use m = 18 conditioning variables. The following subsection describes the conditioning variables in more detail; they include variables indicating the POS tag of the preceding word, the tag of the following word, whether or not the word token appears in a \"rough copy\" as explained below, etc. The goal of the classifier is to predict the value of Y given values for X 1 , . . . , X m . The classifier makes its predictions based on the occurence of combinations of conditioning variable/value pairs called features. A feature F is a set of variable-value pairs X j , x j , with x j \u2208 X j . Our classifier is defined in terms of a finite number n of features F 1 , . . . , F n , where n \u2248 10 6 in our classifiers. 2 Each feature F i de-fines an associated random boolean variable", "cite_spans": [ { "start": 1066, "end": 1067, "text": "2", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "F i = X j ,x j \u2208F i (X j =x j ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "where (X=x) takes the value 1 if X = x and 0 otherwise. That is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "F i = 1 iff X j = x j for all X j , x j \u2208 F i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "Our classifier estimates a feature weight \u03b1 i for each feature F i , that is used to define the prediction variable Z:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "Z = n i=1 \u03b1 i F i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "The prediction made by the classifier is sign(Z) = Z/|Z|, i.e., \u22121 or +1 depending on the sign of Z.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "Intuitively, our goal is to adjust the vector of feature weights \u03b1 = (\u03b1 1 , . . . , \u03b1 n ) to minimize the expected misclassification rate E[(sign(Z) = Y )]. This function is difficult to minimize, so our boosting classifier minimizes the expected Boost loss E[exp(\u2212Y Z)]. As Singer and Schapire [16] point out, the misclassification rate is bounded above by the Boost loss, so a low value for the Boost loss implies a low misclassification rate.", "cite_spans": [ { "start": 295, "end": 299, "text": "[16]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "Our classifier estimates the Boost loss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "as E t [exp(\u2212Y Z)], where E t [\u2022]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "is the expectation on the empirical training corpus distribution. 
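For concreteness, the prediction rule and the empirical Boost loss just described can be sketched in a few lines of Python (a simplified illustration only, not the implementation used in this work; the dictionary-based feature encoding and the variable names W0 and T1 are purely expository):

```python
# A minimal sketch (not the implementation used here): each feature is a
# frozenset of (conditioning-variable, value) pairs; alpha maps features to weights.
from math import exp

def score(token, alpha):
    """Z = sum_i alpha_i * F_i(token); F_i fires iff every pair in the feature holds."""
    return sum(w for feature, w in alpha.items()
               if all(token.get(var) == val for var, val in feature))

def predict(token, alpha):
    """sign(Z): +1 means 'not edited', -1 means 'edited' (ties broken toward +1)."""
    return 1 if score(token, alpha) >= 0 else -1

def empirical_boost_loss(examples, alpha):
    """E_t[exp(-Y Z)] over (token, y) pairs drawn from the training corpus."""
    return sum(exp(-y * score(token, alpha)) for token, y in examples) / len(examples)

# Hypothetical toy feature conjoining the word W0 and the following POS tag T1.
f = frozenset({("W0", "I"), ("T1", "VBD")})
alpha = {f: 0.7}
print(predict({"W0": "I", "T1": "VBD"}, alpha))                                  # 1
print(round(empirical_boost_loss([({"W0": "I", "T1": "VBD"}, 1)], alpha), 2))    # 0.5
```
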
The feature weights are adjusted iteratively; one weight is changed per iteration. The feature whose weight is to be changed is selected greedily to minimize the Boost loss using the algorithm described in [7] . Training continues for 25,000 iterations. After each iteration the misclassification rate on the development", "cite_spans": [ { "start": 272, "end": 275, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "corpus E d [(sign(Z) = Y )] is estimated, where E d [\u2022]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "is the expectation on empirical development corpus distribution. While each iteration lowers the Boost loss on the training corpus, a graph of the misclassification rate on the development corpus versus iteration number is a noisy U-shaped curve, rising at later iterations due to overlearning. The value of \u03b1 returned word token in our training data. We developed a method for quickly identifying such extensionally equivalent feature pairs based on hashing XORed random bitmaps, and deleted all but one of each set of extensionally equivalent features (we kept a feature with the smallest number of conditioning variables).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "by the estimator is the one that minimizes the misclassficiation rate on the development corpus; typically the minimum is obtained after about 12,000 iterations, and the feature weight vector \u03b1 contains around 8000 nonzero feature weights (since some weights are adjusted more than once). 3 ", "cite_spans": [ { "start": 289, "end": 290, "text": "3", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Boosting estimates of linear classifiers", "sec_num": "2.1" }, { "text": "This subsection describes the conditioning variables used in the EDITED classifier. Many of the variables are defined in terms of what we call a rough copy. Intuitively, a rough copy identifies repeated sequences of words that might be restarts or repairs. Punctuation is ignored for the purposes of defining a rough copy, although conditioning variables indicate whether the rough copy includes punctuation. A rough copy in a tagged string of words is a substring of the form \u03b1 1 \u03b2\u03b3\u03b1 2 , where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditioning variables and features", "sec_num": "2.2" }, { "text": "1. \u03b1 1 (the source) and \u03b1 2 (the copy) both begin with non-punctuation, 2. the strings of non-punctuation POS tags of \u03b1 1 and \u03b1 2 are identical, 3 . \u03b2 (the free final) consists of zero or more sequences of a free final word (see below) followed by optional punctuation, and 4. \u03b3 (the interregnum) consists of sequences of an interregnum string (see below) followed by optional punctuation.", "cite_spans": [ { "start": 145, "end": 146, "text": "3", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Conditioning variables and features", "sec_num": "2.2" }, { "text": "The set of free-final words includes all partial words (i.e., ending in a hyphen) and a small set of conjunctions, adverbs and miscellanea, such as and, or, actually, so, etc. The set of interregnum strings consists of a small set of expressions such as uh, you know, I guess, I mean, etc. 
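Before turning to the search procedure, the rough-copy pattern itself can be made concrete with a small sketch (a simplified illustration only: punctuation is ignored, the free-final and interregnum lists are abbreviated, and the POS tags in the example are hand-assigned rather than taken from the corpus):

```python
# Simplified sketch of matching the source / free-final / interregnum / copy pattern
# at one candidate source span; the lists below are abbreviated and hypothetical.
FREE_FINAL_WORDS = {"and", "or", "actually", "so"}
INTERREGNA = [("uh",), ("you", "know"), ("i", "mean"), ("i", "guess")]

def is_free_final(word):
    # partial words (ending in a hyphen) always count as free-final
    return word.endswith("-") or word.lower() in FREE_FINAL_WORDS

def match_rough_copy(words, tags, start, length):
    """If words[start:start+length] is the source of a rough copy, return the
    index one past the copy; otherwise return None."""
    source_tags = tags[start:start + length]
    j = start + length
    while j < len(words) and is_free_final(words[j]):        # free final
        j += 1
    matched = True
    while matched:                                           # interregnum strings
        matched = False
        for phrase in INTERREGNA:
            if tuple(w.lower() for w in words[j:j + len(phrase)]) == phrase:
                j += len(phrase)
                matched = True
    if length > 0 and tags[j:j + length] == source_tags:     # copy: same POS-tag sequence
        return j + length
    return None

# Example (2) from the text, with hand-assigned POS tags:
words = "I thought I cou- I mean I would finish the work".split()
tags  = ["PRP", "VBD", "PRP", "MD", "PRP", "VBP", "PRP", "MD", "VB", "DT", "NN"]
print(match_rough_copy(words, tags, 2, 1))   # source "I", free final "cou-",
                                             # interregnum "I mean", copy "I" -> 7
```
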
We search for rough copies in each sentence starting from left to right, searching for longer copies first. After we find a rough copy, we restart searching for additional rough copies following the free final string of the previous copy. We say that a word token is in a rough copy iff it appears in either the source or the free final. 4 (2) is an example of a rough copy.", "cite_spans": [ { "start": 628, "end": 629, "text": "4", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conditioning variables and features", "sec_num": "2.2" }, { "text": "(2) I thought I", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditioning variables and features", "sec_num": "2.2" }, { "text": "\u03b1 1 cou-, \u03b2 I mean, \u03b3 I \u03b1 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditioning variables and features", "sec_num": "2.2" }, { "text": "would finish the work Table 1 lists the conditioning variables used in our classifier. In that table, subscript integers refer to the relative position of word tokens relative to the current word; e.g. T 1 is the POS tag of the following word. The subscript f refers to the tag of the first word of the free final match. If a variable is not defined for a particular word it is given the special value 'NULL'; e.g., if a word is not in a rough copy then variables such as N m , N u , N i , N l , N r and T f all take the value NULL. Flags are booleanvalued variables, while numeric-valued variables are bounded to a value between 0 and 4 (as well as NULL, if appropriate). The three variables C t , C w and T i are intended to help the classifier capture very short restarts or repairs that may not involve a rough copy. The flags C t and C i indicate whether the orthographic form and/or tag of the next word (ignoring punctuation) are the same as those of the current word. T i has a non-NULL value only if the current word is followed by an interregnum string; in that case T i is the POS tag of the word following that interregnum.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conditioning variables and features", "sec_num": "2.2" }, { "text": "As described above, the classifier's features are sets of variable-value pairs. Given a tuple of variables, we generate a feature for each tuple of values that the variable tuple assumes in the training data. In order to keep the feature set managable, the tuples of variables we consider are restricted in various ways. The most important of these are constraints of the form 'if X j is included among feature's variables, then so is X k '. For example, we require that if a feature contains P i+1 then it also contains P i for i \u2265 0, and we impose a similiar constraint on POS tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditioning variables and features", "sec_num": "2.2" }, { "text": "For the purposes of this research the Switchboard corpus, as distributed by the Linguistic Data Consortium, was divided into four sections and the word immediately following the interregnum also appears in a (different) rough copy, then we say that the interregnum word token appears in a rough copy. This permits us to approximate the Switchboard annotation convention of annotating interregna as EDITED if they appear in iterated edits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical evaluation", "sec_num": "2.3" }, { "text": "(or subcorpora). 
The training subcorpus consists of all files in the directories 2 and 3 of the parsed/merged Switchboard corpus. Directory 4 is split into three approximately equal-size sections. (Note that the files are not consecutively numbered.) The first of these (files sw4004.mrg to sw4153.mrg) is the testing corpus. All edit detection and parsing results reported herein are from this subcorpus. The files sw4154.mrg to sw4483.mrg are reserved for future use. The files sw4519.mrg to sw4936.mrg are the development corpus. In the complete corpus three parse trees were sufficiently ill formed in that our tree-reader failed to read them. These trees received trivial modifications to allow them to be read, e.g., adding the missing extra set of parentheses around the complete tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical evaluation", "sec_num": "2.3" }, { "text": "We trained our classifier on the parsed data files in the training and development sections, and evaluated the classifer on the test section. Section 3 evaluates the parser's output in conjunction with this classifier; this section focuses on the classifier's performance at the individual word token level. In our complete application, the classifier uses a bitag tagger to assign each word a POS tag. Like all such taggers, our tagger has a nonnegligible error rate, and these tagging could conceivably affect the performance of the classifier. To determine if this is the case, we report classifier performance when trained both on \"Gold Tags\" (the tags assigned by the human annotators of the Switchboard corpus) and on \"Machine Tags\" (the tags assigned by our bitag tagger). We compare these results to a baseline \"null\" classifier, which never identifies a word as EDITED. Our basic measure of performance is the word misclassification rate (see Section 2.1). However, we also report precision and recall scores for EDITED words alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical evaluation", "sec_num": "2.3" }, { "text": "All words are assigned one of the two possible labels, EDITED or not. However, in our evaluation we report the accuracy of only words other than punctuation and filled pauses. Our logic here is much the same as that in the statistical parsing community which ignores the location of punctuation for purposes of evaluation [3, 5, 6] on the grounds that its placement is entirely conventional. The same can be said for filled pauses in the switchboard corpus.", "cite_spans": [ { "start": 322, "end": 325, "text": "[3,", "ref_id": "BIBREF2" }, { "start": 326, "end": 328, "text": "5,", "ref_id": "BIBREF4" }, { "start": 329, "end": 331, "text": "6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical evaluation", "sec_num": "2.3" }, { "text": "Our results are given in Table 2 . 
They show that our classifier makes only approximately 1/3 Post-interregnum tag flag ", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 32, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Empirical evaluation", "sec_num": "2.3" }, { "text": "W 0 Orthographic word P 0 , P 1 , P 2 , P f Partial word flags T \u22121 , T 0 , T 1 , T 2 , T f POS tags N m Number", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical evaluation", "sec_num": "2.3" }, { "text": "We now turn to the second pass of our two-pass architecture, using an \"off-the-shelf\" statistical parser to parse the transcribed speech after having removed the words identified as edited by the first pass. We first define the evaluation metric we use and then describe the results of our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing transcribed speech", "sec_num": "3" }, { "text": "In this section we describe the metric we use to grade the parser output. As a first desideratum we want a metric that is a logical extension of that used to grade previous statistical parsing work. We have taken as our starting point what we call the \"relaxed labeled precision/recall\" metric from previous research (e.g. [3, 5] ). This metric is characterized as follows. For a particular test corpus let N be the total number of nonterminal (and non-preterminal) constituents in the gold standard parses. Let M be the number of such constituents returned by the parser, and let C be the number of these that are correct (as defined below). Then precision = C/M and recall = C/N .", "cite_spans": [ { "start": 323, "end": 326, "text": "[3,", "ref_id": "BIBREF2" }, { "start": 327, "end": 329, "text": "5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Parsing metrics", "sec_num": "3.1" }, { "text": "A constituent c is correct if there exists a constituent d in the gold standard such that: The parsing literature uses \u2261 r rather than = because it is felt that two constituents should be considered equal if they disagree only in the placement of, say, a comma (or any other sequence of punctuation), where one constituent includes the punctuation and the other excludes it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing metrics", "sec_num": "3.1" }, { "text": "Our new metric, \"relaxed edited labeled precision/recall\" is identical to relaxed labeled precision/recall except for two modifications. First, in the gold standard all non-terminal subconstituents of an EDITED node are removed and the terminal constituents are made immediate children of a single EDITED node. Furthermore, two or more EDITED nodes with no separating non-edited material between them are merged into a single EDITED node. We call this version a \"simplified gold standard parse.\" All precision recall measurements are taken with respected to the simplified gold standard.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing metrics", "sec_num": "3.1" }, { "text": "Second, we replace \u2261 r with a new equivalence relation \u2261 e which we define as the smallest equivalence relation containing \u2261 r and satisfying begin(c) \u2261 e end(c) for each EDITED node c in the gold standard parse. 6 and PRT are considered to be identical as well. 
6 We considered but ultimately rejected defining \u2261e using the EDITED nodes in the returned parse rather Table 2 : Performance of the \"null\" classifier (which never marks a word as EDITED) and boosting classifiers trained on \"Gold Tags\" and \"Machine Tags\". We give a concrete example in Figure 1 . The first row indicates string position (as usual in parsing work, position indicators are between words). The second row gives the words of the sentence. Words that are edited out have an \"E\" above them. The third row indicates the equivalence relation by labeling each string position with the smallest such position with which it is equivalent.", "cite_spans": [ { "start": 213, "end": 214, "text": "6", "ref_id": "BIBREF5" }, { "start": 263, "end": 264, "text": "6", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 367, "end": 374, "text": "Table 2", "ref_id": null }, { "start": 549, "end": 557, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Parsing metrics", "sec_num": "3.1" }, { "text": "There are two basic ideas behind this definition. First, we do not care where the EDITED nodes appear in the tree structure produced by the parser. Second, we are not interested in the fine structure of EDITED sections of the string, just the fact that they are EDITED. That we do care which words are EDITED comes into our figure of merit in two ways. First, (noncontiguous) EDITED nodes remain, even though their substructure does not, and thus they are counted in the precision and recall numbers. Secondly (and probably more importantly), failure to decide on the correct positions of edited nodes can cause collateral damage to neighboring constituents by causing them to start or stop in the wrong place. This is particularly relevant because according to our definition, while the positions at the beginning and ending of an edit node are equivalent, the interior positions are not (unless related by the punctuation rule).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing metrics", "sec_num": "3.1" }, { "text": "than the simplified gold standard. We rejected this because the \u2261e relation would then itself be dependent on the parser's output, a state of affairs that might allow complicated schemes to improve the parser's performance as measured by the metric. See Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 254, "end": 262, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Parsing metrics", "sec_num": "3.1" }, { "text": "The parser described in [3] was trained on the Switchboard training corpus as specified in section 2.1. The input to the training algorithm was the gold standard parses minus all EDITED nodes and their children.", "cite_spans": [ { "start": 24, "end": 27, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "We tested on the Switchboard testing subcorpus (again as specified in Section 2.1). All parsing results reported herein are from all sentences of length less than or equal to 100 words and punctuation. When parsing the test corpus we carried out the following operations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "1. 
create the simplified gold standard parse by removing non-terminal children of an EDITED node and merging consecutive EDITED nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "2. remove from the sentence to be fed to the parser all words marked as edited by an edit detector (see below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "3. parse the resulting sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "4. add to the resulting parse EDITED nodes containing the non-terminal symbols removed in step 2. The nodes are added as high as possible (though the definition of equivalence from Section 3.1 should make the placement of this node largely irrelevant).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "5. evaluate the parse from step 4 against the simplified gold standard parse from step 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "We ran the parser in three experimental situations, each using a different edit detector in step 2. In the first of the experiments (labeled \"Gold Edits\") the \"edit detector\" was simply the simplified gold standard itself. This was to see how well the parser would do it if had perfect information about the edit locations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "In the second experiment (labeled \"Gold Tags\"), the edit detector was the one described in Section 2 trained and tested on the part-ofspeech tags as specified in the gold standard trees. Note that the parser was not given the gold standard part-of-speech tags. We were interested in contrasting the results of this experiment with that of the third experiment to gauge what improvement one could expect from using a more sophisticated tagger as input to the edit detector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "In the third experiment (\"Machine Tags\") we used the edit detector based upon the machine generated tags.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "The results of the experiments are given in Table 3 . The last line in the figure indicates the performance of this parser when trained and tested on Wall Street Journal text [3] . It is the \"Machine Tags\" results that we consider the \"true\" capability of the detector/parser combination: 85.3% precision and 86.5% recall.", "cite_spans": [ { "start": 175, "end": 178, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 44, "end": 51, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Parsing experiments", "sec_num": "3.2" }, { "text": "The general trends of Table 3 are much as one might expect. Parsing the Switchboard data is much easier given the correct positions of the EDITED nodes than without this information. The difference between the Gold-tags and the Machine-tags parses is small, as would be expected from the relatively small difference in the performance of the edit detector reported in Section 2. 
This suggests that putting significant effort into a tagger for use by the edit detector is unlikely to produce much improvement. Also, as one might expect, parsing conversational speech is harder than Wall Street Journal text, even given the gold-standard EDITED nodes.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "Probably the only aspect of the above numbers likely to raise any comment in the parsing community is the degree to which precision numbers are lower than recall. With the exception of the single pair reported in [3] and repeated above, no precision values in the recent statistical-parsing literature [2, 3, 4, 5, 14] have ever been lower than recall values. Even this one exception is by only 0.1% and not statistically significant.", "cite_spans": [ { "start": 213, "end": 216, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 302, "end": 305, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 306, "end": 308, "text": "3,", "ref_id": "BIBREF2" }, { "start": 309, "end": 311, "text": "4,", "ref_id": "BIBREF3" }, { "start": 312, "end": 314, "text": "5,", "ref_id": "BIBREF4" }, { "start": 315, "end": 318, "text": "14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "We attribute the dominance of recall over precision primarily to the influence of edit-detector mistakes. First, note that when given the gold standard edits the difference is quite small (0.3%). When using the edit detector edits the difference increases to 1.2%. Our best guess is that because the edit detector has high precision, and lower recall, many more words are left in the sentence to be parsed. Thus one finds more nonterminal constituents in the machine parses than in the gold parses and the precision is lower than the recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.3" }, { "text": "While there is a significant body of work on finding edit positions [1, 9, 10, 13, 17, 18] , it is difficult to make meaningful comparisons between the various research efforts as they differ in (a) the corpora used for training and testing, (b) the information available to the edit detector, and (c) the evaluation metrics used. For example, [13] uses a subsection of the ATIS corpus, takes as input the actual speech signal (and thus has access to silence duration but not to words), and uses as its evaluation metric the percentage of time the program identifies the start of the interregnum (see Section 2.2). On the other hand, [9, 10] use an internally developed corpus of sentences, work from a transcript enhanced with information from the speech signal (and thus use words), but do use a metric that seems to be similar to ours. Undoubtedly the work closest to ours is that of Stolcke et al. [18] , which also uses the transcribed Switchboard corpus. (However, they use information on pause length, etc., that goes beyond the transcript.) They categorize the transitions between words into more categories than we do. At first glance there might be a mapping between their six categories and our two, with three of theirs corresponding to EDITED words and three to not edited. If one accepts this mapping they achieve an error rate of 2.6%, down from their NULL rate of 4.5%, as contrasted with our error rate of 2.2% down from our NULL rate of 5.9%. 
The difference in NULL rates, however, raises some doubts that the numbers are truly measuring the same thing. There is also a small body of work on parsing disfluent sentences [8, 11] . Hindle's early work [11] does not give a formal evaluation of the parser's accuracy. The recent work of Schubert and Core [8] does give such an evaluation, but on a different corpus (from Rochester Trains project). Also, their parser is not statistical and returns parses on only 62% of the strings, and 32% of the strings that constitute sentences. Our statistical parser naturally parses all of our corpus. Thus it does not seem possible to make a meaningful comparison between the two systems.", "cite_spans": [ { "start": 68, "end": 71, "text": "[1,", "ref_id": "BIBREF0" }, { "start": 72, "end": 74, "text": "9,", "ref_id": "BIBREF8" }, { "start": 75, "end": 78, "text": "10,", "ref_id": "BIBREF9" }, { "start": 79, "end": 82, "text": "13,", "ref_id": "BIBREF12" }, { "start": 83, "end": 86, "text": "17,", "ref_id": "BIBREF16" }, { "start": 87, "end": 90, "text": "18]", "ref_id": "BIBREF17" }, { "start": 344, "end": 348, "text": "[13]", "ref_id": "BIBREF12" }, { "start": 634, "end": 637, "text": "[9,", "ref_id": "BIBREF8" }, { "start": 638, "end": 641, "text": "10]", "ref_id": "BIBREF9" }, { "start": 902, "end": 906, "text": "[18]", "ref_id": "BIBREF17" }, { "start": 1638, "end": 1641, "text": "[8,", "ref_id": "BIBREF7" }, { "start": 1642, "end": 1645, "text": "11]", "ref_id": "BIBREF10" }, { "start": 1668, "end": 1672, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 1770, "end": 1773, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Previous research", "sec_num": "4" }, { "text": "We have presented a simple architecture for parsing transcribed speech in which an edited word detector is first used to remove such words from the sentence string, and then a statistical parser trained on edited speech (with the edited nodes removed) is used to parse the text. The edit detector reduces the misclassification rate on edited words from the null-model (marking everything as not edited) rate of 5.9% to 2.2%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "To evaluate our parsing results we have introduced a new evaluation metric, relaxed edited labeled precision/recall. The purpose of this metric is to make evaluation of a parse tree relatively indifferent to the exact tree position of EDITED nodes, in much the same way that the previous metric, relaxed labeled precision/recall, make it indifferent to the attachment of punctuation. By this metric the parser achieved 85.3% precision and 86.5% recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "There is, of course, great room for improvement, both in stand-alone edit detectors, and their combination with parsers. 
Also of interest are models that compute the joint probabilities of the edit detection and parsing decisionsthat is, do both in a single integrated statistical process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Indeed,[17] suggests that filled pauses tend to indicate clause boundaries, and thus may be a help in parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It turns out that many pairs of features are extensionally equivalent, i.e., take the same values on each", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used a smoothing parameter as described in[7], which we estimate by using a line-minimization routine to minimize the classifier's minimum misclassification rate on the development corpus.4 In fact, our definition of rough copy is more complex. For example, if a word token appears in an interregnum", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Integrating multiple knowledge sources for detection and correction of repairs in humancomputer dialog", "authors": [ { "first": "J", "middle": [], "last": "Bear", "suffix": "" }, { "first": "J", "middle": [], "last": "Dowding", "suffix": "" }, { "first": "E", "middle": [], "last": "Shriberg", "suffix": "" } ], "year": null, "venue": "Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "56--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bear, J., Dowding, J. and Shriberg, E. Integrating multiple knowledge sources for detection and correction of repairs in human- computer dialog. In Proceedings of the 30th Annual Meeting of the Association for Com- putational Linguistics. 56-63.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Statistical parsing with a context-free grammar and word statistics", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "598--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence. AAAI Press/MIT Press, Menlo Park, CA, 1997, 598-603.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A maximum-entropyinspired parser", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 2000 Conference of the North American Chapter of the Association for Computational Linguistics. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. A maximum-entropy- inspired parser. In Proceedings of the 2000 Conference of the North American Chap- ter of the Association for Computational Linguistics. 
ACL, New Brunswick NJ, 2000.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A new statistical parser based on bigram lexical dependencies", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. J. A new statistical parser based on bigram lexical dependencies. In Pro- ceedings of the 34th Annual Meeting of the ACL. 1996.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Three generative lexicalized models for statistical parsing", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. J. Three generative lexical- ized models for statistical parsing. In Pro- ceedings of the 35th Annual Meeting of the ACL. 1997, 16-23.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Head-Driven Statistical Models for Natural Language Parsing. University of Pennsylvania", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. J. Head-Driven Statistical Models for Natural Language Parsing. Uni- versity of Pennsylvania, Ph.D. Dissertation, 1999.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Discriminative reranking for natural language parsing", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Collins", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. J. Discriminative reranking for natural language parsing. In Proceedings of the International Conference on Machine Learning (ICML 2000). 2000.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A syntactic framework for speech repairs and other disruptions", "authors": [ { "first": "M", "middle": [ "G" ], "last": "Core", "suffix": "" }, { "first": "L", "middle": [ "K" ], "last": "Schubert", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "413--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "Core, M. G. and Schubert, L. K. A syn- tactic framework for speech repairs and other disruptions. In Proceedings of the 37th An- nual Meeting of the Association for Compu- tational Linguistics. 1999, 413-420.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Intonational boundaries, speech repairs and discourse markers: modeling spoken dialog", "authors": [ { "first": "P", "middle": [ "A" ], "last": "Heeman", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Allen", "suffix": "" } ], "year": 1997, "venue": "35th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "254--261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeman, P. A. and Allen, J. F. Into- national boundaries, speech repairs and dis- course markers: modeling spoken dialog. 
In 35th Annual Meeting of the Association for Computational Linguistics and 17th Interna- tional Conference on Computational Linguis- tics. 1997, 254-261.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Speech repairs, intonational phrases and discourse markers: modeling speakers' utterances in spoken dialogue", "authors": [ { "first": "P", "middle": [ "A" ], "last": "Heeman", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Allen", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "254", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeman, P. A. and Allen, J. F. Speech repairs, intonational phrases and discourse markers: modeling speakers' utterances in spoken dialogue. Computational Linguistics 254 (1999).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deterministic parsing of syntactic non-fluencies", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1983, "venue": "Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "123--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, D. Deterministic parsing of syn- tactic non-fluencies. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics. 1983, 123-128.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Statistical decision-tree models for parsing", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Magerman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "276--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Magerman, D. M. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Com- putational Linguistics. 1995, 276-283.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A corpus-based study of repair cues in spontaneous speech", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Nakatani", "suffix": "" }, { "first": "J", "middle": [], "last": "Hirschberg", "suffix": "" } ], "year": 1994, "venue": "Journal of the Acoustical Society of America", "volume": "953", "issue": "", "pages": "1603--1616", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nakatani, C. H. and Hirschberg, J. A corpus-based study of repair cues in sponta- neous speech. Journal of the Acoustical Soci- ety of America 953 (1994), 1603-1616.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning to parse natural language with maximum entropy models", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "34", "issue": "", "pages": "151--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi, A. Learning to parse natu- ral language with maximum entropy models. Machine Learning 34 1/2/3 (1999), 151-176.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Preliminaries to a Theory of Speech Disfluencies", "authors": [ { "first": "E", "middle": [ "E" ], "last": "Shriberg", "suffix": "" } ], "year": 1994, "venue": "PhD Dissertation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shriberg, E. E. Preliminaries to a The- ory of Speech Disfluencies. In PhD Disserta- tion. 
Department of Psychology, University of California-Berkeley, 1994.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improved boosting algorithms using confidencebased predictions", "authors": [ { "first": "Y", "middle": [], "last": "Singer", "suffix": "" }, { "first": "R", "middle": [ "E" ], "last": "Schapire", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Eleventh Annual Conference on Computational Learning Theory", "volume": "", "issue": "", "pages": "80--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Singer, Y. and Schapire, R. E. Im- proved boosting algorithms using confidence- based predictions. In Proceedings of the Eleventh Annual Conference on Computa- tional Learning Theory. 1998, 80-91.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic linguistic segmantation of conversational speech", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "E", "middle": [], "last": "Shriberg", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 4th International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A. and Shriberg, E. Auto- matic linguistic segmantation of conversa- tional speech. In Proceedings of the 4th In- ternational Conference on Spoken Language Processing (ICSLP-96). 1996.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic detection of sentence boundaries and disfluencies based on recognized words", "authors": [ { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "E", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "R", "middle": [], "last": "Bates", "suffix": "" }, { "first": "M", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "D", "middle": [], "last": "Hakkani", "suffix": "" }, { "first": "M", "middle": [], "last": "Plauche", "suffix": "" }, { "first": "G", "middle": [], "last": "T\u00fcr", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lu", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the International Conference on Spoken Language Processing", "volume": "5", "issue": "", "pages": "2247--2250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, A., Shriberg, E., Bates, R., Ostendorf, M., Hakkani, D., Plauche, M., T\u00fcr, G. and Lu, Y. Automatic detec- tion of sentence boundaries and disfluencies based on recognized words. Proceedings of the International Conference on Spoken Lan- guage Processing 5 (1998), 2247-2250.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "1. label(c) = label(d) For some reason, starting with[12] the labels ADVP2. begin(c) \u2261 r begin(d) 3. end(c) \u2261 r end(d)In 2 and 3 above we introduce an equivalence relation \u2261 r between string positions. We define \u2261 r to be the smallest equivalence relation satisfying a \u2261 r b for all pairs of string positions a and b separated solely by punctuation symbols.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Equivalent string positions as defined by \u2261 e .", "num": null }, "TABREF0": { "type_str": "table", "content": "
N i  Number of words in interregnum
N l  Number of words to left edge of source
N r  Number of words to right edge of source
C t  Followed by identical tag flag
C w  Followed by identical word flag
T i  Post-interregnum tag
", "text": "of words in common in source and copy N uNumber of words in source that do not appear in copy", "num": null, "html": null }, "TABREF1": { "type_str": "table", "content": "
of the misclassification errors made by the null classifier (0.022 vs. 0.059), and that using the POS tags produced by the bitag tagger does not have much effect on the classifier's performance (e.g., EDITED recall decreases from 0.678 to 0.668).
", "text": "Conditioning variables used in the EDITED classifier.", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "", "text": "Results of Switchboard parsing, sentence length \u2264 100.", "num": null, "html": null } } } }