{ "paper_id": "N10-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:49:50.455636Z" }, "title": "Formatting Time-Aligned ASR Transcripts for Readability", "authors": [ { "first": "Maria", "middle": [], "last": "Shugrina", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Inc. New York", "location": { "postCode": "10011", "region": "NY" } }, "email": "" }, { "first": "Michiel", "middle": [], "last": "Bacchiani", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Inc. New York", "location": { "postCode": "10011", "region": "NY" } }, "email": "" }, { "first": "Martin", "middle": [], "last": "Jansche", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Inc. New York", "location": { "postCode": "10011", "region": "NY" } }, "email": "" }, { "first": "Michael", "middle": [], "last": "Riley", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Inc. New York", "location": { "postCode": "10011", "region": "NY" } }, "email": "" }, { "first": "Cyril", "middle": [], "last": "Allauzen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Inc. New York", "location": { "postCode": "10011", "region": "NY" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We address the problem of formatting the output of an automatic speech recognition (ASR) system for readability, while preserving wordlevel timing information of the transcript. Our system enriches the ASR transcript with punctuation, capitalization and properly written dates, times and other numeric entities, and our approach can be applied to other formatting tasks. The method we describe combines hand-crafted grammars with a class-based language model trained on written text and relies on Weighted Finite State Transducers (WF-STs) for the preservation of start and end time of each word.", "pdf_parse": { "paper_id": "N10-1023", "_pdf_hash": "", "abstract": [ { "text": "We address the problem of formatting the output of an automatic speech recognition (ASR) system for readability, while preserving wordlevel timing information of the transcript. Our system enriches the ASR transcript with punctuation, capitalization and properly written dates, times and other numeric entities, and our approach can be applied to other formatting tasks. The method we describe combines hand-crafted grammars with a class-based language model trained on written text and relies on Weighted Finite State Transducers (WF-STs) for the preservation of start and end time of each word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The output of a typical ASR system lacks punctuation, capitalization and proper formatting of entities such as phone numbers, time expressions and dates. Even if such automatic transcript is free of recognition errors, it is difficult for a human to parse. The proper formatting of the transcript gains particular importance in applications where the user relies on ASR output for information and where informationrich numeric entities (e.g. time expressions, monetary amounts) are common. A good example of such application is a voicemail transcription system. The goal of our work is to transform the raw transcript into its proper written form in order to optimize it for the visual scanning task by the end user. 
We present quantitative and qualitative evaluation of our system with a focus on numeric entity formatting, punctuation and capitalization (See Fig. 1).", "cite_spans": [], "ref_spans": [ { "start": 861, "end": 867, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "Apart from text, the ASR output usually contains word-level metadata such as time-alignment and confidence. Such quantities may be useful for a variety of applications. Although simple to recover via word alignment after some types of formatting, word-level quantities may be difficult to preserve if the original text has undergone a significant transformation. We present a formal and general augmentation of our WFST-based technique that preserves word-level timing and confidence information during arbitrary formatting.
Figure 1: An example of a raw transcript with ambiguous written forms and the output of our formatting system.
Raw Transcript: hi bill it's tracy at around three thirty P M just got an apartment for one thousand three thirty one thousand four hundred a month my number is five five five eight eight eight eight extension is three thirty bye
Our Result: Hi Bill, it's Tracy at around 3:30 PM, just got an apartment for 1,330 1,400 a month. My number is 555-8888 extension is 330. Bye.", "cite_spans": [], "ref_spans": [ { "start": 569, "end": 577, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "The problems of sentence boundary detection and punctuation of transcripts have received a substantial amount of attention, e.g. (Beeferman et al., 1998; Shriberg et al., 2000; Christensen et al., 2001; Liu et al., 2006; Gravano et al., 2009). Capitalization of ASR transcripts has received less attention (Brown and Coden, 2002; Gravano et al., 2009), but there has also been work on case restoration in the context of machine translation (Chelba and Acero, 2006; Wang et al., 2006). Our work does not propose competing methods for transcript punctuation and capitalization. Instead, we aim to provide a common framework for a wide range of formatting tasks. Our method extends the approach of Gravano et al. (2009) with a general WFST formulation suitable for formatting monetary amounts, time expressions, dates, phone numbers, honorifics and more, in addition to punctuation and capitalization.", "cite_spans": [ { "start": 129, "end": 153, "text": "(Beeferman et al., 1998;", "ref_id": "BIBREF2" }, { "start": 154, "end": 176, "text": "Shriberg et al., 2000;", "ref_id": "BIBREF17" }, { "start": 177, "end": 202, "text": "Christensen et al., 2001;", "ref_id": "BIBREF6" }, { "start": 203, "end": 220, "text": "Liu et al., 2006;", "ref_id": "BIBREF13" }, { "start": 221, "end": 242, "text": "Gravano et al., 2009)", "ref_id": "BIBREF7" }, { "start": 303, "end": 326, "text": "(Brown and Coden, 2002;", "ref_id": "BIBREF3" }, { "start": 327, "end": 348, "text": "Gravano et al., 2009)", "ref_id": "BIBREF7" }, { "start": 438, "end": 462, "text": "(Chelba and Acero, 2006;", "ref_id": "BIBREF4" }, { "start": 463, "end": 481, "text": "Wang et al., 2006)", "ref_id": "BIBREF18" }, { "start": 694, "end": 715, "text": "Gravano et al. (2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "To our knowledge, this scope of the problem has not been addressed in the literature. 
Yet such formatting can have a high impact on transcript readability. In this paper we focus on numeric entity formatting. In general, context-independent rules fail to adequately perform this task due to its inherent ambiguity (See Fig. 1). For example, the spoken words \"three thirty\" should be written differently in these three contexts:", "cite_spans": [], "ref_spans": [ { "start": 316, "end": 322, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "\u2022 meet me at 3:30", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "\u2022 you owe me 330", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "\u2022 dinner for three 30 minutes later
The proper written form of a numeric entity depends on its class (time, monetary amount, etc.). In this sense, formatting is related to the problem of named entity (NE) detection and value extraction, as defined by MUC-7 (Chinchor, 1997). Several authors have considered the problem of NE value extraction from raw transcripts (Huang et al., 2001; Jansche and Abney, 2002; B\u00e9chet et al., 2004; Levit et al., 2004). This is an information extraction task that involves identifying transcript words corresponding to a particular NE class and extracting an unambiguous value of that NE (e.g. the value of the date NE \"december first oh nine\" is \"12/01/2009\"). Although relevant, this information extraction does not directly address the problem of proper formatting and ordinarily requires a tagged corpus for training.", "cite_spans": [ { "start": 256, "end": 272, "text": "(Chinchor, 1997)", "ref_id": "BIBREF5" }, { "start": 363, "end": 383, "text": "(Huang et al., 2001;", "ref_id": "BIBREF9" }, { "start": 384, "end": 408, "text": "Jansche and Abney, 2002;", "ref_id": "BIBREF10" }, { "start": 409, "end": 429, "text": "B\u00e9chet et al., 2004;", "ref_id": "BIBREF1" }, { "start": 430, "end": 449, "text": "Levit et al., 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "A parallel corpus containing raw transcriptions and the corresponding formatted strings would facilitate the solution to the transcript formatting problem. However, there is no such corpus available. Therefore, we follow the approach of Gravano et al. and provide an approximation that exploits readily available written text instead. In section 2 we detail our method, provide a probabilistic interpretation and present a practical formulation of the solution in terms of WFSTs. Section 3 shows how to augment the WFST formulation to preserve word-level timing and confidence. Section 4 presents both qualitative and quantitative evaluation of our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction and Prior Work", "sec_num": "1" }, { "text": "First, handwritten grammars are used to generate all plausible written forms. These variants are then scored with a language model (LM) approximating the probability over written strings. To overcome the data sparsity associated with written numeric strings, we introduce numeric classes into the LM. In section 2.1 we give a probabilistic formulation of this approach. In section 2.2 we comment on the handwritten grammars, and in section 2.3 we discuss the class-based language model used for scoring. 
Section 2.4 provides the WFST formulation of the solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "The problem of estimating the best written form \u0175 of a spoken sequence of words s can be formulated as a Machine Translation (MT) problem of translating a string s from the language of spoken strings into a language of written strings. From a statistical standpoint, \u0175 can be estimated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "2.1" }, { "text": "\u0175 = argmax w {P (w|s)} \u2248 argmax w {P \u2032 (s|w) P \u2032 (w)},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "2.1" }, { "text": "where P (\u2022) denotes probability, and P \u2032 (\u2022) denotes a probability approximation. The probability over written strings P (w) can be estimated by training an n-gram language model on amply available written text. The absence of a parallel corpus containing sequences of spoken words and their written renditions makes the conditional distribution P (s|w) impossible to estimate. An approximation P \u2032 (s|w) can be obtained by defining handwritten grammars that generate multiple unweighted written variants for any spoken sequence. For a given s, a collection of grammars encodes a uniform probability distribution across the set of all written variants generated for s and assigns a zero probability to any string not in this set. Such grammar-based modeling of P (s|w) combined with statistical estimation of P (w) takes advantage of prior knowledge, but does not share the disadvantages of rigid, fully rule-based systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Formulation", "sec_num": "2.1" }, { "text": "Handwritten grammars G 1 ...G m are used to generate unweighted written variants for a raw string s. In Gravano's work (Gravano et al., 2009) the generated variants include optional punctuation between every two words and an optional capitalization for every word. Our system supports a wider range of variants, including but not limited to multiple variants of number formatting.", "cite_spans": [ { "start": 119, "end": 141, "text": "(Gravano et al., 2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Handwritten Grammars", "sec_num": "2.2" }, { "text": "The handwritten grammars can be very restrictive or very liberal, depending on the application requirements. For example, a grammar we use to generate punctuation and capitalization only generates sentences with the first word capitalized. This enforces conventions and consistency, which the best scoring variant could occasionally violate. On the other hand, the grammar for number formatting could be very liberal in producing written variants (See Fig. 2).
Figure 2: An FSA encoding all variants generated by the number grammar for a spoken string \"three thirty\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Handwritten Grammars", "sec_num": "2.2" }, { "text": "Jansche and Abney (2002) observe that handwritten rules deterministically tagging numeric strings of a certain length as phone numbers perform surprisingly well on phone number NE identification in voicemail. If appropriate to the task, deterministic grammars can be incorporated into the grammar stack. 
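As a toy illustration of this variant generation (a minimal sketch under our own simplifying assumptions, not the paper's actual grammar FSTs; all names below are hypothetical), a two-word spoken number can be expanded into the candidates of Fig. 2:
```python
# Minimal sketch of unweighted variant generation, mimicking Fig. 2 for
# 'three thirty'. A real system would compile such rules into the
# unweighted FSTs T_1...T_m rather than enumerate strings in Python.
SPOKEN_DIGITS = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5,
                 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9}
SPOKEN_TENS = {'twenty': 20, 'thirty': 30, 'forty': 40, 'fifty': 50}

def number_variants(words):
    # Always keep the fully spelled-out form as one candidate.
    variants = {' '.join(words)}
    if len(words) == 2 and words[0] in SPOKEN_DIGITS and words[1] in SPOKEN_TENS:
        h, t = SPOKEN_DIGITS[words[0]], SPOKEN_TENS[words[1]]
        variants.add('%d:%02d' % (h, t))   # time reading, e.g. '3:30'
        variants.add(str(h * 100 + t))     # quantity reading, e.g. '330'
        variants.add('%d %d' % (h, t))     # two numbers, e.g. '3 30'
    return variants

print(number_variants(['three', 'thirty']))
# e.g. {'three thirty', '3:30', '330', '3 30'}
```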
The unweighted written variants generated by applying G 1 ...G m to s are then scored with the language model.", "cite_spans": [ { "start": 107, "end": 131, "text": "Jansche and Abney (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 97, "end": 103, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Handwritten Grammars", "sec_num": "2.2" }, { "text": "The probability distribution over written text P (w) can be approximated by a Katz back-off n-gram language model trained on written text in a domain semantically similar to the domain for which the ASR engine is deployed. Unlike some of the approaches used for NE identification (Jansche and Abney, 2002; Levit et al., 2004) and sentence boundary detection (Christensen et al., 2001; Shriberg et al., 2000; Liu et al., 2006), LM-based scoring can exploit neither a context larger than n tokens nor prosodic features. The advantage of the LM approach is the ease of applying it to new formatting tasks: no new tagged corpus, and only trivial changes to the preprocessing of the training text would be required. If the LM is to score written numeric strings, care must be taken in modeling numbers. Representing each written number as a token (e.g. tokens \"1,235\", \"15\") during training results in a very large model and suffers from data sparsity even with very large training corpora. An alternative approach of modeling every digit as a token (e.g. \"15\" comprises tokens \"1\" and \"5\") fails to model sufficient context for longer digit strings. A partially class-based LM remedies the drawbacks of both approaches, and has been used for tasks such as NE tagging (B\u00e9chet et al., 2004).
Class Set A (numeric range: interpretation)
2-9: single digits
10-12: up to the largest hour in a 12-hour system
13-31: up to the largest day of the month
32-59: up to the largest minute in a time expression
other 2-digit: all other 2-digit numbers
other 3-digit: all 3-digit numbers
1900-2099: common year numbers
other 4-digit: all other 4-digit numbers
10000-99999: all 5-digit numbers; e.g. US zipcodes
\u2265 100000: all large numbers", "cite_spans": [ { "start": 280, "end": 305, "text": "(Jansche and Abney, 2002;", "ref_id": "BIBREF10" }, { "start": 306, "end": 325, "text": "Levit et al., 2004)", "ref_id": "BIBREF12" }, { "start": 358, "end": 384, "text": "(Christensen et al., 2001;", "ref_id": "BIBREF6" }, { "start": 385, "end": 407, "text": "Shriberg et al., 2000;", "ref_id": "BIBREF17" }, { "start": 408, "end": 425, "text": "Liu et al., 2006)", "ref_id": "BIBREF13" }, { "start": 1264, "end": 1274, "text": "(B\u00e9chet et", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "2.3" }, { "text": "Class Set B (numeric range: interpretation)
0-9: one-digit string
10-99: two-digit string
...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class Set B Numeric range", "sec_num": null }, { "text": "10^9 to 10^10 \u2212 1: ten-digit string
\u2265 10^10: longer digit string
Table 1 : Two sets of number classes used in our system. Each sequence of consecutive digit characters is mapped to the appropriate class. For example, \"$1,235.12\" would become \" dollar 1 comma num 100 999 period num 10 12 \" in Class Set A and \" dollar num 1D comma num 3D period num 2D \" in Class Set B.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Class Set B Numeric range", "sec_num": null }, { "text": "
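To make the class mapping concrete, here is a minimal sketch of a deterministic digit-string-to-class mapping for Class Set A; the class-token spellings below are illustrative assumptions of ours, since Table 1 does not render the exact token strings:
```python
import re

def number_class(digits):
    # Map one maximal run of digit characters to a Class Set A token.
    # 0 and 1 stay literal, matching the Table 1 example for '$1,235.12'.
    n = int(digits)
    if n <= 1:
        return digits
    if n <= 9:
        return '_num_2_9_'
    if n <= 12:
        return '_num_10_12_'
    if n <= 31:
        return '_num_13_31_'
    if n <= 59:
        return '_num_32_59_'
    if n <= 99:
        return '_num_2digit_'
    if n <= 999:
        return '_num_100_999_'
    if 1900 <= n <= 2099:
        return '_num_1900_2099_'    # common year numbers
    if n <= 9999:
        return '_num_4digit_'
    if n <= 99999:
        return '_num_10000_99999_'  # e.g. US zipcodes
    return '_num_large_'

def classify_numbers(text):
    # Replace each maximal digit run; everything else passes unchanged,
    # like the unweighted transducer K introduced in section 2.4.
    return re.sub(r'\d+', lambda m: number_class(m.group()), text)

print(classify_numbers('$1,235.12'))  # $1,_num_100_999_._num_10_12_
```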
The generalization provided by classes eliminates data sparsity while still modeling sufficient context. We experiment with two sets of classes (See Table 1). Class Set B, based on (B\u00e9chet et al., 2004), marks strings of n consecutive digits as belonging to an n-digit class, assuming nothing about the number distribution. Class Set A is based on intuition about the number distribution in text (See Table 1, Interpretation). In section 4.4 we show that Class Set A achieves better performance on number formatting. Since the choice of classes affects performance, future research could focus on finding an optimal set of number classes automatically. Clustering techniques, often used to derive class definitions from training text, could be applied.", "cite_spans": [ { "start": 194, "end": 215, "text": "(B\u00e9chet et al., 2004)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 411, "end": 418, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Class Set B Numeric range", "sec_num": null }, { "text": "Although more punctuation marks could be considered, we focus on periods and commas. Similarly to Gravano et al. (2009), we map all other punctuation marks in the training text to these two. In many formatting scenarios (e.g. spelled-out acronyms, numeric ranges), spaces are ambiguous and significant, and it is therefore important to consider whitespace when scoring the written variants. Because of this, we model space as a token in the LM.", "cite_spans": [ { "start": 98, "end": 119, "text": "Gravano et al. (2009)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Class Set B Numeric range", "sec_num": null }, { "text": "The one-best 1 ASR output s can be represented by a Finite State Acceptor (FSA) S. We describe a series of standard WFST operations on S resulting in the FSA W best encoding the best estimated formatted variant \u0175. This section assumes familiarity with WFSTs; for background see (Mohri, 2009). We encode each grammar G i as an unweighted FST T i that transduces the raw transcript to its formatted versions. The necessity to encode them as FSTs restricts the set of grammars to regular grammars (Hopcroft and Ullman, 1979), which are sufficiently powerful for most formatting tasks. The back-off n-gram LM is naturally represented as a weighted deterministic FSA G with negative log probability weights (Mohri et al., 2008). The deterministic mapping of digit strings to number class tokens can also be accomplished by an unweighted transducer K, which passes all non-numeric strings unchanged.", "cite_spans": [ { "start": 280, "end": 293, "text": "(Mohri, 2009)", "ref_id": "BIBREF15" }, { "start": 497, "end": 524, "text": "(Hopcroft and Ullman, 1979)", "ref_id": "BIBREF8" }, { "start": 696, "end": 716, "text": "(Mohri et al., 2008)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "WFST Formulation", "sec_num": "2.4" }, { "text": "Composing the input acceptor S with the grammar transducers T i results in a transducer W with all written variants on the output. Projected onto its output labels, W becomes an acceptor W out . W class , the result of the composition of W out with K, has all formatted written variants on the input side and the formatted variants with digit strings replaced by class tokens on the output. The output side of W class can then be scored via composition with G to produce a weighted transducer W scored . 
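Before stating the algorithm formally, here is a runnable toy analogue of this pipeline, with sets of strings standing in for automata and stub functions standing in for the grammars, the class mapper K and the LM G; it illustrates the data flow only, not the WFST implementation:
```python
# Toy analogue of Steps 1-5 below: sets of strings replace FSA state
# spaces, and a stub score() replaces the class-based LM G.
def apply_grammars(s, grammars):
    variants = {s}                      # Step 1: compose with T_1...T_m
    for g in grammars:
        variants = {v for u in variants for v in g(u)}
    return variants                     # Step 2: keep written strings only

def best_variant(s, grammars, classify, score):
    scored = {}
    for v in apply_grammars(s, grammars):
        tokens = classify(v)            # Step 3: digits -> class tokens
        scored[v] = score(tokens)       # Step 4: negative log P from the LM
    return min(scored, key=scored.get)  # Step 5: lowest cost = best path

# Toy usage: one grammar, a trivial classifier and hand-set LM costs.
g = lambda u: {u, u.replace('three thirty', '3:30')}
costs = {'meet me at three thirty': 9.0, 'meet me at 3:30': 4.5}
print(best_variant('meet me at three thirty', [g],
                   classify=lambda v: v, score=lambda t: costs[t]))
# -> 'meet me at 3:30'
```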
The shortest path in the Tropical Semiring on W scored contains the estimate of the best written variant on the input side. This algorithm can be summarized as follows (See Fig. 3):", "cite_spans": [], "ref_spans": [ { "start": 677, "end": 683, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "WFST Formulation", "sec_num": "2.4" }, { "text": "1. W = S \u2022 T 1 \u2022 T 2 \u2022 ... \u2022 T m
2. W out = Proj out (W)
3. W class = W out \u2022 K
4. W scored = W class \u2022 G
5. W best = Proj in (BestPath(W scored))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WFST Formulation", "sec_num": "2.4" }, { "text": "where \u2022 denotes FST composition, Proj in and Proj out denote projection on input and output labels respectively, and BestPath(X) is a function returning an FST encoding the shortest path of X. Step 2 is key: it ensures that the target written variants are not consumed in the subsequent composition operations. For efficiency reasons it is advisable to apply optimizations such as epsilon removal and determinization to the intermediate results. 2 ", "cite_spans": [ { "start": 443, "end": 444, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "WFST Formulation", "sec_num": "2.4" }, { "text": "We extend the WFST formulation to preserve word-level timing and confidence information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preserving Word-Level Metadata", "sec_num": "3" }, { "text": "A WFST is a finite set of states and transitions connecting them. Each transition has an input label, an output label and a weight in some semiring K. A semiring is informally defined as a tuple (K, \u2295, \u2297, 0, 1), where K is the set of elements, \u2295 and \u2297 are the addition and multiplication operations, 0 is the additive identity and multiplicative annihilator, and 1 is the multiplicative identity (See (Mohri, 2009)). By defining new semirings we can use standard FST operations to accomplish a wide range of goals.", "cite_spans": [ { "start": 398, "end": 411, "text": "(Mohri, 2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3.1" }, { "text": "In order to formulate time preservation within the FST formalism, we define the timing semiring K t where each element is a pair (s, e) that can be interpreted as the start and end time of a word:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Timing Semiring", "sec_num": "3.2" }, { "text": "K t = {(s, e) : s, e \u2208 R + \u222a {0, \u221e}}
(s 1 , e 1 ) \u2295 (s 2 , e 2 ) = (max(s 1 , s 2 ), min(e 1 , e 2 ))
(s 1 , e 1 ) \u2297 (s 2 , e 2 ) = (min(s 1 , s 2 ), max(e 1 , e 2 ))
0 = (0, \u221e)
1 = (\u221e, 0)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Timing Semiring", "sec_num": "3.2" }, { "text": "Intuitively, the addition operation takes the largest interval contained by both operand intervals, while multiplication returns the smallest interval fully containing both operand intervals. 3 This definition fulfills all the semiring properties as defined in (Mohri, 2009). Note that encoding only the duration of each word is not sufficient, as there may be time gaps between the words due to the segmentation of the source audio. Let S\u0303 denote the Weighted Finite State Acceptor (WFSA) encoding the raw ASR output with the start and end time stored in the weight of each arc. 
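A self-contained sketch of this semiring in code (our illustration; the class and variable names are not from the paper):
```python
from dataclasses import dataclass

INF = float('inf')

@dataclass(frozen=True)
class TimingWeight:
    # One element of K t: a (start, end) pair attached to an arc.
    s: float
    e: float

    def plus(self, other):
        # Semiring addition: largest interval contained in both operands.
        return TimingWeight(max(self.s, other.s), min(self.e, other.e))

    def times(self, other):
        # Semiring multiplication: smallest interval containing both.
        return TimingWeight(min(self.s, other.s), max(self.e, other.e))

ZERO = TimingWeight(0.0, INF)  # additive identity and annihilator
ONE = TimingWeight(INF, 0.0)   # multiplicative identity

# Multiplying along a path spans the words on that path:
print(TimingWeight(1.2, 1.5).times(TimingWeight(1.6, 2.1)))
# TimingWeight(s=1.2, e=2.1)
```
One can check that ZERO and ONE behave as required: x.times(ZERO) returns ZERO for any x with a non-negative start time, and x.times(ONE) returns x for any x with a non-negative end time.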
In order to preserve word-level confidence in addition to timing information, a Cartesian product of K t and the Log semiring can be used to store both time and confidence in an arc weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Timing Semiring", "sec_num": "3.2" }, { "text": "The goal is to associate the timing/confidence weights of S\u0303 with the word labels of W best , the best formatted string (See Sec. 2.4). Because the weight of each transition in S\u0303 already expresses the timing/confidence corresponding to its word label, it is sufficient to associate the labels of S\u0303 with the labels of W best . This is equivalent to identifying the output labels to which each input label is transduced during", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weight Synchronization", "sec_num": "3.3" }, { "text": "Step 1 in section 2.4. However, in general WFST operations may desynchronize input and output labels and weights, as the FST structure itself does not indicate a semantic correspondence between them. To alleviate this, we guarantee such a correspondence in our grammars by enforcing that for all paths in any grammar FST T i :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weight Synchronization", "sec_num": "3.3" }, { "text": "\u2022 an input label appears before any of the corresponding output labels, and
\u2022 output labels corresponding to a given input label appear before the next input label.
In practice, these assumptions are usually met by handwritten grammars. Even if these assumptions are violated for a small number of paths, only small word-level timing discrepancies will be incurred. Each path in W can be thought of as a sequence of subpaths with only the first transition containing a non-\u03b5 input label. We say that the input label of each such subpath corresponds to that subpath's output labels. The best path that has input labels corresponding to the raw ASR output can be obtained by composing the variants FST W with the best formatted FSA W best and picking any path. The timing weights are then restored by composing the weighted S\u0303 with this result. To preserve timing we add two more steps to Steps 1-5 in section 2.4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weight Synchronization", "sec_num": "3.3" }, { "text": "6. W raw:best = RmEps(AnyPath(W \u2022 W best))
7. W\u0303 best = S\u0303 \u2022 Map t (W raw:best)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Weight Synchronization", "sec_num": "3.3" }, { "text": "where RmEps(X) applies the epsilon-removal algorithm to X (Mohri, 2009), and Map t (X) maps all non-zero weights of X to the unity weight in the timing semiring. Because S\u0303 is an epsilon-free acceptor, the result W\u0303 best will contain the original weights of S\u0303 on the arcs with the corresponding input labels (See Fig. 4 for an example). The space-delimited words and the corresponding weights can then be read off by walking W\u0303 best .", "cite_spans": [ { "start": 58, "end": 71, "text": "(Mohri, 2009)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 309, "end": 331, "text": "Fig. 4 for an example)", "ref_id": null } ], "eq_spans": [], "section": "Weight Synchronization", "sec_num": "3.3" }, { "text": "Section 4.1 presents our datasets and an evaluation metric specific to number formatting, and section 4.2 describes our experimental system. 
We present quantitative evaluation of capitalization/punctuation performance and number formatting performance separately in sections 4.3 and 4.4. Because the ultimate goal of our work is to improve the readability of ASR transcripts, we also present the result of a user study of transcript readability in section 4.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "The training corpus contains 185M tokens of written text normalized to contain only comma and period punctuation marks. A set of 176M tokens (TRS) is used for training and a set of 7M tokens (PTS) is held back for testing punctuation and capitalization (See Table 3). To obtain a test input (NPTS) for our system, PTS is lowercased and all punctuation is removed. Number formatting is evaluated on a manually formatted test set. We manually processed the set of raw manual transcripts (NNTS) from the LDC Voicemail Part I training set (Padmanabhan et al., 1998) to obtain a reference number formatting set (NTS). All numeric entities in NTS were formatted according to the following conventions:", "cite_spans": [ { "start": 536, "end": 562, "text": "(Padmanabhan et al., 1998)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 258, "end": 265, "text": "Table 3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Data and Metrics", "sec_num": "4.1" }, { "text": "\u2022 all quantities under 10 are spelled out
\u2022 large amounts include commas: \"x,xxx,xxx\"
All contiguous sequences of words in NTS that could be a target for number formatting were marked as numeric entities, whether or not these words were formatted by the labeler (for example \"six\" is a numeric entity). To evaluate number formatting performance, we process NNTS with our full experimental system, then remove all capitalization and inter-word punctuation. This result is aligned with NTS, and each entity is scored separately as totally correct or totally incorrect (See Table 2), yielding:", "cite_spans": [], "ref_spans": [ { "start": 571, "end": 578, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data and Metrics", "sec_num": "4.1" }, { "text": "Numeric Entity Error Rate = 100 \u00b7 I / N,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Metrics", "sec_num": "4.1" }, { "text": "where I is the count of entities that did not match the reference entity string exactly and N is the total entity count. This error rate is independent of the numeric entity density in the test set. The errors are broken down into three types:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Metrics", "sec_num": "4.1" }, { "text": "\u2022 incorrect formatting - when the system incorrectly formats an entity that is formatted in the reference
\u2022 overformatting - when the system formats an entity that stays unformatted in the reference
\u2022 underformatting - when the system does not format an entity formatted in the reference
Out of 1801 voicemail transcripts in NTS, 1347 contain at least one entity for a total of 3563 entities, signifying a frequent occurrence of numeric entities in voicemail. 
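As a concrete reading of this metric, here is a minimal sketch of the entity-level scoring, assuming entities have already been aligned between hypothesis and reference as in Table 2; the digit-based test for whether an entity counts as formatted is a crude stand-in of ours:
```python
def score_entities(aligned):
    # aligned: list of (hyp, ref) pairs of written-entity strings, already
    # aligned as in Table 2; each entity is scored all-or-nothing.
    def formatted(s):
        return any(ch.isdigit() for ch in s)  # crude proxy for 'formatted'
    counts = {'IF': 0, 'OF': 0, 'UF': 0}
    for hyp, ref in aligned:
        if hyp == ref:
            continue
        if formatted(hyp) and formatted(ref):
            counts['IF'] += 1   # incorrect formatting
        elif formatted(hyp):
            counts['OF'] += 1   # overformatting
        else:
            counts['UF'] += 1   # underformatting
    neer = 100.0 * sum(counts.values()) / len(aligned)
    return neer, counts

print(score_entities([('330', '3:30'), ('555-8888', '555-8888'),
                      ('seven', '7')]))
# neer is 66.67 (approx.): 2 of the 3 entities fail the exact match
```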
There is an average of 7 raw transcript words per entity, suggesting that in many cases entity formatting is non-trivial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data and Metrics", "sec_num": "4.1" }, { "text": "The experimental system includes a 5-gram LM trained on TRS with spaces treated as tokens. Number evaluation is performed with two sets of number classes, listed in Table 1. System A contains the LM with classes from set A, and System B contains the LM with classes from set B. The experimental setup also includes the following grammars:", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 172, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental System", "sec_num": "4.2" }, { "text": "\u2022 G phone - deterministically formats as a phone number any string spoken like a US 7- or 10-digit phone number
\u2022 G number - expands all spoken numbers to a full range of variants, with support for time expressions, ordinals, decimals and dollar amounts
\u2022 G cap punct - generates all possible combinations of commas, periods and capitals; always capitalizes the first word of a sentence
Table 2 : An example of a raw transcript, a reference transcript with number formatting and the hypothesis produced by the system. The entities (bold) in reference and hypothesis are aligned and scored (here: incorrect, correct, incorrect).", "cite_spans": [], "ref_spans": [ { "start": 410, "end": 417, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experimental System", "sec_num": "4.2" }, { "text": "To evaluate the performance of capitalization and punctuation we run System A on NPTS with only the G cap punct grammar (in order not to introduce errors due to numeric formatting). The precision, recall and F-measure rates for periods, commas and capitals are computed using PTS as reference (See Fig. 5). It should be noted that a 5-gram language model that treats spaces as words models the same history as a 3-gram model that omits the spaces from training data. When this is taken into account, our results with a much smaller training set are comparable to those of Gravano et al. (2009). The F-measure scores for commas and periods are also comparable to the prosody-based work of (Christensen et al., 2001), with the precision of the period slightly lower, but compensated by recall. Thus, our system can perform additional formatting, while retaining a reasonable capitalization and punctuation performance.", "cite_spans": [ { "start": 555, "end": 576, "text": "Gravano et al. (2009)", "ref_id": "BIBREF7" }, { "start": 672, "end": 698, "text": "(Christensen et al., 2001)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 289, "end": 295, "text": "Fig. 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Evaluation of Punctuation", "sec_num": "4.3" }, { "text": "We evaluate the number formatting performance of Systems A and B, which use different sets of classes for the language modeling (See Table 1). We process NNTS with both systems and score against the reference formatted set NTS to obtain the Numeric Entity Error Rate (NEER). Class Set B naively breaks numbers into classes by digit count. System B, using this class set, performs worse than System A by 1.7% absolute (See Table 4). In particular, the overformatting rate (OFR) is higher by 1.2% absolute in System B than in System A. An example of overformatting is the mis-formatting of the English impersonal pronoun \"one\" as the digit \"1\". 
Such overformatting errors are much more noticeable than the underformatting errors, which are higher by 0.4% absolute in System A.", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 1", "ref_id": null }, { "start": 413, "end": 420, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of Number Formatting", "sec_num": "4.4" }, { "text": "NEER IFR OFR UFR
System A, exact: 16.1% 9.7% 5.4% 1.0%
System A, ignore space: 11.2% 4.9% 5.4% 1.0%
System B, exact: 17.8% 10.6% 6.6% 0.6%
System B, ignore space: 13.2% 6.0% 6.6% 0.6%
Table 4 : The total NEER score, NEER due to incorrect formatting (IFR), NEER due to overformatting (OFR) and NEER due to underformatting (UFR); NEER rates with whitespace errors ignored are also listed.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of Number Formatting", "sec_num": "4.4" }, { "text": "This result shows that the choice of classes for the class-based LM significantly impacts number formatting performance. The superior overall performance of System A suggests that prior knowledge in the choice of classes favorably impacts performance. In order to estimate the error rate not caused by whitespace errors, we also compute the NEER with whitespace errors ignored. It turns out that between 4 and 5% absolute of the errors are whitespace errors. Even if all whitespace errors are significant, the 83.9% rate of perfectly formatted entities suggests that the proposed formatting approach can achieve good performance on the number formatting task. To estimate how well the systems perform on specific number formatting tasks we count the number of reference entities containing certain formatting characters and compute the number of these entities correctly formatted by Systems A and B (See Table 5). The count of different formatting characters in NTS is small, but still provides an estimate of the number formatting performance for a real application like voicemail transcription. System A performs significantly better on the formatting of time expressions containing a colon, getting 74.8% correct. The NEER of System A for entities containing special formatting characters is under 28% for all formatting characters except comma, which is used inconsistently in the training text.", "cite_spans": [], "ref_spans": [ { "start": 959, "end": 966, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evaluation of Number Formatting", "sec_num": "4.4" }, { "text": "In addition to the quantitative evaluation we have conducted a small-scale study of transcript readability. The study aims to compare raw ASR transcripts, ASR transcripts formatted by our system and raw manual transcripts. We have processed LDC Voicemail Part 1 with our ASR engine, achieving a word error rate of 30%, and have selected 50 voicemails with an error rate under 30% and high informational content. Messages containing names, addresses and numbers were preferred. The word error rate on the selected voicemails is 20%. For each voicemail we have constructed three semantic multiple-choice questions, aimed at information extraction. We have asked each of 15 volunteers to answer all 3 questions about half of the voicemails. The questions were shown in sequence, while the transcript remained on the screen. The transcript for each voicemail was randomly selected to be ASR raw, ASR formatted or manual raw. 
The response time was measured individually for each question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Evaluation", "sec_num": "4.5" }, { "text": "The analysis of the responses reveals a statistically significant difference in response time between formatted and raw ASR transcripts (p = 0.02, even allowing for per-item and per-subject effects; see also Fig. 6) and comparable accuracy. The response times for formatted ASR were comparable to the response times for manual unformatted transcripts. This suggests that for transcripts with low error rates the formatting of the ASR output significantly impacts readability. This disagrees with a similar study (Jones et al., 2003), which found no significant difference in the comprehension rates between raw ASR transcripts and capitalized, punctuated ASR output with disfluencies removed. This could be due to a number of factors, including a different type of transformation performed on the ASR transcript, a different corpus, and a lower word error rate of transcripts in our user study.
Figure 6 : The standard R box plot of the response time for different transcript types and the corresponding accuracy (90.0%, 90.7% and 94.4%).", "cite_spans": [ { "start": 513, "end": 533, "text": "(Jones et al., 2003)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 208, "end": 214, "text": "Fig. 6", "ref_id": null }, { "start": 915, "end": 923, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Evaluation", "sec_num": "4.5" }, { "text": "We present a statistical approach suitable for a wide range of formatting tasks, including but not limited to punctuation, capitalization and numeric entity formatting. The average of 2 numeric entities per voicemail in the manually processed LDC Voicemail corpus shows that number formatting is important for applications such as voicemail transcription. Our best system achieves a Numeric Entity Error Rate of 16.1% on the ambiguous task of numeric entity formatting, while retaining capitalization and punctuation performance comparable to other published work. Our algorithm is concisely formulated in terms of WFSTs and is easily extended to new formatting tasks without the need for additional training data. In addition, the WFST formulation allows word-level timing and confidence to be retained during formatting. In order to overcome data sparsity associated with written numbers, we use a class-based language model and show that the choice of number classes significantly impacts number formatting performance. Finally, a statistically significant difference in question answering time for raw and formatted ASR transcripts in our user study demonstrates the positive impact of the transcript formatting on the readability of errorful ASR transcripts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "This WFST formulation can also be applied to the ASR lattice or n-best list with some modification to the scoring phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our system implements proper failure transitions available in the OpenFST Library (Allauzen et al., 2007).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that this is just a Cartesian product of min-max and max-min semirings. 
The elements of K t are not proper intervals, as it is possible for s to exceed e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "OpenFst: A general and efficient weighted finite-state transducer library", "authors": [ { "first": "C", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "M", "middle": [], "last": "Riley", "suffix": "" }, { "first": "J", "middle": [], "last": "Schalkwyk", "suffix": "" }, { "first": "W", "middle": [], "last": "Skut", "suffix": "" }, { "first": "M", "middle": [], "last": "Mohri", "suffix": "" } ], "year": 2007, "venue": "Proceedings of CIAA", "volume": "", "issue": "", "pages": "11--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Allauzen, M. Riley, J. Schalkwyk, W. Skut, and M. Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of CIAA, pages 11-23.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Detecting and extracting named entities from spontaneous speech in a mixed-initiative spoken dialogue context: How may I help you?", "authors": [ { "first": "F", "middle": [], "last": "B\u00e9chet", "suffix": "" }, { "first": "A", "middle": [], "last": "Gorin", "suffix": "" }, { "first": "J", "middle": [], "last": "Wright", "suffix": "" }, { "first": "D", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" } ], "year": 2004, "venue": "", "volume": "42", "issue": "", "pages": "207--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. B\u00e9chet, A. Gorin, J. Wright, and D. Hakkani-T\u00fcr. 2004. Detecting and extracting named entities from spontaneous speech in a mixed-initiative spoken dialogue context: How may I help you? Speech Communication, 42(2):207-225.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cyberpunc: A lightweight punctuation annotation system for speech", "authors": [ { "first": "D", "middle": [], "last": "Beeferman", "suffix": "" }, { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1998, "venue": "Proceedings of ICASSP", "volume": "", "issue": "", "pages": "689--692", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Beeferman, A. Berger, and J. Lafferty. 1998. Cyberpunc: A lightweight punctuation annotation system for speech. In Proceedings of ICASSP, pages 689-692.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Capitalization recovery for text", "authors": [ { "first": "E", "middle": [], "last": "Brown", "suffix": "" }, { "first": "A", "middle": [], "last": "Coden", "suffix": "" } ], "year": 2002, "venue": "Information Retrieval Techniques for Speech Applications", "volume": "", "issue": "", "pages": "11--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Brown and A. Coden. 2002. Capitalization recovery for text. In Information Retrieval Techniques for Speech Applications, pages 11-22, London, UK. Springer-Verlag.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Adaptation of maximum entropy capitalizer: Little data can help a lot", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "A", "middle": [], "last": "Acero", "suffix": "" } ], "year": 2006, "venue": "Computer Speech and Language", "volume": "20", "issue": "4", "pages": "382--399", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Chelba and A. Acero. 2006. 
Adaptation of maximum entropy capitalizer: Little data can help a lot. Computer Speech and Language, 20(4):382-399.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "MUC-7 named entity task definition", "authors": [ { "first": "N", "middle": [], "last": "Chinchor", "suffix": "" } ], "year": 1997, "venue": "Proceedings of MUC-7", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Chinchor. 1997. MUC-7 named entity task definition. In Proceedings of MUC-7.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Punctuation annotation using statistical prosody models", "authors": [ { "first": "H", "middle": [], "last": "Christensen", "suffix": "" }, { "first": "Y", "middle": [], "last": "Gotoh", "suffix": "" }, { "first": "S", "middle": [], "last": "Renals", "suffix": "" } ], "year": 2001, "venue": "ISCA Workshop on Prosody in Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Christensen, Y. Gotoh, and S. Renals. 2001. Punctuation annotation using statistical prosody models. In ISCA Workshop on Prosody in Speech Recognition and Understanding.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Restoring punctuation and capitalization in transcribed speech", "authors": [ { "first": "A", "middle": [], "last": "Gravano", "suffix": "" }, { "first": "M", "middle": [], "last": "Jansche", "suffix": "" }, { "first": "M", "middle": [], "last": "Bacchiani", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ICASSP", "volume": "", "issue": "", "pages": "4741--4744", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Gravano, M. Jansche, and M. Bacchiani. 2009. Restoring punctuation and capitalization in transcribed speech. In Proceedings of ICASSP, pages 4741-4744. IEEE Computer Society.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Introduction to automata theory, languages, and computation", "authors": [ { "first": "J", "middle": [], "last": "Hopcroft", "suffix": "" }, { "first": "J", "middle": [], "last": "Ullman", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "218--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Hopcroft and J. Ullman, 1979. Introduction to automata theory, languages, and computation, pages 218-219. Addison-Wesley.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Information extraction from voicemail", "authors": [ { "first": "J", "middle": [], "last": "Huang", "suffix": "" }, { "first": "G", "middle": [], "last": "Zweig", "suffix": "" }, { "first": "M", "middle": [], "last": "Padmanabhan", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Conference of the ACL", "volume": "", "issue": "", "pages": "290--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Huang, G. Zweig, and M. Padmanabhan. 2001. Information extraction from voicemail. In Proceedings of the Conference of the ACL, pages 290-297.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Information extraction from voicemail transcripts", "authors": [ { "first": "M", "middle": [], "last": "Jansche", "suffix": "" }, { "first": "S", "middle": [ "P" ], "last": "Abney", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Jansche and S. P. Abney. 2002. Information extraction from voicemail transcripts. 
In EMNLP.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Measuring the readability of automatic speech-to-text transcripts", "authors": [ { "first": "D", "middle": [], "last": "Jones", "suffix": "" }, { "first": "F", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "E", "middle": [], "last": "Gibson", "suffix": "" }, { "first": "E", "middle": [], "last": "Williams", "suffix": "" }, { "first": "E", "middle": [], "last": "Fedorenko", "suffix": "" }, { "first": "D", "middle": [], "last": "Reynolds", "suffix": "" }, { "first": "M", "middle": [], "last": "Zissman", "suffix": "" } ], "year": 2003, "venue": "Proceedings of EUROSPEECH", "volume": "", "issue": "", "pages": "1585--1588", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Jones, F. Wolf, E. Gibson, E. Williams, E. Fedorenko, D. Reynolds, and M. Zissman. 2003. Measuring the readability of automatic speech-to-text transcripts. In Proceedings of EUROSPEECH, pages 1585-1588.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Aspects of named entity processing", "authors": [ { "first": "M", "middle": [], "last": "Levit", "suffix": "" }, { "first": "P", "middle": [], "last": "Haffner", "suffix": "" }, { "first": "A", "middle": [], "last": "Gorin", "suffix": "" }, { "first": "H", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "E", "middle": [], "last": "N\u00f6th", "suffix": "" } ], "year": 2004, "venue": "Proceedings of INTERSPEECH", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Levit, P. Haffner, A. Gorin, H. Alshawi, and E. N\u00f6th. 2004. Aspects of named entity processing. In Proceedings of INTERSPEECH.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Enriching speech recognition with automatic detection of sentence boundaries and disfluencies", "authors": [ { "first": "Y", "middle": [], "last": "Liu", "suffix": "" }, { "first": "E", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "D", "middle": [], "last": "Hillard", "suffix": "" }, { "first": "M", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "M", "middle": [], "last": "Harper", "suffix": "" } ], "year": 2006, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "14", "issue": "5", "pages": "1526--1540", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Liu, E. Shriberg, A. Stolcke, D. Hillard, M. Ostendorf, and M. Harper. 2006. Enriching speech recognition with automatic detection of sentence boundaries and disfluencies. IEEE Transactions on Audio, Speech, and Language Processing, 14(5):1526-1540.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Speech recognition with weighted finite-state transducers", "authors": [ { "first": "M", "middle": [], "last": "Mohri", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "M", "middle": [], "last": "Riley", "suffix": "" } ], "year": 2008, "venue": "Handbook on Speech Processing and Speech Communication", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Mohri, F. Pereira, and M. Riley. 2008. Speech recognition with weighted finite-state transducers. In Handbook on Speech Processing and Speech Communication. 
Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Weighted automata algorithms", "authors": [ { "first": "M", "middle": [], "last": "Mohri", "suffix": "" } ], "year": 2009, "venue": "Handbook of Weighted Automata. Monographs in Theoretical Computer Science", "volume": "", "issue": "", "pages": "213--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Mohri. 2009. Weighted automata algorithms. In Handbook of Weighted Automata. Monographs in Theoretical Computer Science, pages 213-254. Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Voicemail Corpus Part I. Linguistic Data Consortium", "authors": [ { "first": "M", "middle": [], "last": "Padmanabhan", "suffix": "" }, { "first": "G", "middle": [], "last": "Ramaswamy", "suffix": "" }, { "first": "B", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "P", "middle": [], "last": "Gopalakrishnan", "suffix": "" }, { "first": "C", "middle": [], "last": "Dunn", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Padmanabhan, G. Ramaswamy, B. Ramabhadran, P. Gopalakrishnan, and C. Dunn. 1998. Voicemail Corpus Part I. Linguistic Data Consortium, Philadelphia.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Prosody-based automatic segmentation of speech into sentences and topics", "authors": [ { "first": "E", "middle": [], "last": "Shriberg", "suffix": "" }, { "first": "A", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "D", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" }, { "first": "G", "middle": [], "last": "T\u00fcr", "suffix": "" } ], "year": 2000, "venue": "Speech Communications", "volume": "32", "issue": "1-2", "pages": "127--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Shriberg, A. Stolcke, D. Hakkani-T\u00fcr, and G. T\u00fcr. 2000. Prosody-based automatic segmentation of speech into sentences and topics. Speech Communications, 32(1-2):127-154.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Capitalizing machine translation", "authors": [ { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of HLT/ACL", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Wang, K. Knight, and D. Marcu. 2006. Capitalizing machine translation. In Proceedings of HLT/ACL, pages 1-8. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "(a) S FSA (b) W variants FST (c) W out FSA (d) W class FST (e) W best FSA Figure 3: An example showing transducers produced during formatting.", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "W\u0303 best FST Figure 4: A small example of the time preservation section of the algorithm. 
Arcs with non-unity timing weights show the parenthesized pair of start and end time.", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "\u2022 time is written in a 12-hour system as \"xx:xx\" or \"xx\"
\u2022 dollar amounts are written as \"$x,xxx.xx\" with cents included if spoken
\u2022 US phone numbers are written as \"(xxx) xxx-xxxx\" or \"xxx-xxxx\"
\u2022 other phone numbers are written as digit strings
\u2022 decimals are written as \"x.x\"", "num": null, "type_str": "figure", "uris": null }, "FIGREF4": { "text": "Punctuation and capitalization results.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "html": null, "text": "Training set TRS and test set PTS.", "type_str": "table", "num": null, "content": "
words commas periods capitals
TRS 176M 10.6M 11.8M 24.3M
PTS 7M 420K 440K 880K
" }, "TABREF4": { "html": null, "text": "The count of formatted entities in NTS containing various formatting characters; the counts of these entities correctly formatted by the systems A and B.", "type_str": "table", "num": null, "content": "" } } } }