{ "paper_id": "P98-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:17:24.615939Z" }, "title": "Named Entity Scoring for Speech Input", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Burger", "suffix": "", "affiliation": { "laboratory": "", "institution": "The MITRE Corporation", "location": { "addrLine": "202 Burlington Road Bedford", "postCode": "01730", "region": "MA", "country": "USA" } }, "email": "" }, { "first": "David", "middle": [], "last": "Palmer", "suffix": "", "affiliation": { "laboratory": "", "institution": "The MITRE Corporation", "location": { "addrLine": "202 Burlington Road Bedford", "postCode": "01730", "region": "MA", "country": "USA" } }, "email": "palmer@mitre.org" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "", "affiliation": { "laboratory": "", "institution": "The MITRE Corporation", "location": { "addrLine": "202 Burlington Road Bedford", "postCode": "01730", "region": "MA", "country": "USA" } }, "email": "lynette@mitre.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a new scoring algorithm that supports comparison of linguistically annotated data from noisy sources. The new algorithm generalizes the Message Understanding Conference (MUC) Named Entity scoring algorithm, using a comparison based on explicit alignment of the underlying texts, followed by a scoring phase. The scoring procedure maps corresponding tagged regions and compares these according to tag type and tag extent, allowing us to reproduce the MUC Named Entity scoring for identical underlying texts. In addition, the new algorithm scores for content (transcription correctness) of the tagged region, a useful distinction when dealing with noisy data that may differ from a reference transcription (e.g., speech recognizer output). To illustrate the algorithm, we have prepared a small test data set consisting of a careful transcription of speech data and manual insertion of SGML named entity annotation. We report results for this small test corpus on a variety of experiments involving automatic speech recognition and named entity tagging.", "pdf_parse": { "paper_id": "P98-1031", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a new scoring algorithm that supports comparison of linguistically annotated data from noisy sources. The new algorithm generalizes the Message Understanding Conference (MUC) Named Entity scoring algorithm, using a comparison based on explicit alignment of the underlying texts, followed by a scoring phase. The scoring procedure maps corresponding tagged regions and compares these according to tag type and tag extent, allowing us to reproduce the MUC Named Entity scoring for identical underlying texts. In addition, the new algorithm scores for content (transcription correctness) of the tagged region, a useful distinction when dealing with noisy data that may differ from a reference transcription (e.g., speech recognizer output). To illustrate the algorithm, we have prepared a small test data set consisting of a careful transcription of speech data and manual insertion of SGML named entity annotation. 
We report results for this small test corpus on a variety of experiments involving automatic speech recognition and named entity tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Linguistically annotated training and test corpora are playing an increasingly prominent role in natural language processing research. The Penn TREEBANK and the SUSANNE corpora (Marcus 93, Sampson 95) have provided corpora for part-of-speech taggers and syntactic processing.", "cite_spans": [ { "start": 177, "end": 188, "text": "(Marcus 93,", "ref_id": null }, { "start": 189, "end": 200, "text": "Sampson 95)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction: The Problem", "sec_num": "1." }, { "text": "The Message Understanding Conferences (MUCs) and the Tipster program have provided corpora for newswire data annotated with named entities1 in multiple languages (Merchant 96), as well as for higher level relations extracted from text. The value of these corpora depends critically on the ability to evaluate hypothesized annotations against a gold standard reference or key. To date, scoring algorithms such as the MUC Named Entity scorer (Chinchor 95) have assumed that the documents to be compared differ only in linguistic annotation, not in the underlying text.2 This has precluded applicability to data derived from noisy sources. For example, if we want to compare named entity (NE) processing for a broadcast news source, created via automatic speech recognition and NE tagging, we need to compare it to data created by careful human transcription and manual NE tagging. But the underlying texts--the recognizer output and the gold standard transcription--differ, and the MUC algorithm cannot be used. Example 1 shows the reference transcription from a broadcast news source, and below it, the transcription produced by an automatic speech recognition system. The excerpt also includes reference and hypothesis NE annotation, in the form of SGML tags, where

<P> tags indicate the name of a person, <L> that of a location, and <O> an organization.3 We have developed a new scoring algorithm that supports comparison of linguistically annotated data from noisy sources. The new algorithm generalizes the MUC algorithm, using a comparison based on explicit alignment of the underlying texts. The scoring procedure then maps corresponding tagged regions and compares these according to tag type and tag extent. These correspond to the components currently used by the MUC scoring algorithm. In addition, the new algorithm also compares the content of the tagged region, measuring correctness of the transcription within the region, when working with noisy data (e.g., recognizer output).", "cite_spans": [ { "start": 163, "end": 176, "text": "(Merchant 96)", "ref_id": null }, { "start": 442, "end": 455, "text": "(Chinchor 95)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction: The Problem", "sec_num": "1." }, { "text": "The scoring algorithm proceeds in five stages: 1. Preprocessing to prepare data for alignment 2. Alignment of lexemes in the reference and hypothesis files 3. Named entity mapping to determine corresponding phrases in the reference and hypothesis files 4. Comparison of the mapped entities in terms of tag type, tag extent and tag content 5. Final computation of the score. 1 MUC \"named entities\" include person, organization and location names, as well as numeric expressions. 2 Indeed, the Tipster scoring and annotation algorithms require, as part of the Tipster architecture, that the annotation preserve the underlying text including white space. The MUC named entity scoring algorithm uses character offsets to compare the mark-up of two texts. 3 The SGML used in Tipster evaluations is actually more explicit than that used in this paper, e.g., <ENAMEX TYPE=\"PERSON\"> rather than <P>.
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Procedure", "sec_num": "2." }, { "text": "In the preprocessing phase, this annotation and all punctuation is removed, and all remaining text is converted to upper-case. Each word in the reference text is then assigned an estimated timestamp based on the explicit timestamp of the larger parent segment.5 Given the sequence of all the timestamped words in each file, a coarse segmentation and alignment is performed to assist the lexeme alignment in Stage 2. This is done by identifying sequences of three or more identical words in the reference and hypothesis transcriptions, transforming the long sequence into a set of shorter sequences, each with possible mismatches. Lexeme alignment is then performed on these short sequences.6 4 http://www.ldc.upenn.edu/ 5 It should be possible to provide more accurate word timestamps by using a large-vocabulary recognizer to provide a forced alignment on the clean transcription. 6 The sequence length is dependent on the word-error rate of the recognizer output, but in general the average sequence is 20-30 words long after this coarse segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 1: Preprocessing", "sec_num": "2.1" }, { "text": "A key component of the scoring process is the actual alignment of individual lexemes in the reference and hypothesis documents. This task is similar to the alignment that is used to evaluate word error rates of speech recognizers: we match lexemes in the hypothesis text with their corresponding lexemes in the reference text. The standard alignment algorithm used for word error evaluation is a component of the NIST SCLite scoring package used in the Broadcast News evaluations (Garofolo 97). For each lexeme, it provides four possible classifications of the alignment: correct, substitution, insertion, and deletion. This classification has been successful for evaluating word error. However, it restricts alignment to a one-to-one mapping between hypothesis and reference texts. It is very common for multiple lexemes in one text to correspond to a single lexeme in the other, in addition to multiple-to-multiple correspondences. For example, compare New York and Newark in Example 1.", "cite_spans": [ { "start": 480, "end": 493, "text": "(Garofolo 97)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Stage 2: Lexeme Alignment", "sec_num": "2.2" }, { "text": "Capturing these alignment possibilities is especially important in evaluating NE performance, since the alignment facilitates phrase mapping and comparison of tagged regions. In the current implementation of our scoring algorithm, the alignment is done using a phonetic alignment algorithm (Fisher 93). In direct comparison with the standard alignment algorithm in the SCLite package, we have found that the phonetic algorithm results in more intuitive results. This can be seen clearly in Example 2, which repeats the reference and hypothesis texts of the previous example. The top alignment is that produced by the SCLite algorithm; the bottom by the phonetic algorithm. Since this example contains several instances of potential named entities, it also illustrates the impact of different alignment algorithms (and alignment errors) on phrase mapping and comparison. We will compare the effect of the two algorithms on the NE score in Section 3. Even the phonetic algorithm makes alignment mistakes. 
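To make the alignment classifications above concrete, here is a minimal sketch of a one-to-one, SCLite-style word alignment computed by dynamic programming, labelling each position as correct (C), substitution (S), insertion (I) or deletion (D). The function name, edit costs and labels are illustrative only; the phonetic, many-to-many alignment the scorer actually uses is not reproduced here.

# Minimal sketch of one-to-one word alignment by edit-distance dynamic
# programming (not the scorer's phonetic algorithm; costs are illustrative).
def align_words(ref, hyp, sub_cost=4, ins_cost=3, del_cost=3):
    """Return a list of (ref_word, hyp_word, label) tuples."""
    n, m = len(ref), len(hyp)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0], back[i][0] = i * del_cost, 'D'
    for j in range(1, m + 1):
        cost[0][j], back[0][j] = j * ins_cost, 'I'
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0 if ref[i - 1] == hyp[j - 1] else sub_cost
            choices = [
                (cost[i - 1][j - 1] + match, 'C' if match == 0 else 'S'),
                (cost[i - 1][j] + del_cost, 'D'),
                (cost[i][j - 1] + ins_cost, 'I'),
            ]
            cost[i][j], back[i][j] = min(choices)
    # Trace back from the bottom-right corner to recover the alignment.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        op = back[i][j]
        if op in ('C', 'S'):
            pairs.append((ref[i - 1], hyp[j - 1], op)); i, j = i - 1, j - 1
        elif op == 'D':
            pairs.append((ref[i - 1], None, 'D')); i -= 1
        else:
            pairs.append((None, hyp[j - 1], 'I')); j -= 1
    return list(reversed(pairs))

ref = "AT THE NEW YORK DESK I'M PHILIP BOROFF".split()
hyp = "AT THE NEWARK BASK ON FILM FORUM MISSES".split()
for r, h, label in align_words(ref, hyp):
    print(label, r or '-', h or '-')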
These mistakes can be seen in Example 3, where, as before, SCLite's alignment is shown above that of the phonetic algorithm. Once again, we judge the latter to be a more intuitive alignment--nonetheless, OTTAWA would arguably align better with the three-word sequence LOT OF WHAT. As we shall see, these potential misalignments are taken into account in the algorithm's mapping and comparison phases.", "cite_spans": [ { "start": 290, "end": 301, "text": "(Fisher 93)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Stage 2: Lexeme Alignment", "sec_num": "2.2" }, { "text": "The result of the previous phase is a series of alignments between the words in the reference text and those in a recognizer's hypothesis. In both of these texts there is named-entity (NE) markup. The next phase is to map the reference NEs to the hypothesis NEs. The result of this will be corresponding pairs of reference and hypothesis phrases, which will be compared for correctness in Stage 4. Currently, the scorer uses a simple, greedy mapping algorithm to find corresponding NE pairs. Potential mapped pairs are those that overlap--that is, if some word(s) in a hypothesis NE have been aligned with some word(s) in a reference NE, the reference and hypothesis NEs may be mapped to one another. If more than one potential mapping is possible, this is currently resolved in simple left-to-right fashion: the first potential mapping pair is chosen. A more sophisticated algorithm, such as that used in the MUC scorer, will eventually be used that attempts to optimize the pairings, in order to give the best possible final score. In the general case, there will be reference NEs that do not map to any hypothesis NE, and vice versa. As we shall see below, the unmapped reference NEs are completely missing from the hypothesis, and thus will correspond to recall errors. Similarly, unmapped hypothesis NEs are completely spurious: they will be scored as precision errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 3: Mapping", "sec_num": "2.3" }, { "text": "Once the mapping phase has found pairs of reference-hypothesis NEs, these pairs are compared for correctness. As indicated above, we compare along three independent components: type, extent and content. The first two components correspond to MUC scoring and preserve backward compatibility. Thus our algorithm can be used to generate MUC-style NE scores, given two texts that differ only in annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 4: Comparison", "sec_num": "2.4" }, { "text": "Example 3 (reference): FROM OTTAWA THIS IS", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 4: Comparison", "sec_num": "2.4" }, { "text": "Type is the simplest of the three components: A hypothesis type is correct only if it is the same as the corresponding reference type. Thus, in Example 4, hypothesis 1 has an incorrect type, while hypothesis 2 is correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 4: Comparison", "sec_num": "2.4" }, { "text": "Extent comparison makes further use of the information from the alignment phase. 
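Before the extent rules are spelled out, the greedy mapping just described and the three-component comparison can be sketched roughly as follows. This is a simplified illustration under assumed data structures (NEs as inclusive word-index spans, the alignment as explicit index pairs), not the scorer's implementation; in particular, the tolerance check below omits the condition, described next, that word errors be adjacent to the boundary.

# Simplified sketch of Stage 3 (greedy mapping) and Stage 4 (comparison).
# Data structures and names are illustrative, not the scorer's: an NE is a
# (type, start, end) word-index span over its own text, and `alignment` is a
# list of (ref_index, hyp_index) pairs with None marking insertions/deletions.
from collections import namedtuple

NE = namedtuple('NE', 'type start end')   # inclusive word-index span

def aligned_pairs(alignment):
    # Keep only positions that are aligned in both texts.
    return [(r, h) for r, h in alignment if r is not None and h is not None]

def overlaps(ref_ne, hyp_ne, alignment):
    """True if some aligned word pair falls inside both NEs."""
    return any(ref_ne.start <= r <= ref_ne.end and hyp_ne.start <= h <= hyp_ne.end
               for r, h in aligned_pairs(alignment))

def map_entities(ref_nes, hyp_nes, alignment):
    """Greedy, left-to-right mapping: the first overlapping hypothesis NE wins."""
    pairs, used_ref, used_hyp = [], set(), set()
    for i, ref_ne in enumerate(ref_nes):
        for j, hyp_ne in enumerate(hyp_nes):
            if j not in used_hyp and overlaps(ref_ne, hyp_ne, alignment):
                pairs.append((ref_ne, hyp_ne))
                used_ref.add(i)
                used_hyp.add(j)
                break
    unmapped_ref = [r for i, r in enumerate(ref_nes) if i not in used_ref]    # recall errors
    unmapped_hyp = [h for j, h in enumerate(hyp_nes) if j not in used_hyp]    # precision errors
    return pairs, unmapped_ref, unmapped_hyp

def compare(ref_ne, hyp_ne, alignment, ref_words, hyp_words, tolerance=0):
    """Return (type_ok, extent_ok, content_ok) for one mapped pair."""
    ref_to_hyp = dict(aligned_pairs(alignment))
    type_ok = ref_ne.type == hyp_ne.type
    # Extent: first and last reference words must align with the first and last
    # hypothesis words, within `tolerance` positions (adjacency-to-error
    # condition omitted in this sketch).
    first_ok = abs(ref_to_hyp.get(ref_ne.start, 10**6) - hyp_ne.start) <= tolerance
    last_ok = abs(ref_to_hyp.get(ref_ne.end, 10**6) - hyp_ne.end) <= tolerance
    # Content: no word errors among the aligned pairs lying inside both NEs.
    content_ok = all(ref_words[r] == hyp_words[h]
                     for r, h in aligned_pairs(alignment)
                     if ref_ne.start <= r <= ref_ne.end
                     and hyp_ne.start <= h <= hyp_ne.end)
    return type_ok, first_ok and last_ok, content_ok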
Strict extent comparison requires the first word of the hypothesis NE to align with the first word of the reference NE, and similarly for the last word. Thus, in Example 4, hypotheses 1 and 2 are correct in extent, while hypotheses 3 and 4 are not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 4: Comparison", "sec_num": "2.4" }, { "text": "Note that in hypotheses 2 and 4 the alignment phase has indicated a split between the single reference word GINGRICH and the two hypothesis words GOOD RICH (that is, there is a one- to two-word alignment).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 4: Comparison", "sec_num": "2.4" }, { "text": "In contrast, hypothesis 3 shows the alignment produced by SCLite, which allows only one-to-one alignment. In this case, just as in Example 4, extent is judged to be incorrect, since the final words of the reference and hypothesis NEs do not align. This strict extent comparison can be weakened by adjusting an extent tolerance. This is defined as the degree to which the first and/or last word of the hypothesis need not align exactly with the corresponding word of the reference NE. For example, if the extent tolerance is 1, then hypotheses 3 and 4 would both be correct in the extent component. The main reason for a nonzero tolerance is to allow for possible discrepancies in the lexeme alignment process--thus the tolerance only comes into play if there are word errors adjacent to the boundary in question (either the beginning or end of the NE). Here, because both GOOD and RICH are errors, hypotheses 3, 4 and 6 are given the benefit of the doubt when the extent tolerance is 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 4: Comparison", "sec_num": "2.4" }, { "text": "

NEWT GINGRICH

For hypothesis 5, however, extent is judged to be incorrect, no matter what the extent tolerance is, due to the lack of word errors adjacent to the boundaries of the entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "Content is the score component closest to the standard measures of word error. Using the word alignment information from the earlier phase, a region of intersection between the reference and the hypothesis text is computed, and there must be no word errors in this region. That is, each hypothesis word must align with exactly one reference word, and the two must be identical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "The intuition behind using the intersection or overlap region is that otherwise extent errors would be penalized twice. Thus in hypothesis 6, even though NEWT is in the reference NE, the substitution error (NEW) does not count with respect to content comparison, because only the region containing GINGRICH is examined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "Note that the extent tolerance described above is not used to determine the region of intersection. Table 1 shows the score results for each of these score components on all six of the hypotheses in Example 4. The extent component is shown for two different thresholds, 0 and 1 (the latter being the default setting in our implementation).", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 107, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Ref:", "sec_num": null }, { "text": "After the mapped pairs are compared along all three components, a final score is computed. We use precision and recall, in order to distinguish between errors of commission (spurious responses) and those of omission (missing responses). For a particular pair of reference and hypothesis NEs compared in the previous phase, each component that is incorrect is a substitution error, counting against both recall and precision, because a required reference element was missing, and a spurious hypothesis element was present.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 5: Final Computation", "sec_num": "2.5" }, { "text": "Each of the reference NEs that was not mapped to a hypothesis NE in the mapping phase also contributes errors: one recall error for each score component missing from the hypothesis text. Similarly, an unmapped hypothesis NE is completely spurious, and thus contributes three precision errors: one for each of the score components. Finally, we combine the precision and recall scores into a balanced F-measure. This is a combination of precision and recall, such that F = 2PR / (P + R). F-measure is a single metric, a convenient way to compare systems or texts along one dimension.7", "cite_spans": [], "ref_spans": [ { "start": 582, "end": 589, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Stage 5: Final Computation", "sec_num": "2.5" }, { "text": "To validate our scoring algorithm, we developed a small test set consisting of the Broadcast News development test for the 1996 HUB4 evaluation (Garofolo 97).", "cite_spans": [ { "start": 144, "end": 157, "text": "(Garofolo 97)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "3." },
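Before turning to the experiments, the final computation just described can be sketched as follows. This is a minimal illustration under assumed inputs (per-pair correctness flags for the three components, plus counts of unmapped NEs), not the scorer itself.

# Minimal sketch of Stage 5: turn per-pair component judgements and unmapped
# NE counts into recall, precision and F. Names are illustrative.
def score(compared_pairs, num_unmapped_ref, num_unmapped_hyp, components=3):
    """compared_pairs: list of (type_ok, extent_ok, content_ok) booleans."""
    # Each mapped pair contributes one slot per component to both denominators;
    # unmapped reference NEs add recall slots, unmapped hypothesis NEs add
    # precision slots (three errors each, one per component).
    ref_slots = components * (len(compared_pairs) + num_unmapped_ref)
    hyp_slots = components * (len(compared_pairs) + num_unmapped_hyp)
    correct = sum(flag for pair in compared_pairs for flag in pair)
    recall = correct / ref_slots if ref_slots else 0.0
    precision = correct / hyp_slots if hyp_slots else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# One mapped pair, correct in type and extent but not content, plus one
# reference NE that was never mapped:
print(score([(True, True, False)], num_unmapped_ref=1, num_unmapped_hyp=0))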
{ "text": "The reference transcription (179,000 words) was manually annotated with NE information (6150 entities).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "3." }, { "text": "We then performed a number of scoring experiments on two sets of transcription/NE hypotheses generated automatically from the same speech data. The first data set that we scored was the result of a commonly available speech recognition system, which was then automatically tagged for NE by our system Alembic (Aberdeen 95). The second set of data that was scored was made available to us by BBN, and was the result of the BYBLOS speech recognizer and IdentiFinder(TM) NE extractor (Bikel 97, Kubala 97, 98). In both cases, the NE taggers were run on the reference transcription as well as the corresponding recognizer's output. These data were scored using the original MUC scorer as well as our own scorer run in two modes: the three-component mode described above, with an extent threshold of 1, and a \"MUC mode\", intended to be backward-compatible with the MUC scorer.8 We show the results in Table 2. First, we note that when the underlying texts are identical (columns A and I), our new scoring algorithm in MUC mode produces the same result as the MUC scorer. In normal mode, the scores for the reference text are, of course, higher, because there are no content errors. Not surprisingly, we note lower NE performance on recognizer output. 8 Our scorer is configurable in a variety of ways. In particular, the extent and content components can be combined into a single component, which is judged to be correct only if the individual extent and content are correct. In this mode, and with the extent threshold described above set to zero, the scorer effectively replicates the MUC algorithm.", "cite_spans": [ { "start": 305, "end": 318, "text": "(Aberdeen 95)", "ref_id": null }, { "start": 476, "end": 486, "text": "(Bikel 97,", "ref_id": null }, { "start": 487, "end": 497, "text": "Kubala 97,", "ref_id": null }, { "start": 498, "end": 501, "text": "98)", "ref_id": null } ], "ref_spans": [ { "start": 892, "end": 899, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "3." }, { "text": "Interestingly, for both the Alembic system (S+A) and the BBN system (B+I), the degradation is less than we might expect: given the recognizer word error rates shown, one might predict that the NE performance on recognizer output would be no better than the NE performance on the reference text times the word recognition rate. One might thus expect scores around 0.31 (i.e., 0.65 × 0.47) for the Alembic system and 0.68 (i.e., 0.85 × 0.80) for the BBN system. However, NE performance is well above these levels for both systems, in both scoring modes. We also wished to determine how sensitive the NE score was to the alignment phase. To explore this, we compared the SCLite and phonetic alignment algorithms, run on the S+A data, with increasing levels of extent tolerance, as shown in Table 3. As we expected, the NE scores converged as the extent tolerance was relaxed. 
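The back-of-the-envelope estimate above can be reproduced in a few lines, using the reference-text NE scores and word recognition rates quoted in this section (the observed Table 2 scores themselves are not reproduced here):

# Naive prediction: NE performance on recognizer output should be no better
# than the reference-text NE score times the word recognition rate.
# The figures below are the ones quoted in the text for the two systems.
systems = {'S+A (Alembic)': (0.65, 0.47), 'B+I (BBN)': (0.85, 0.80)}
for name, (ref_ne_score, word_recognition_rate) in systems.items():
    print(name, 'predicted ceiling:', round(ref_ne_score * word_recognition_rate, 2))
# Both systems actually score well above these predicted ceilings.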
This convergence suggests that in the case where a phonetic alignment algorithm is unavailable (as is currently the case for languages other than English), robust scoring results might still be achieved by relaxing the extent tolerance.", "cite_spans": [], "ref_spans": [ { "start": 462, "end": 469, "text": "Table 2", "ref_id": null }, { "start": 1185, "end": 1192, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experiments and Results", "sec_num": "3." }, { "text": "We have generalized the MUC text-based named entity scoring procedure to handle non-identical underlying texts. Our algorithm can also be used to score other kinds of non-embedded SGML mark-up, e.g., part-of-speech, word segmentation or noun- and verb-group. Despite its generality, the algorithm is backward-compatible with the original MUC algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4." }, { "text": "The distinction made by the algorithm between extent and content allows speech understanding systems to achieve a partial score on the basis of identifying a region as containing a name, even if the recognizer is unable to correctly identify the content words. Encouraging this sort of partial correctness is important because it allows for applications that might, for example, index radio or video broadcasts using named entities, allowing a user to replay a particular region in order to listen to the corresponding content. This flexibility also makes it possible to explore information sources such as prosodics for identifying regions of interest even when it may be difficult to achieve a completely correct transcript, e.g., due to novel words.", "cite_spans": [], "ref_spans": [ { "start": 670, "end": 677, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "4." } ], "back_matter": [ { "text": "Our thanks go to BBN/GTE for providing comparative data for the experiments discussed in Section 3, as well as fruitful discussion of the issues involved in speech understanding metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "MITRE: Description of the Alembic System as Used for MUC-6", "authors": [ { "first": "J", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "J", "middle": [], "last": "Burger", "suffix": "" }, { "first": "D", "middle": [], "last": "Day", "suffix": "" }, { "first": "L", "middle": [], "last": "Hirschman", "suffix": "" }, { "first": "P", "middle": [], "last": "Robinson", "suffix": "" }, { "first": "M", "middle": [], "last": "Vilain", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Sixth Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Aberdeen, J. Burger, D. Day, L. Hirschman, P. Robinson, M. Vilain (1995). 
\"MITRE: Description of the Alembic System as Used for MUC-6\", in Proceed- ings of the Sixth Message Understanding Conference.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "NYMBLE: A High-Performance Learning Name-finder", "authors": [ { "first": "D", "middle": [], "last": "Bikel", "suffix": "" }, { "first": "S", "middle": [], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Bikel, S. Miller, R. Schwartz, R. Weischedel (1997). \"NYMBLE: A High-Performance Learning Name-finder\", in Proceedings of the Fifth Conference on Applied Natural Language Processing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "MUC-5 Evaluation Metrics", "authors": [ { "first": "N", "middle": [], "last": "Chinchor", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Fifth Message Understanding Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Chinchor (1995). \"MUC-5 Evaluation Metrics\", in Proceedings of the Fifth Message Understanding Confer- ence.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Better Alignment Procedures for Speech Recognition Evaluation", "authors": [ { "first": "W", "middle": [ "M" ], "last": "Fisher", "suffix": "" }, { "first": "J", "middle": [ "G" ], "last": "Fiscus", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.M. Fisher, J.G. Fiscus (1993). \"Better Alignment Procedures for Speech Recognition Evaluation\".", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Design and Preparation of the 1996 Hub-4 Broadcast News Benchmark Test Corpora", "authors": [ { "first": "J", "middle": [], "last": "Garofolo", "suffix": "" }, { "first": "J", "middle": [], "last": "Fiscus", "suffix": "" }, { "first": "W", "middle": [], "last": "Fisher", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 1997 DARPA Speech Recognition Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Garofolo, J. Fiscus, W. Fisher (1997) \"Design and Preparation of the 1996 Hub-4 Broadcast News Bench- mark Test Corpora\", in Proceedings of the 1997 DARPA Speech Recognition Workshop.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The 1996 BBN Byblos Hub-4 Transcription System", "authors": [ { "first": "F", "middle": [], "last": "Kubala", "suffix": "" }, { "first": "H", "middle": [], "last": "Jin", "suffix": "" }, { "first": "S", "middle": [], "last": "Matsoukas", "suffix": "" }, { "first": "L", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 1997 DARPA Speech Recognition Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Kubala, H. Jin, S. Matsoukas, L. Nguyen, R. Schwartz, J. 
Makhoul (1997) \"The 1996 BBN Byblos Hub-4 Transcription System\", in Proceedings of the 1997 DARPA Speech Recognition Workshop.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Named Entity Extraction from Speech", "authors": [ { "first": "F", "middle": [], "last": "Kubala", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "R", "middle": [], "last": "Stone", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Broadcast News Transcription and Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Kubala, R. Schwartz, R. Stone, R. Weischedel (1998) \"Named Entity Extraction from Speech\", in Proceedings of the Broadcast News Transcription and Understanding Workshop.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Performance Measures for Information Extraction", "authors": [ { "first": "J", "middle": [], "last": "Makhoul", "suffix": "" }, { "first": "F", "middle": [], "last": "Kubala", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Makhoul, F. Kubala, R. Schwartz (1998) \"Performance Measures for Information Extraction\", unpublished manuscript, BBN Technologies, GTE Internetworking.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Building a large annotated corpus of English: the Penn Treebank", "authors": [ { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "S", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Marcus, S. Santorini, M. Marcinkiewicz (1993) \"Building a large annotated corpus of English: the Penn Treebank\", Computational Linguistics, 19(2).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Multilingual Entity Task (MET) Overview", "authors": [ { "first": "R", "middle": [], "last": "Merchant", "suffix": "" }, { "first": "M", "middle": [], "last": "Okurowski", "suffix": "" } ], "year": 1996, "venue": "Proceedings of TIPSTER Text Program (Phase II)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Merchant, M. Okurowski (1996) \"The Multilingual Entity Task (MET) Overview\", in Proceedings of TIPSTER Text Program (Phase II).", "links": null } }, "ref_entries": { "FIGREF0": { "text": "ref: AT THE NEW YORK DESK I'M PHILIP BOROFF MISSISSIPPI REPUBLICAN hyp: AT THE NEWARK BASK ON FILM FORUM MISSES THE REPUBLICAN Example 2: SCLite alignment (top) vs. phonetic alignment (bottom)", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "7 Because F-measure combines recall and precision, it effectively counts substitution errors twice. Makhoul et al. (1998) have proposed an alternate slot error metric that counts substitution errors only once.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "content": "
ref: AT THE <O>NEW YORK DESK</O> I'M <P>PHILIP BOROFF</P> <L>MISSISSIPPI</L> REPUBLICAN
hyp: AT THE <L>NEWARK</L> BASK ON FILM FORUM MISSES THE REPUBLICAN
Example 1

The algorithm takes three files as input: the human-transcribed reference file with key NE phrases, the speech recognizer output, which includes coarse-grained timestamps used in the alignment process, and the recognizer output tagged with NE mark-up.
The first phase of the scoring algorithm involves reformatting these input files to allow direct comparison of the raw text. This is necessary because the transcript file and the output of the speech recognizer may contain information in addition to the lexemes. For example, for the Broadcast News corpus provided by the Linguistic Data Consortium,4 the transcript file contains, in addition to mixed-case text representing the words spoken, extensive SGML and pseudo-SGML annotation including segment timestamps, speaker identification, background noise and music conditions, and comments.
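A minimal sketch of the normalization this preprocessing performs on both files, stripping the annotation and punctuation and upper-casing what remains. The regular expressions and function name are illustrative, not the scorer's:

import re

def normalize(raw_text):
    """Reduce a transcript or recognizer output to bare, upper-case lexemes."""
    text = re.sub(r'<[^>]*>', ' ', raw_text)    # drop SGML/pseudo-SGML tags
    text = re.sub(r"[^\w\s']", ' ', text)       # drop punctuation, keep apostrophes
    return text.upper().split()

print(normalize("At the <L>Newark</L> bask, on Film Forum..."))
# ['AT', 'THE', 'NEWARK', 'BASK', 'ON', 'FILM', 'FORUM']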
", "num": null, "text": "AT THE NEW YORK DESK I'M PHILIP BOROFF", "type_str": "table", "html": null } } } }