{ "paper_id": "P98-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:17:24.615939Z" }, "title": "Named Entity Scoring for Speech Input", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Burger", "suffix": "", "affiliation": { "laboratory": "", "institution": "The MITRE Corporation", "location": { "addrLine": "202 Burlington Road Bedford", "postCode": "01730", "region": "MA", "country": "USA" } }, "email": "" }, { "first": "David", "middle": [], "last": "Palmer", "suffix": "", "affiliation": { "laboratory": "", "institution": "The MITRE Corporation", "location": { "addrLine": "202 Burlington Road Bedford", "postCode": "01730", "region": "MA", "country": "USA" } }, "email": "palmer@mitre.org" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "", "affiliation": { "laboratory": "", "institution": "The MITRE Corporation", "location": { "addrLine": "202 Burlington Road Bedford", "postCode": "01730", "region": "MA", "country": "USA" } }, "email": "lynette@mitre.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes a new scoring algorithm that supports comparison of linguistically annotated data from noisy sources. The new algorithm generalizes the Message Understanding Conference (MUC) Named Entity scoring algorithm, using a comparison based on explicit alignment of the underlying texts, followed by a scoring phase. The scoring procedure maps corresponding tagged regions and compares these according to tag type and tag extent, allowing us to reproduce the MUC Named Entity scoring for identical underlying texts. In addition, the new algorithm scores for content (transcription correctness) of the tagged region, a useful distinction when dealing with noisy data that may differ from a reference transcription (e.g., speech recognizer output). To illustrate the algorithm, we have prepared a small test data set consisting of a careful transcription of speech data and manual insertion of SGML named entity annotation. We report results for this small test corpus on a variety of experiments involving automatic speech recognition and named entity tagging.", "pdf_parse": { "paper_id": "P98-1031", "_pdf_hash": "", "abstract": [ { "text": "This paper describes a new scoring algorithm that supports comparison of linguistically annotated data from noisy sources. The new algorithm generalizes the Message Understanding Conference (MUC) Named Entity scoring algorithm, using a comparison based on explicit alignment of the underlying texts, followed by a scoring phase. The scoring procedure maps corresponding tagged regions and compares these according to tag type and tag extent, allowing us to reproduce the MUC Named Entity scoring for identical underlying texts. In addition, the new algorithm scores for content (transcription correctness) of the tagged region, a useful distinction when dealing with noisy data that may differ from a reference transcription (e.g., speech recognizer output). To illustrate the algorithm, we have prepared a small test data set consisting of a careful transcription of speech data and manual insertion of SGML named entity annotation. 
We report results for this small test corpus on a variety of experiments involving automatic speech recognition and named entity tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Linguistically annotated training and test corpora are playing an increasingly prominent role in natural language processing research. The Penn TREEBANK and the SUSANNE corpora (Marcus 93, Sampson 95) have provided corpora for part-of-speech taggers and syntactic processing.", "cite_spans": [ { "start": 177, "end": 188, "text": "(Marcus 93,", "ref_id": null }, { "start": 189, "end": 200, "text": "Sampson 95)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction: The Problem", "sec_num": "1." }, { "text": "The Message Understanding Conferences (MUCs) and the Tipster program have provided corpora for newswire data annotated with named entities 1 in multiple languages (Merchant 96), as well as for higher level relations extracted from text. The value of these corpora depends critically on the ability to evaluate hypothesized annotations against a gold standard reference or key. To date, scoring algorithms such as the MUC Named Entity scorer (Chinchor 95) have assumed that the documents to be compared differ only in linguistic annotation, not in the underlying text. 2 This has precluded applicability to data derived from noisy sources. For example, if we want to compare named entity (NE) processing for a broadcast news source, created via automatic speech recognition and NE tagging, we need to compare it to data created by careful human transcription and manual NE tagging. But the underlying texts--the recognizer output and the gold standard transcription--differ, and the MUC algorithm cannot be used. Example 1 shows the reference transcription from a broadcast news source, and below it, the transcription produced by an automatic speech recognition system. The excerpt also includes reference and hypothesis NE annotation, in the form of SGML tags, where
<P> tags indicate the name of a person, and <L> tags the name of a location.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Procedure",
"sec_num": "2."
},
{
"text": "A key component of the scoring process is the actual alignment of individual lexemes in the reference and hypothesis documents. This task is similar to the alignment that is used to evaluate word error rates of speech recognizers: we match lexemes in the hypothesis text with their corresponding lexemes in the reference text. The standard alignment algorithm used for word error evaluation is a component of the NIST SCLite scoring package used in the Broadcast News evaluations (Garofolo 97) . For each lexeme, it provides four possible classifications of the alignment: correct, substitution, insertion, and deletion. This classification has been successful for evaluating word error. However, it restricts alignment to a one-to-one mapping between hypothesis and reference texts. It is very common for multiple lexemes in one text to correspond to a single lexeme in the other, in addition to multiple-to-multiple correspon- annotation and all punctuation is removed, and all remaining text is converted to upper-case. Each word in the reference text is then assigned an estimated timestamp based on the explicit timestamp of the larger parent segmentJ Given the sequence of all the timestamped words in each file, a coarse segmentation and alignment is performed to assist the lexeme alignment in Stage 2. This is done by identifying sequences of three or more identical words in the reference and hypothesis transcriptions, transforming the long sequence into a set of shorter sequences, each with possible mismatches. Lexeme alignment is then performed on these short sequences .6 4http://www.ldc.upenn.edu/ 5It should be possible to provide more accurate word timestamps by using a large-vocabulary recognizer to provide a forced alignment on the clean transcription. 6The sequence length is dependent on the word-eror rate of the recognizer ouput, but in general the average sequence is 20-30 words long after this coarse segmentation.",
"cite_spans": [
{
"start": 480,
"end": 493,
"text": "(Garofolo 97)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 2: Lexeme Alignment",
"sec_num": "2.2"
},
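{
"text": "As a concrete illustration of this classification, the following is a minimal sketch of a one-to-one lexeme alignment computed by standard edit-distance dynamic programming, labeling each position as correct, substitution, insertion, or deletion. It is a hypothetical stand-in for the SCLite component described above, not its actual implementation; the function name and word lists are illustrative only.

def align(ref, hyp):
    # cost[i][j] = cheapest alignment of the first i reference words
    # with the first j hypothesis words (unit costs throughout).
    n, m = len(ref), len(hyp)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i
    for j in range(1, m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = cost[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1)
            cost[i][j] = min(diag, cost[i - 1][j] + 1, cost[i][j - 1] + 1)
    # Trace back to recover one label per aligned position.
    labels, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1):
            labels.append('correct' if ref[i - 1] == hyp[j - 1] else 'substitution')
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            labels.append('deletion')    # reference word left unmatched
            i -= 1
        else:
            labels.append('insertion')   # hypothesis word left unmatched
            j -= 1
    return list(reversed(labels))

print(align('AT THE NEW YORK DESK'.split(), 'AT THE NEWARK BASK'.split()))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 2: Lexeme Alignment",
"sec_num": "2.2"
},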
{
"text": "dences. For example, compare New York and Newark in Example 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 2: Lexeme Alignment",
"sec_num": "2.2"
},
{
"text": "Capturing these alignment possibilities is especially important in evaluating NE performance, since the alignment facilitates phrase mapping and comparison of tagged regions. In the current implementation of our scoring algorithm, the alignment is done using a phonetic alignment algorithm (Fisher 93) . In direct comparison with the standard alignment algorithm in the SCLite package, we have found that the phonetic algorithm results in more intuitive results. This can be seen clearly in Example 2, which repeats the reference and hypothesis texts of the previous example. The top alignment is that produced by the SCLite algorithm; the bottom by the phonetic algorithm. Since this example contains several instances of potential named entities, it also illustrates the impact of different alignment algorithms (and alignment errors) on phrase mapping and comparison. We will compare the effect of the two algorithms on the NE score in Section 3. Even the phonetic algorithm makes alignment mistakes. This can be seen in Example 3, where, as before, SCLite's alignment is shown above that of the phonetic algorithm. Once again, we judge the latter to be a more intuituive alignment--nonetheless, OTTAWA would arguably align better with the three word sequence LOT OF WHAT. As we shall see, these potential misalignments are taken into account in the algorithm's mapping and comparison phases.",
"cite_spans": [
{
"start": 290,
"end": 301,
"text": "(Fisher 93)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 2: Lexeme Alignment",
"sec_num": "2.2"
},
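{
"text": "We do not reproduce the (Fisher 93) phonetic algorithm here, but the idea of grading substitution cost by pronunciation similarity can be sketched as below, using character-level string similarity as a crude, hypothetical stand-in for a phonetic distance. Plugging such a graded cost into a dynamic-programming aligner like the sketch above is what lets near-homophones such as NEW YORK and NEWARK align more readily than under a 0/1 word-match cost.

from difflib import SequenceMatcher

def sub_cost(ref_word, hyp_word):
    # 0.0 for identical strings, approaching 1.0 for dissimilar ones.
    # A real phonetic aligner would compare phone sequences instead.
    return 1.0 - SequenceMatcher(None, ref_word, hyp_word).ratio()

print(sub_cost('NEWARK', 'NEW'), sub_cost('NEWARK', 'YORK'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 2: Lexeme Alignment",
"sec_num": "2.2"
},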
{
"text": "The result of the previous phase is a series of alignments between the words in the reference text and those in a recognizer's hypothesis. In both of these texts there is named-entity (NE) markup. The next phase is to map the reference NEs to the hypothesis NEs. The result of this will be corresponding pairs of reference and hypothesis phrases, which will be compared for correctness in Stage 4. Currently, the scorer uses a simple, greedy mapping algorithm to find corresponding NE pairs. Potential mapped pmrs are those that overlap--that is, if some word(s) in a hypothesis NE have been aligned with some word(s) in a reference NE, the reference and hypothesis NEs may be mapped to one another. If more than one potential mapping is possible, this is currently resolved in simple left-to-right fashion: the first potential mapping pair is chosen. A more sophisticated algorithm, such as that used in the MUC scorer, will eventually be used that attempts to optimize the pairings, in order to give the best possible final score. In the general case, there will be reference NEs that do not map to any hypothesis NE, and vice versa. As we shall see below, the unmapped reference NEs are completely missing from the hypothesis, and thus will correspond to recall errors. Similarly, unmapped hypothesis NEs are completely spurious: they precision errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 3: Mapping",
"sec_num": "2.3"
},
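{
"text": "A minimal sketch of this greedy, left-to-right mapping is given below. It assumes each NE carries the set of alignment positions covered by its words; the field names and data structures are illustrative, not the scorer's actual representation. Unmapped reference NEs surface as recall errors and unmapped hypothesis NEs as precision errors, as described above.

def map_entities(ref_nes, hyp_nes):
    pairs, used_ref, used_hyp = [], set(), set()
    for i, r in enumerate(ref_nes):            # left-to-right over reference NEs
        for k, h in enumerate(hyp_nes):
            if k not in used_hyp and r['positions'] & h['positions']:
                pairs.append((r, h))           # first potential mapping wins
                used_ref.add(i)
                used_hyp.add(k)
                break
    unmapped_ref = [r for i, r in enumerate(ref_nes) if i not in used_ref]   # recall errors
    unmapped_hyp = [h for k, h in enumerate(hyp_nes) if k not in used_hyp]   # precision errors
    return pairs, unmapped_ref, unmapped_hyp

ref_nes = [{'type': 'P', 'positions': {6, 7}}, {'type': 'L', 'positions': {2, 3}}]
hyp_nes = [{'type': 'L', 'positions': {2}}]
print(map_entities(ref_nes, hyp_nes))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 3: Mapping",
"sec_num": "2.3"
},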
{
"text": "Once the mapping phase reference-hypothesis NEs, compared for correctness. will be scored as has found pairs of these pa~rs are As indicated above, we compare along three independent components: type, extent and content. The first two components correspond to MUC scoring and preserve backward compatibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},
{
"text": "Thus our algorithm can be used to generate MUC-style NE scores, given two texts that differ only in annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},
{
"text": "Type is the simplest of the three components: A hypothesis type is correct only if it is the same as the corresponding reference typer. Thus, in Example 4, hypothesis 1 has an incorrect type, while hypothesis 2 is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},
{
"text": "Extent comparison makes further use of the information from the alignment phase. Strict extent comparison requires the first word of the hypothesis NE to align with the first word of the reference NE, and similarly for the last word. Thus, in Example 4, hypotheses 1 and 2 are correct in extent, while hypotheses 3 and 4 are not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},
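{
"text": "The type and strict-extent tests on a mapped pair can be pictured as in the sketch below. Each NE is assumed to record its type and the reference-side indices that its first and last words align to; these field names are illustrative, not the scorer's actual representation.

def type_correct(ref_ne, hyp_ne):
    return hyp_ne['type'] == ref_ne['type']

def strict_extent_correct(ref_ne, hyp_ne):
    # The first word of the hypothesis NE must align with the first word
    # of the reference NE, and likewise for the last word.
    return (hyp_ne['first_align'] == ref_ne['first'] and
            hyp_ne['last_align'] == ref_ne['last'])

ref_ne = {'type': 'P', 'first': 10, 'last': 11}
hyp_ne = {'type': 'P', 'first_align': 10, 'last_align': 11}
print(type_correct(ref_ne, hyp_ne), strict_extent_correct(ref_ne, hyp_ne))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},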
{
"text": "Note that in hypotheses 2 and 4 the alignment phase has indicated a split between the single reference word GINGRICH and the two hypothesis words GOOD RICH (that is, there is a one-to two-word alignment).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},
{
"text": "In contrast, hypothesis 3 shows the alignment produced by SCLite, which allows only one-to-one alignment. In this case, just as in Example 4, extent is judged to be incorrect, since the final words of the reference and hypothesis NEs do not align. This strict extent comparison can be weakened by adjusting an extent tolerance. This is defined as the degree to which the first and/or last word of the hypothesis need not align exactly with the corresponding word of the reference NE. For example, if the extent tolerance is 1, then hypotheses 3 and 4 would both be correct in the extent component. The main reason for a nonzero tolerance is to allow for possible discrepancies in the lexeme alignment process-thus the tolerance only comes into play if there are word errors adjacent to the boundary in question (either the beginning or end of the NE). Here, because both GOOD and RICH are errors, hypotheses 3, 4 and 6 are given the benefit of the doubt when the extent tolerance is 1. For",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},
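{
"text": "A sketch of the relaxed extent test, under the same illustrative representation as above: the slack is only granted at a boundary that has adjacent word errors, mirroring the description above.

def extent_correct(ref_ne, hyp_ne, tolerance=0):
    start_slack = tolerance if hyp_ne['errors_at_start'] else 0
    end_slack = tolerance if hyp_ne['errors_at_end'] else 0
    return (abs(hyp_ne['first_align'] - ref_ne['first']) <= start_slack and
            abs(hyp_ne['last_align'] - ref_ne['last']) <= end_slack)

# In the spirit of hypotheses 3 and 4 above: the hypothesis NE boundary is
# off by one aligned position, and the boundary words (GOOD, RICH) are
# recognition errors, so a tolerance of 1 forgives the mismatch.
ref_ne = {'first': 10, 'last': 11}
hyp_ne = {'first_align': 10, 'last_align': 12,
          'errors_at_start': False, 'errors_at_end': True}
print(extent_correct(ref_ne, hyp_ne, tolerance=0), extent_correct(ref_ne, hyp_ne, tolerance=1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stage 4: Comparison",
"sec_num": "2.4"
},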
{
"text": " NEWT \"GINGRiCH \" PHILIPBOROFF
",
"num": null,
"text": "ATTHE <L> MISSISSIPPI </L> REPUBLICAN hyp:ATTHE <L> NEWARK </L> BASK ON FILM FORUM MISSES THE REPUBLICAN Example 1: The algorithm takes three files as input: the human-transcribed reference file with key NE phrases, the speech recognizer output, which includes coarse-grained timestamps used in the alignment process, and the recogizer output tagged with NE mark-up. The first phase of the scoring algorithm involves reformatting these input files to allow direct comparison of the raw text. This is necessary be- cause the transcript file and the output of the speech recognizer may contain information in addition to the lexemes. For example, for the Broadcast News corpus provided by the Lin- guistic Data Consortium, 4 the transcript file contains, in addition to mixed-case text rep- resenting the words spoken, extensive SGML and pseudo-SGML annotation including seg- ment timestamps, speaker identification, back- ground noise and music conditions, and comments. In the preprocessing phase, this ref: AT THE NEW YORK DESK I'M PHILIP BOROFF hyp: AT THE NEWARK BASK ON FILM FORUM MISSES