{ "paper_id": "X93-1016", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:05:52.469058Z" }, "title": "TIPSTER/MUC-5 INFORMATION EXTRACTION SYSTEM EVALUATION", "authors": [ { "first": "Beth", "middle": [ "M" ], "last": "Sundheim", "suffix": "", "affiliation": { "laboratory": "", "institution": "Naval Command", "location": { "postCode": "44208, 92152-7420", "settlement": "Control, Code, San Diego", "region": "CA" } }, "email": "sundheim@nose.rail" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "X93-1016", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Three information extraction system evaluations using Tipster data were conducted in the context of Phase 1 of the Tipster Text program. Interim evaluations were conducted in September, 1992, and February, 1993 ; the final evaluation was conducted in July, 1993. The final evaluation included not only the Tipster-supported inform~on extraction contractors but thirteen other participants as well. This evaluation was the topic of the Fifth Message Understanding Conference (MUC-5) in August, 1993. With particular respect to the research and development tasks of the Tipster contractors, the goal of these evaluations has been to assess success in terms of the development of systems to work in both English and Japanese (BBN, GE/CMU, and NMSU/Brandeis) and/or in both the joint ventures and microelectronics domains (BBN, GE/CMU, NMSU/Brandeis, and UMass/Hughes).", "cite_spans": [ { "start": 175, "end": 195, "text": "September, 1992, and", "ref_id": null }, { "start": 196, "end": 210, "text": "February, 1993", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The methodology associated with these evaluations has been under development since 1987, when the series of Message Understanding Conferences began. The evaluations have pushed technology to handle the recurring language problems found in sizeable samples of naturallyoccuring text. Designing the evaluations around an information extraction application of text processing technology has made it possible to discuss NLP techniques at a practical level and to gain insight into the capabilities of complex systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "However, any such evaluation testbed application will undoubtedly differ in important respects from a real-life application. Thus, there is only an indirect connection between the evaluation results for a system and the suitability of applying the system to performance of a task in an operational setting. A fairly large number of metrics have been defined that respond to the variety of subtasks inherent in information extraction and the varying perspectives of evaluation consumers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The evaluations measure coverage, accuracy, and classes of error on each language-domain pair, independently of all other language-domain pairs that the system may be tested on. With its dual language and domain requirements and challenging task definition, Tipster Phase 1 pushed especially hard on issues such as portability tools, languageand domain-independent architectures and algorithms, and system efficiency. 
These aspects of software were not directly evaluated, although information concerning some or all of them may be found in the papers prepared by the evaluation participants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The Tipster contractors were allowed access to the training corpus (articles and hand-coded templates for a given language-domain pair) and associated materials (documentation, software resources, lexical resources) as they were being prepared over the course of Phase 1. The articles and corresponding hand-coded templates from the test corpus were held in reserve for use as blind-test materials during evaluation periods; new test sets were used for each evaluation. A description of the training and test corpora is contained in [1] . Those MUC-5 evaluation participants who were not Tipster contractors were allowed access to training materials in March, 1993, when major updates resulting from decisions made at the Tipster interim evaluation in February had been completed and permission for MUC-5 participants to use most of the copyrighted articles had been obtained. Table 1 identifies the MUC-5 evaluation participants and the language-domain pairs on which their systems were evaluated.", "cite_spans": [ { "start": 533, "end": 536, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 877, "end": 884, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "THE EVALUATION PROCESS", "sec_num": null }, { "text": "The evaluation participants (Tipster and non-Tipster) were also provided with evaluation software, prepared via NRaD contract to SAIC, to help them monitor the performance benefits of alternative software solutions they were exploring in their research [9]. The evaluation software, corpora, documentation, and miscellaneous other resources were distributed primarily through electronic mail and electronic file transfer. Virtually every item was updated numerous times, and updates continued on some of them right up to the start of final testing. Personnel at the Consortium for Lexical Research (New Mexico State University) and the Institute for Defense Analyses played critical roles in making these materials available for electronic transfer. At the start of the test week for each evaluation, the participants were supplied electronically with encoded test sets of articles, which they were to decode only when they were ready to begin testing. Testing was conducted by the participants at their own sites in accordance with a strict test protocol. After their systems processed the texts and produced the extracted information in the expected template format, the participants Table 2. Tipster Phase 1 extraction system evaluations. 1. 'Core' refers to a core set of JV template slots: the