{ "paper_id": "X93-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:05:52.768022Z" }, "title": "DICTIONARY CONSTRUCTION BY DOMAIN EXPERTS", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "postCode": "01003", "settlement": "Amherst", "region": "MA" } }, "email": "" }, { "first": "Wendy", "middle": [ "G" ], "last": "Lehnert", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "postCode": "01003", "settlement": "Amherst", "region": "MA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experls have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios.", "pdf_parse": { "paper_id": "X93-1023", "_pdf_hash": "", "abstract": [ { "text": "Sites participating in the recent message understanding conferences have increasingly focused their research on developing methods for automated knowledge acquisition and tools for human-assisted knowledge engineering. However, it is important to remember that the ultimate users of these tools will be domain experts, not natural language processing researchers. Domain experls have extensive knowledge about the task and the domain, but will have little or no background in linguistics or text processing. Tools that assume familiarity with computational linguistics will be of limited use in practical development scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "To investigate practical dictionary construction, we conducted an experiment with government analysts. We wanted to demonstrate that domain experts with no background in text processing could successfully use the AutoSlog dictionary construction tool [Riloff 1993] . We compared the dictionaries constructed by the government analysts with a dictionary constructed by a UMass researcher. The results of the experiment suggest that domain experts can successfully use AutoSlog with only minimal training and achieve performance levels comparable to NLP researchers. AutoSlog is a system that automatically constructs a dictionary for information extraction tasks. Given a training corpus, AutoSlog proposes domain-specific concept node definitions that CIRCUS [Lehnert 1991] uses to extract information from text. However, many of the definitions proposed by AutoSlog should not be retained in the permanent dictionary because they are useless or too risky. We therefore rely on a human-inthe-loop to manually skim the definitions proposed by AutoSlog and separate the good ones from the bad ones. Figure 1 shows a snapshot of the AutoSlog interface used to review potential dictionary entries.", "cite_spans": [ { "start": 251, "end": 264, "text": "[Riloff 1993]", "ref_id": null }, { "start": 759, "end": 773, "text": "[Lehnert 1991]", "ref_id": null } ], "ref_spans": [ { "start": 1097, "end": 1105, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Two government analysts agreed to be the subjects of our experiment. Both analysts had generated templates for the joint ventures domain, so they were experts with the EJV domain and the template-filling task. Neither analyst had any background in linguistics or text processing and had no previous experience with our system. Before they began using the AutoSlog interface, we gave them a 1.5 hour tutorial to explain how AutoSlog works and how to use the interface. The tutorial included some examples to highlight important issues and general decision-making advice. Finally, we gave each analyst a set of 1575 concept node definitions to review. These included definitions to extract 8 types of information: jv-enfities, facilities, person names, product/service descriptions, ownership percentages, total revenue amounts, revenue rate amounts, and ownership capitalization amounts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We did not give the analysts all of the concept node definitions proposed by AutoSlog for the EJV domain. AutoSlog actually proposed 3167 concept node definitions, but the analysts were only available for two days and we did not expect them to be able to review 3167 definitions in this limited time frame. So we created an \"abridged\" version of the dictionary by eliminating iventity and product/service patterns that appeared only infrequently in the corpus. 1 The resulting \"abridged\" dictionary contained 1575 concept node definitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We compared the analysts' dictionaries with the dictionary generated by UMass for the final Tipster evaluation. However, the official UMass dictionary was based on the complete set of 3167 definitions originally proposed by AutoSlog as well as definitions that were spawned by AutoSlog's optional generalization modules. We did not use the generalization modules in this experiment, due to time constraints. To create a comparable UMass dictionary, we removed all of the \"generalized\" definitions from the UMass dictionary as well as the definitions that were not among the 1575 given to the analysts. The resulting UMass dictionary was a much smaller subset of the official UMass dictionary. Analyst A took approximately 12.0 hours and Analyst B took approximately 10.6 hours to filter their respective dictionaries. Figure 2 shows the number of definitions that each analyst kept, separated by types. For comparison's sake, we also show the breakdown for the smaller UMass dictionary. IWhile processing the training corpus, AutoSlog keeps tzack of the number of times that it proposes each definition (it may propose a definition more than once if the same pattern appears multiple times in the corpus). We removed all jv-entity definitions that were proposed < 2 times and all product/service definitions that were proposed < 3 times. We eliminated jv-entity and product/service definitions only because the sheer number of these definitions overwhelmed the other types. ", "cite_spans": [], "ref_spans": [ { "start": 818, "end": 826, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Pattern:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "%JV-ENTITY-NAME-PP-ACTIVE -VERB -TEAMED -UP-WITH%", "sec_num": null }, { "text": "\"TEA~ED UP WITH \" Trigger:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "%JV-ENTITY-NAME-PP-ACTIVE -VERB -TEAMED -UP-WITH%", "sec_num": null }, { "text": "TEAMED (VERB) Doc ID: \"0016\" We compared the dictionaries constructed by the analysts with the UMass dictionary in the following manner. We took the official UMass/I-Iughes system, removed the official UMass dictionary, and replaced it with a new dictionary (the smaller UMass dictionary or an analysts' dictionary). One complication is that the UMass/Hughes system includes two modules, TFG and MayTag, that use the concept node dictionary during training. In a clean experimental design, we should ideally retrain these components for each new dictionary. We did retrain the template generator (TFG), but we did not retrain MayTag. We expect that this should not have a significant impact on the relative performances of the dictionaries, but we are not certain of its exact impact. Finally, we scored each new version of the UMass/Hughes system on the Tips3 test set. Figure 3 shows the results for each dictionary.", "cite_spans": [], "ref_spans": [ { "start": 871, "end": 879, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "%JV-ENTITY-NAME-PP-ACTIVE -VERB -TEAMED -UP-WITH%", "sec_num": null }, { "text": "The F-measures (P&R) were extremely close across all 3 dictionaries. In fact, both analysts' dictionaries achieved slightly higher F-measures than the UMass dictionary. The error rates (ERR) for all three dictionaries were identical. But we do see some variation in the recall and precision scores. We also see variations when we score the three parts of Tips3 separately (see Figure 4) .", "cite_spans": [], "ref_spans": [ { "start": 377, "end": 386, "text": "Figure 4)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "%JV-ENTITY-NAME-PP-ACTIVE -VERB -TEAMED -UP-WITH%", "sec_num": null }, { "text": "In general, the analysts' dictionaries achieved slightly higher recall but lower precision than the UMass dictionary. We hypothesize that this is because the UMass researcher was not very familiar with the corpus and was therefore somewhat conservative about keeping definitions. The analysts were much more familiar with the corpus and were probably more willing to keep definitions for patterns that they had seen before. There is usually a trade-off involved in making these decisions: a liberal strategy will often result in higher recall but lower precision whereas a conservative strategy may result in lower recall but higher precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "%JV-ENTITY-NAME-PP-ACTIVE -VERB -TEAMED -UP-WITH%", "sec_num": null }, { "text": "frequently triggered by a given test set. If the three dictionaries were in agreement on that subset of the dictionary that is most heavily used, those definitions could dominate overall system performance. Some dictionary definitions are more important than others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "%JV-ENTITY-NAME-PP-ACTIVE -VERB -TEAMED -UP-WITH%", "sec_num": null }, { "text": "To summarize, this experiment suggests that domain experts can successfully use AutoSlog to build domainspecific dictionaries for information extraction. With only 1.5. hours of training, two domain experts constructed dictionaries that achieved performance comparable to a dictionary constructed by a UMass researcher. Although this was only one small experiment, the results lend credibility to the claim that domain experts can build effective dictionaries for inf(m'nation extraction. It is interesting to note that even though there was great variation across the individual dictionaries (see Figure 2 ), the resulting scores were very similar. This may be because some definitions can contribute a disproportionate amount of performance if they are ", "cite_spans": [], "ref_spans": [ { "start": 598, "end": 606, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "%JV-ENTITY-NAME-PP-ACTIVE -VERB -TEAMED -UP-WITH%", "sec_num": null } ], "back_matter": [], "bib_entries": {}, "ref_entries": { "FIGREF0": { "text": "STORAGE BATIERY C0. ANNOUNCED IT HAS TEAMED UP WITH A LEADING FRENCH BAq'rERY MAKER, SAFT S.A., TO SET UP A JOINT VENqWJRE IN JAPAN TO MARKET SMALL BATTERIES.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Figure h The AutoSlog Interface Tool", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Comparative Scores for Partl, Part2, and Part3", "type_str": "figure", "num": null, "uris": null }, "TABREF2": { "html": null, "type_str": "table", "num": null, "text": "Riloff, E. \"Automatically Constructing a Dictionary for Information Extraction Tasks\". Proceedings of the Eleventh National Conference on ArtificialIntelligence. 1993. pp. 811-816.", "content": "
TIPS3RecallPrecision , iP&RERR
UMass/Hughes18i,5127.0683
Analyst A194727.3983
Analyst B204727.8983
Figure 3: Comparative Scores for Tips3
TIPS3/PartlRecallPrecisionP&RERR
UMass/Hughes1851=27.0483
Analyst A204828.0082
Analyst B224729.6981
TIPS3/Part2RecallPrecisionP&RERR
UMass/Hughes175226.0384
Analyst A184825.9284
Analyst B204727.7583
TIPS3/Part3RecallPrecisionP&RERR
UMass/I-Iughes205028.1282
Analyst A204627.9682
Analyst B174825.2584
" } } } }