{ "paper_id": "X96-1034", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:05:37.571013Z" }, "title": "AN EVALUATION OF COREFERENCE RESOLUTION STRATEGIES FOR ACQUIRING ASSOCIATED INFORMATION", "authors": [ { "first": "Lois", "middle": [ "C" ], "last": "Childs", "suffix": "", "affiliation": { "laboratory": "", "institution": "Lockheed Martin Corporation", "location": { "postBox": "P.O. Box 8048", "postCode": "19101", "settlement": "Philadelphia", "region": "PA" } }, "email": "lois@mds.lmco.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "X96-1034", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "As part of our TIPSTER research program [Contract Number 94-F133200-000], we have developed a variety of strategies to resolve coreferences within a free text document. Coreference is typically defined to mean the identification of noun phrases that refer to the same object. This paper investigates a more general view of coreference in which our automatic system identifies not only coreferential phrases, but also phrases which additionally describe an object. Coreference has been found to be an important component of many applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "The following example illustrates a general view of coreference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "American Express, the large financial institution, also known as Amex, will open an office in Peking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "In this example, we would like to associate the following information about American Express: its name is American Express;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "an alias for it is Amex; its location is Peking, China; and it can be described as the large financial institution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "In the work described in this paper, our goal was to evaluate the contributions of various techniques for associating an entity with three types of information: 1. Name Variations 2. Descriptive Phrases 3. Location Information", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "Descriptive Phrases", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set", "sec_num": null }, { "text": "The MUC6 Template Element task is typical of what our applications often require; it encapsulates information about one entity within the Template Element. Since we have a way to evaluate our performance on this task via the MUC6 data, we used it to conduct our experiments. The corpus for the MUC6 Template Element task consists of approximately 200 documents for development (pre- and post-dry-run) and 100 documents for scoring. The scoring set had previously been held blind, but it has been released for the purposes of a thorough evaluation of our methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Location Information", "sec_num": null }, { "text": "Scores discussed in this paper measure performance of experimental system reconfigurations run on the 100 documents used for the final MUC6 evaluation. These scores were generated for inter-experiment comparison purposes, using the MUC6 scoring program, v 1.3. 
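(For orientation in reading the score tables that follow: POS is the number of slot fills in the answer key, ACT the number the system produced, and COR the number judged correct; recall and precision are derived from these counts. A minimal sketch of the arithmetic, assuming the simple case with no partial credit:)

    def muc_scores(pos, act, cor):
        # recall = correct / possible, precision = correct / actual
        return round(100.0 * cor / pos), round(100.0 * cor / act)

    # e.g. the first PERSON ALIAS row reported later (POS=170, ACT=157, COR=146)
    # yields (86, 93), matching the REC and PRE columns.
    print(muc_scores(170, 157, 146))
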
Scores reported here are relevant only as relative measures within this paper and are not meant to represent official performance measures. Official MUC6 scores were generated using a later version of the scoring program. Furthermore, the scoring program results can vary depending on how the mapping between response and answer key is done. For example, if an automatic system has failed to make the link between a descriptor and a name, it may create two objects, one for each. The scoring system must then decide which object to map to the answer key. The scoring program tries to optimize the scores during mapping but, if two objects would score equally, the scoring program chooses arbitrarily, thus, in effect, sacrificing a slot as a penalty for coreference failure. In the following example, the slot can be either NAME or DESCRIPTOR, depending on the mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring", "sec_num": null }, { "text": "American Express TYPE: COMPANY", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Object 1 NAME:", "sec_num": null }, { "text": "Object 2 DESCRIPTOR: the large financial institution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Object 1 NAME:", "sec_num": null }, { "text": "TYPE: COMPANY", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Object 1 NAME:", "sec_num": null }, { "text": "Additionally, the answer key contains optional objects which are included in the scoring calculations only if they have been mapped to a response object. This sometimes causes a fluctuation in the number of possible correct answers, as reported by the scoring program. The scores, therefore, do not represent an absolute measure of performance. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Object 1 NAME:", "sec_num": null }, { "text": "Identifying variations of a person name or organization name is a basic form of coreference that underlies other strategies. Our process stores each newly recognized named entity, along with its computed variations and acronyms. The variations and acronyms are algorithmically generated without reference to the text. These are stored in a temporary lexicon so that variations of the name in the text can be recognized and linked to the original occurrence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NAME VARIATIONS", "sec_num": "2." }, { "text": "A careful examination of the name/alias results provides insight into the success of this technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NAME VARIATIONS", "sec_num": "2." }, { "text": "Approximately two-thirds of the aliases were correctly identified. Of the one-third that were missed, aside from an unfortunate system error that discarded four aliases the system had found, five main groups of errors were identified. They can be categorized as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NAME VARIATIONS", "sec_num": "2." }, { "text": "There were approximately five missed aliases that involved corporations and their subsidiaries. In these cases, the aliases were assigned to the wrong entity. Usually, these were stories in which corporate officers were transferring from one part of a company to another. Confusion can quickly ensue when trying to link an alias with the correct entity in this case. (This is often true for the human reader, as well.) 
Find the three organizations in the following list of phrases: Of course, presentation of the names as a list is unfair to the reader because it eliminates all context cues. Rules which allow the automatic system to take greater advantage of context cues will be developed for such specialized areas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corporate Subsidiaries", "sec_num": null }, { "text": "EMI Records Group,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corporate Subsidiaries", "sec_num": null }, { "text": "Another five missed aliases were found in scenarios of changing corporate identity. By the rules of the Template Element task, the old name should become the alias of the new name. When these scenarios went unrecognized by the system, the names were tagged as separate entities. The following is an example of a confusing name-changing scenario which the automatic system missed. Because there is some uncertainty within the text as to whether the change has already taken place, the second entity is given optional names covering both alternatives. This is difficult for an automatic extraction system to decipher.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corporate Name Changes", "sec_num": null }, { "text": "Many aliases are found because they are variations of names which have been recognized by their form (i.e., they contain a corporate designator such as Co.) or by their context (e.g., CEO of Atlas). Approximately ten missed aliases were due to the fact that the names themselves were not recognized. Improvement of name recognition is an on-going process as the system and its developers are exposed to more and more text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Missing Name", "sec_num": null }, { "text": "Name variations are generated algorithmically. There were only four aliases missed because they were not generated from the full name. Examination of the results has uncovered two new rules for making variations. These will be added to the set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incomplete Name Variation", "sec_num": null }, { "text": "First, the abbreviation portion of the name should be included within an acronym, for example, ARCO as alias for Atlantic Richfield Company and RLA as alias for Rebuild L.A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incomplete Name Variation", "sec_num": null }, { "text": "Second, a structural member like Chamber or Partnership can stand alone as a variation, for example, Chamber as alias for Chamber of Commerce and Partnership as alias for New York City Partnership.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incomplete Name Variation", "sec_num": null }, { "text": "It should be noted that our rule packages employ variable bindings to collect information during the pattern matching process. In the case of name variations, it would be helpful to tag the pattern's structural members that can stand alone as variants during the rule binding process. This can then guide the variation generator when that pattern has been matched.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incomplete Name Variation", "sec_num": null }, { "text": "Seven PERSON aliases were missed because the system did not know the first name, e.g. Clive, Vana, Rupert. 
The solution to this problem is not only to expand the system's knowledge of human first names, but also to widen the context which can trigger human name recognition. The system will be expanded to recognize as human those unknown words which are taking human roles, such as participating in family relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unusual Firstname", "sec_num": null }, { "text": "Our system had the second highest score in organization alias identification in the MUC6 evaluation. (See the MUC6 Conference proceedings for official scores.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on the Name/Alias Task", "sec_num": null }, { "text": "ORGANIZATION ALIAS SCORE (v1.3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on the Name/Alias Task", "sec_num": null }, { "text": "ACT COR INC REC PRE 170 153 110 2 65 72 Person alias scores were suppressed by 5 points of recall due to an error in the gender reference code. The following show the original scores and those after the error has been fixed. ", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 48, "text": "COR INC REC PRE 170 153 110 2 65 72", "ref_id": null } ], "eq_spans": [], "section": "PO6", "sec_num": null }, { "text": "Associating an organization name with a descriptor requires resolving coreferences among names, noun phrases, and pronouns. Several techniques are involved here. Appositives, prenominals, and name-modified head nouns are directly associated with their respective named entities during name recognition. After noun phrase recognition, those phrases which have not already been associated with a name are compared against known names in the text in order to find the correct referent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DESCRIPTIVE PHRASES", "sec_num": "3." }, { "text": "During name recognition, entities are directly linked, via variable bindings within the patterns, with descriptive phrases that make up their context. This is a thrifty process because it allows the system to mine the very context which it has used to recognize the entity in the first place, thus allowing it to store linked information with the entity discovered. In this manner, the system is able to link descriptive phrases that are found in the following forms: Since the Template Element task described here restricted the descriptor slot to a single phrase, our system sought to choose the most reliable of all the phrases which had been linked to an entity. It did this by ranking the descriptors based on their syntactic role. The following is the ranking used for the MUC6 system: This ranking gives greater confidence to those descriptors associated by context, with the default choice, the longest descriptor, having been associated by reference. 70% of our system's name-linked descriptors were associated by context. This is not surprising in view of our ranked selection system. The following is a score of the original configuration, using the ranked selection system. When the ranking is abandoned and the selection is based on the longest descriptor alone, 62% of the response descriptors are drawn from those associated by context. This change has a deleterious effect on the scores for the descriptor slot and confirms our hypothesis that the context-associated descriptors are more reliable. A surprising result of this experiment is that the percentage of descriptors associated by context is still so high. 
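(To make the ranked selection concrete, the following is a minimal sketch; the exact role ordering used in the MUC6 system is not reproduced in this parse, so the order shown is an assumption for illustration, except that context-associated roles such as appositives outrank phrases associated by reference, for which the longest phrase is the default.)

    # Illustrative only: role names and their ordering are assumed.
    ROLE_RANK = {'appositive': 0, 'prenominal': 1, 'name-modified head': 2, 'reference': 3}

    def select_descriptor(candidates):
        # candidates: (phrase, role) pairs linked to a single entity
        if not candidates:
            return None
        # best-ranked role first; among equally ranked phrases, prefer the longest
        return min(candidates, key=lambda c: (ROLE_RANK.get(c[1], 99), -len(c[0])))[0]

    # select_descriptor([('a company', 'reference'),
    #                    ('the large financial institution', 'appositive')])
    # returns the appositive regardless of length.
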
The high percentage is believed to be due to the predominance of such phrases within the set of noun phrases found by our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Association by Context", "sec_num": null }, { "text": "DESCRIPTOR SCORE (v1.3) - LONGEST", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Association by Context", "sec_num": null }, { "text": "Once an organization noun phrase has been recognized, the reference resolution module seeks to find its referent. This process involves several steps. First, the phrase is checked to make sure it hasn't already been associated by context. If not, a content filter for the phrase is run against a content-filtered version of each known organization name; if there is a match, the link is made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Association by Reference", "sec_num": null }, { "text": "Content Filters: \"the jewelry chain\" => (jewelry jewel chain); \"Smith Jewelers\" => (smith jewelers jeweler jewel)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Association by Reference", "sec_num": null }, { "text": "For example, if the organization noun phrase \"the jewelry chain\" is identified, its content filter would be applied to the list of known company names. When it reaches \"Smith Jewelers\", it will compare the filter against a filtered version of the name. The best match is considered the referent. If there is a tie, file position is considered as a factor, the closest name being the most likely referent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Association by Reference", "sec_num": null }, { "text": "To assess the value of this filtering mechanism, the MUC6 evaluation corpus was processed without the filter. The following results show that the filter did help the system link the correct descriptors; without it, the system lost five points of recall and seven points of precision. For generic phrases like \"the company\" and for pronouns referring to people, reference is currently determined solely by file position and entity type. Plans have been formulated to increase the sophistication of this selection process, and to expand the system to handle coreference of pronouns to organizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Association by Reference", "sec_num": null }, { "text": "DESCRIPTOR SCORE (v1.3) - WITHOUT FILTER", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Association by Reference", "sec_num": null }, { "text": "Because of the possibility that a text may refer to an un-named organization by a noun phrase alone, it is necessary to recognize all definite and indefinite noun phrases that may refer to an organization. The following are examples of some un-named organizations: Those phrases that have not already been associated with a named entity through context cues must then be associated by reference, if possible. For every definite noun phrase, if a reference can be found, it will be associated with that entity; otherwise, it will become an un-named entity. Every indefinite noun phrase that cannot be associated by context becomes an un-named entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named vs. Un-named Organizations", "sec_num": null }, { "text": "During the filtering process, the system used an additional heuristic to decide whether to apply a content filter to a noun phrase, or to make it into an un-named entity. 
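(A minimal sketch of this kind of content-filter matching follows; the function names, the crude suffix stripping, and the tie-breaking detail are illustrative assumptions rather than the actual implementation.)

    STOP = {'the', 'a', 'an', 'of'}

    def crude_stem(word):
        # illustrative only: jewelry -> jewel, jewelers -> jewel
        w = word.lower()
        for suffix in ('ers', 'er', 'ry', 's'):
            if w.endswith(suffix) and len(w) - len(suffix) >= 3:
                return w[:-len(suffix)]
        return w

    def content_filter(phrase):
        # bag of stemmed content words for a phrase or a name
        return {crude_stem(w) for w in phrase.split() if w.lower() not in STOP}

    def find_referent(noun_phrase, known_names):
        # known_names: (name, text_position) pairs for organizations seen so far
        target = content_filter(noun_phrase)
        best_name, best_key = None, (0, -1)
        for name, pos in known_names:
            key = (len(target & content_filter(name)), pos)  # overlap first, then closeness
            if key[0] > 0 and key > best_key:
                best_name, best_key = name, key
        return best_name

    # find_referent('the jewelry chain', [('Acme Corp.', 3), ('Smith Jewelers', 17)])
    # returns 'Smith Jewelers' via the shared content word jewel.
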
If a noun phrase is found to be especially rich in description, it is thought to be too specific to refer to a previous entity, and is made into an un-named entity. This heuristic turned out to be detrimental to performance; it suppressed the descriptor scores substantially. When the original configuration (i.e. ranked selection, favoring appositives) is run, without this heuristic, an increase of four recall and three precision points is achieved. DESCRIPTOR SCORE (v1.3) ", "cite_spans": [], "ref_spans": [ { "start": 624, "end": 647, "text": "DESCRIPTOR SCORE (v1.3)", "ref_id": null } ], "eq_spans": [], "section": "Named vs. Un-named Organizations", "sec_num": null }, { "text": "The majority of descriptors reported were found through association by context, even when the \"longest descriptor\" selection method is used. This is partly due to the relative scarcity of unattached organizational noun phrases. Sixty-eight of the 224 possible descriptors were missed because the system did not recognize the noun phrase as describing an organization. When the key's descriptive phrases were added directly to the system's knowledge base, as a hard-coded rule package, to eliminate this variable, the following scores were produced. The responses scored were produced with the original system configuration which uses the ranked selection system. When the system reverts to preferring the longest descriptor, the following scores are achieved. The decline in scores adds further confirmation to our hypothesis that the context-associated descriptors are more reliable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context vs. Reference", "sec_num": null }, { "text": "Finally, techniques for associating an organization name with location information are examined. This is an extension of traditional coreference, but a task we do in many applications. Biographical information about a person often falls into this category, e.g. address, telephone, or passport information. The intuition is that location information is found frequently within descriptive noun phrases and is extractable once that link has been established.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4. LOCATION INFORMATION", "sec_num": null }, { "text": "This approach was evaluated by examining the source of the answer key's locale fillers. It was found that 67% originated in appositives, prenominals, and post-modifiers, and 20% originated in other descriptive noun phrases. Breaking this down further, our system found 60% of those locale fillers which originated in prenominals, appositives, and post-modifiers, and 57% of the other 20%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4. LOCATION INFORMATION", "sec_num": null }, { "text": "In the work described in this paper, our goal was to evaluate the contributions of various coreference resolution techniques for acquiring information associated with an entity. We looked at our system's performance in the MUC6 Template Element evaluation in three areas:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "1. Name Variations 2. Descriptive Phrases 3. Location Information", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": "5." }, { "text": "Five areas were identified in which improvement to the name variation code is needed. Two areas will be improved by better modeling the events which may affect organizational names, e.g. 
the forming of subsidiaries and the changing of names. This can be extended to include other organizational events, such as corporate joint ventures. The third area, missing names, is an area of on-going improvement. Two new rules were identified to help the name variation algorithm. The last area of improvement, person names, can be improved on two fronts: 1) expanding the knowledge base of accepted first names, grouped by ethnic origin, and 2) better modeling frequent behaviors in which person names participate. The latter will be explored through automatic acquisition of person name context over a large corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name Variations", "sec_num": null }, { "text": "Despite the many areas for improvement that were identified, our system still had the second highest recall measure in organization alias, confirming the basic soundness of our approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name Variations", "sec_num": null }, { "text": "Examination of our system's performance in associating descriptive phrases to a referent entity brought us to several conclusions regarding our system's techniques. First, our method of directly linking entities to the descriptive phrases that make up their context via variable bindings within patterns has been very successful. Second, the content filter does contribute to the effectiveness of our coreference resolution; its absence caused our scores to decline. It may be improved by expanding the filter to include semantic categories via a facility like WordNet, or through our internal conceptual hierarchy. Third, the heuristic that caused the system to discard phrases that it deemed too specific for resolution was extremely costly to our performance. Fourth, our recognition of organizational noun phrases needs improvement. This may also benefit from a survey of typical contexts over a large corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Descriptive Phrases", "sec_num": null }, { "text": "Our system's success in identifying associated location information was due mainly to our method of collecting related information during name recognition, since 67% of the answer key's location information could be found within appositives, prenominals, and post-modifiers. As our methods of associating noun phrases by reference improve, our ability to associate location information may improve, as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Location Information", "sec_num": null }, { "text": "In summary, our system has incorporated many new techniques for associating coreferential information as part of our TIPSTER research program. This paper concludes that most of the techniques have been beneficial to our performance and suggests directions for further improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Performance", "sec_num": null } ], "back_matter": [], "bib_entries": {}, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "descriptor (found by reference)", "num": null }, "TABREF5": { "type_str": "table", "content": "
POS ACT COR INC REC PRE
170 157 146 1 86 93
PERSON ALIAS SCORE (v1.3) - ERROR FIXED
POS ACT COR INC REC PRE
170 167 155 1 91 93
", "text": "", "html": null, "num": null }, "TABREF12": { "type_str": "table", "content": "
APPOSITIVE
MUCster Group, a New York consulting firm,
PRENOMINAL
the New York consulting firm, MUCster Group
POST-MODIFIERS
MUCster Group (New York)
MUCster Group is based in New York
LOCALE/COUNTRY SCORE (v1.3)
POS ACT COR INC REC PRE
114 105 67 10 59 64
115 102 75 2 65 74
", "text": "This may account for our system's superior performance in identifying locale/country information; our scores were the highest of the MUC6 participants. (See the MUC6 Conference proceedings for official scores.) We believe that this success is due to our method of collecting related information during name recognition.", "html": null, "num": null } } } }