{ "paper_id": "M93-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:33.493001Z" }, "title": "SRA: DESCRIPTION OF THE SOLOMON SYSTEM AS USED FOR. MUC-5", "authors": [ { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sharon", "middle": [], "last": "Flank", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Doug", "middle": [], "last": "Mckee", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Paul", "middle": [], "last": "Kraus", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "SRA used a language-independent, domain-independent, multipurpose text understanding system as the cor e of the MUC-5 system for extraction from English and Japanese joint venture texts. SRA's NLP core system , SOLOMON, has been under development since 1986. It has been used for a variety of domains, and wa s aimed from the start to be language-independent, domain-independent, and application-independent. Mor e recently, SOLOMON has been extended to be multilingual, beginning with Spanish in 1990 and Japanese i n 1991. The Spanish-Japanese text understanding system that uses SOLOMON was developed for a dornai n very different from the MUC-5 joint venture domain (cf. Aone, et al. [2]). SOLOMON's principal applications have been in data extraction, but it is also used in a prototyp e machine translation system (cf. Aone and McKee [5]). The domain areas in which SOLOMON application s have been developed are : financial, terrorism, medical, and the MUC-5 joint-venture domain. SRA has significantly enhanced its capability to add new domains and languages by developing new strategies fo r data acquisition using both statistical techniques and a variety of user-friendly tools. MUC-5 SYSTEM ARCHITECTUR E SOLOMON employs a modular, data-driven architecture to achieve its language-and domain-independence. The MUC-5 system, which uses SOLOMON as a core engine, consists of seven processing modules an d corresponding data modules, as shown in Figure 1, which will be described in the following sections .", "pdf_parse": { "paper_id": "M93-1018", "_pdf_hash": "", "abstract": [ { "text": "SRA used a language-independent, domain-independent, multipurpose text understanding system as the cor e of the MUC-5 system for extraction from English and Japanese joint venture texts. SRA's NLP core system , SOLOMON, has been under development since 1986. It has been used for a variety of domains, and wa s aimed from the start to be language-independent, domain-independent, and application-independent. Mor e recently, SOLOMON has been extended to be multilingual, beginning with Spanish in 1990 and Japanese i n 1991. The Spanish-Japanese text understanding system that uses SOLOMON was developed for a dornai n very different from the MUC-5 joint venture domain (cf. Aone, et al. [2]). SOLOMON's principal applications have been in data extraction, but it is also used in a prototyp e machine translation system (cf. Aone and McKee [5]). The domain areas in which SOLOMON application s have been developed are : financial, terrorism, medical, and the MUC-5 joint-venture domain. SRA has significantly enhanced its capability to add new domains and languages by developing new strategies fo r data acquisition using both statistical techniques and a variety of user-friendly tools. 
MUC-5 SYSTEM ARCHITECTURE SOLOMON employs a modular, data-driven architecture to achieve its language- and domain-independence. The MUC-5 system, which uses SOLOMON as a core engine, consists of seven processing modules and corresponding data modules, as shown in Figure 1, which will be described in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Inference is not performed if sentence and paragraph boundaries are rigorously marked. The output is piped to a post-processor, which does a fast lookup of each word in a btree gazetteer, and includes entry information in the tokens of place names.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Preprocessing consists of two processors, the morphological analyzer and the pattern matcher, and associated data in the form of morphological data, lexicons, and patterns for each language. Its input is a tokenized message, and its output is a series of lexical entries with syntactic and semantic attributes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": null }, { "text": "Declarative morphological data for inflection-rich Japanese and Spanish is compiled into finite-state machines. The English domain lexicon was derived from development texts automatically, using a statistical technique (cf. McKee and Maloney [10]). This derived lexicon also contains automatically acquired domain-specific subcategorization frames and predicate-argument mapping rules called situation types (cf. Aone and", "cite_spans": [ { "start": 245, "end": 249, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 417, "end": 421, "text": "Aone", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": null }, { "text": "McKee [3]), as shown in Figure 2.", "cite_spans": [ { "start": 6, "end": 9, "text": "[3]", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 25, "end": 33, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Preprocessing", "sec_num": null }, { "text": "Pattern recognition handles a wide range of phenomena, including multi-words, numbers, acronyms, money, dates, person names, locations, and organizations. We extended the pattern matcher to handle multilevel pattern recognition. The pattern data are divided into ordered multiple groups called priority groups, and the patterns in each group are fired sequentially, avoiding recursive applications as much as possible. This extension sped up the performance of Preprocessing significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": null }, { "text": "The processor for Syntactic Analysis is a parser based on Tomita's algorithm (cf. Tomita [11]), with modifications for disambiguation during parsing. Syntactic Analysis data consist of X-bar based phrase structure grammars and preparse patterns for each of the three languages, English, Japanese, and Spanish. Syntactic Analysis outputs F-structures (grammatical relations), along the lines of Lexical-Functional Grammar (cf. Bresnan [7]), as shown in Figure 3. The Semantic Interpretation module is interleaved for disambiguation. Preparsing takes the burden off of main parsing and increases accuracy by recognizing structures such as sentential complements, appositives, certain PP's, etc. by pattern matching, and sending these to the parser as chunks. 
These preparse chunks are parsed prior to main parsing using the same grammars, and their output consists of F-structures as well.", "cite_spans": [ { "start": 440, "end": 443, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 459, "end": 467, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "\u2022 Appositives: [Japanese example] \"industry's largest Tokyo Kaijou\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "\u2022 Sentences with certain verb endings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "[Japanese example]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "\u2022 PP's: start production [in january 1990] with production of 20,000 iron", "cite_spans": [ { "start": 27, "end": 44, "text": "[in january 1990]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "In order to test the progress of grammar development and pinpoint trouble spots, automatic evaluation of grammars was used. SRA adapted the community-wide program Parseval (cf. Black et al. [6]) for use in Japanese in addition to English. Testing on Japanese was limited, since there are not many bracketed Japanese texts to use as answer keys.", "cite_spans": [ { "start": 194, "end": 197, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Analysis", "sec_num": null }, { "text": "Semantic Interpretation uses a language-independent processing module, and its data are predicate-argument mapping rules for each verb, plus both core and domain knowledge bases. Semantic Interpretation works", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Interpretation", "sec_num": null }, { "text": "A JAPANESE TRADING HOUSE ...). Domain knowledge bases, on the other hand, were acquired manually. However, a new rapid knowledge acquisition tool called KATool was used to link a lexical entry to its corresponding semantic concept in the knowledge bases (cf. Figure 5).", "cite_spans": [], "ref_spans": [ { "start": 266, "end": 274, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "BRIDGESTONE SPORTS CO. SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND", "sec_num": null }, { "text": "If a full parse cannot be created, SOLOMON uses a fragment combination strategy. Debris Parsing and its subsequent process, Debris Semantics, work together to obtain the best interpretation from sentence fragments. They use as data the grammars and knowledge bases, and they output semantic structures just like when a full parse is created. Debris Parsing retrieves the largest and most preferred constituents from the parse stack. It then reparses the rest of the input, and creates debris F-structures with the best fragment constituents. 
Debris Semantics relies on the semantic interpreter to process each fragment, and then fits fragments together using semantic constraints on unfilled slots.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "[ST : SUBJECT : [ST : HEAD : IT ] PREDICATE : [ST : TENSE : PRESENT ASPECT : PERFECT PREDICATE : (CREATE) ROOT : SET VERB-PARTICLE : UP ] OBJECT : [ST : HEAD : A-JOINT-VENTURE] PREP-ARGS : ([ST : MARKED : WITH HEAD : A-LOCAL-CONCERN-AND-A-JAPANESE-TRADING-HOUSE]) ADJUNCTS : ([ST : MARKED : IN HEAD : TAIWAN])]] ]", "sec_num": null }, { "text": "Discourse Analysis, which was redesigned and implemented this year (cf. Aone and McKee [4]), performs reference resolution. Discourse Analysis uses a data-driven architecture to achieve language-independence, domain-independence, and extensibility. It employs a single language-independent, domain-independent processor, and several discourse knowledge bases, some of which are shared among different languages. The output of Discourse Analysis is a set of semantic structures with coreference links added, i.e. File Cards (cf. Heim [9]). Discourse phenomena handled for the joint venture domain include name anaphora (e.g. \"BRIDGESTONE SPORTS\" for \"BRIDGESTONE SPORTS CO.\") and definite NP's such as \"THE NEW COMPANY\".", "cite_spans": [ { "start": 88, "end": 91, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 545, "end": 548, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Analysis", "sec_num": null }, { "text": "The system traces for English and Japanese walkthrough examples are shown in Figure 6 and Figure 7. In the English example, the two instances of name anaphora for \"Bridgestone Sports Co.\" are recognized, while in the Japanese example, all the references to \"Tokyo Kaijou Kasai Hoken,\" including appositives, are resolved.", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 102, "text": "Figure 6", "ref_id": null }, { "start": 107, "end": 115, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Discourse Analysis", "sec_num": null }, { "text": "Pragmatic Inferencing performs reasoning in order to derive implicit information from the text, using a forward chainer and inference rules. Pragmatic Inferencing outputs semantic structures, with inferred information added. It infers additional information from \"literal\" meanings as required for application domains. For instance, in the walkthrough example, in order to infer that \"THE TAIWAN UNIT\" is a joint venture company from the phrase \"THE ESTABLISHMENT OF THE TAIWAN UNIT\", the following rule is used. It is easy for developers to add, change or remove inferred information due to the declarative nature of the inference rules. For instance, to get an additional tie-up from \"Company A and Company B tied with Company C\", we just had to add another rule to infer that when companies \"tie,\" they form a tie-up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pragmatic Inferencing", "sec_num": null }, { "text": "The Extract module performs template generation, translating the domain-relevant portions of our language-independent semantic structures into database records. We maintain a strong distinction between processing and data even in template generation. 
Thus, we use the same processing module to output in different languages and to several database schemata, including a flat template-style schema as in MUC-4 and a more object-oriented schema as in MUC-5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extract", "sec_num": null }, { "text": "To do the actual template filling, we rely on Extract data made up of kb-object/slot to db-table/field mapping rules and conversion functions for the individual values (e.g. set fills, string fills). For example, the #nationality slot of an #ORGANIZATION object in our knowledge base corresponds to the Nationality field of the Entity object in the MUC-5 template.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extract", "sec_num": null }, { "text": "SOLOMON is designed for reusability. Each processing module is data-driven and reusable in other languages and other domains, as well as in applications other than data extraction (e.g. machine translation, abstracting, summarization). A large portion of the data is also reusable in other domains and other languages. The data acquisition tools and techniques are also reusable in other languages and domains. The statistical techniques used to derive lexical information can be reused for other domains. LEXTool, the lexicon acquisition tool, is multilingual and relies on system data files for category and morphological information. KBTool, the knowledge base acquisition tool, is language-independent just as the knowledge bases are language-independent. KATool, the knowledge acquisition tool that links lexicon entries with the appropriate knowledge base concepts, is entirely data-driven as well, and is therefore completely reusable. Figure 8 summarizes the reusability of SRA's MUC-5 system.", "cite_spans": [], "ref_spans": [ { "start": 921, "end": 929, "text": "Figure 8", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "REUSABILITY OF THE SYSTEM", "sec_num": null }, { "text": "Our MUC-5 results for the English and Japanese joint-venture domain task are shown in Table 1. We spent 10.55 person-months on this task, most of which were devoted to data development for both languages (see Table 2). The \"other\" category includes time spent on developing language-independent data such as a joint-venture domain knowledge base, pragmatic inference rules, and Extract data for template generation.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 214, "end": 221, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "TEST RESULTS AND ANALYSIS", "sec_num": null }, { "text": "We believe that the results do not indicate the potential of our system, since the system performance for both languages was still improving after five months of development. Much of the work we did resulted in long-term improvements to our overall text understanding capability, all of which will ensure a stronger base system for future applications. This implies that although the development cycle for a data extraction system built on a text understanding system may be slower at its current maturity stage, the potential of such a system is still unknown and represents a most promising avenue for development. 
We are particularly pleased with the success of our Japanese system: no other Japanese MUC-5 site is using the full understanding approach, but we did as well, and our performance continues to improve.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TEST RESULTS AND ANALYSIS", "sec_num": null }, { "text": "Staff time was the major limiting factor. We needed more time to perform more testing and evaluation using the scoring program, and to finely tune Extract (template generation) mapping rules. (In the 18-month Tipster evaluation, the highest JJV F-measure was about 40.) We discovered we were hampered by formatting errors, and in addition considerable information was \"understood\" by the system all the way through, but was not extracted by the template generator. Since the discourse module was new, it would have been helpful to have additional time to test and expand it. In addition, we needed more time to fill the OWNERSHIP, REVENUE, and TIME objects, which we simply did not output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TEST RESULTS AND ANALYSIS", "sec_num": null }, { "text": "Overall, the data-driven architecture in SOLOMON allowed for minimal work on processing modules when working on different languages and domains. We ported the system to Spanish in a week for the demonstration given at the MUC-5 conference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": null }, { "text": "Although we successfully acquired large amounts of domain data from domain texts in both languages, using both statistical methods and newly developed user-friendly knowledge acquisition tools, we recognize the need to move even more quickly to new domains and languages. We plan to continue our work on automatic acquisition of lexicons, knowledge bases, and links between them in multiple languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": null }, { "text": "Tuning the performance of each module (e.g. parsing, discourse analysis), as well as the performance of the whole system, to a particular task more rapidly is another research issue we identified. We believe that developing automatic evaluation and training algorithms for such automated module/system tuning is crucial to developing a data extraction system that produces optimal results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": null } ], "back_matter": [ { "text": "We are indebted to Rajeev Agarwal, Debbie Sanders, and Vera Zlatarski for their hard work and dedication in data development, module testing, and more. We also gratefully acknowledge the contributions of Scott Bennett, David Garfield, and Hatte Blejer to the MUC-5 process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACKNOWLEDGEMENTS", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Compilers: Principles, Techniques, and Tools", "authors": [ { "first": "Alfred", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "Ravi", "middle": [], "last": "Sethi", "suffix": "" }, { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. 
Addison-Wesley, 1986.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The Murasaki Project: Multilingual Natural Language Understanding", "authors": [ { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Hatte", "middle": [], "last": "Blejer", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Flank", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Mckee", "suffix": "" }, { "first": "Sandy", "middle": [], "last": "Shinn", "suffix": "" } ], "year": 1993, "venue": "Proceedings of the ARPA Human Language Technology Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinatsu Aone, Hatte Blejer, Sharon Flank, Douglas McKee, and Sandy Shinn. The Murasaki Project: Multilingual Natural Language Understanding. In Proceedings of the ARPA Human Language Technology Workshop, 1993.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Acquiring Predicate-Argument Mapping Information from Multilingual Texts", "authors": [ { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Mckee", "suffix": "" } ], "year": 1993, "venue": "Acquisition of Lexical Knowledge from Text: Proceedings of a Workshop Sponsored by the Special Interest Group on the Lexicon of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinatsu Aone and Doug McKee. Acquiring Predicate-Argument Mapping Information from Multilingual Texts. In Acquisition of Lexical Knowledge from Text: Proceedings of a Workshop Sponsored by the Special Interest Group on the Lexicon of the Association for Computational Linguistics, 1993.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language-Independent Anaphora Resolution System for Understanding Multilingual Texts", "authors": [ { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Mckee", "suffix": "" } ], "year": 1993, "venue": "Proceedings of 31st Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinatsu Aone and Doug McKee. Language-Independent Anaphora Resolution System for Understanding Multilingual Texts. In Proceedings of the 31st Annual Meeting of the ACL, 1993.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Three-Level Knowledge Representation of Predicate-Argument Mapping for Multilingual Lexicons", "authors": [ { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Mckee", "suffix": "" } ], "year": 1993, "venue": "AAAI Spring Symposium Working Notes on Building Lexicons for Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinatsu Aone and Doug McKee. Three-Level Knowledge Representation of Predicate-Argument Mapping for Multilingual Lexicons. 
In AAAI Spring Symposium Working Notes on Building Lexicons for Machine Translation, 1993.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars", "authors": [ { "first": "E", "middle": [], "last": "Black", "suffix": "" }, { "first": "S", "middle": [], "last": "Abney", "suffix": "" }, { "first": "D", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "C", "middle": [], "last": "Gdaniec", "suffix": "" }, { "first": "R", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "P", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "R", "middle": [], "last": "Ingria", "suffix": "" }, { "first": "F", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "J", "middle": [], "last": "Klavans", "suffix": "" }, { "first": "M", "middle": [], "last": "Liberman", "suffix": "" }, { "first": "M", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "T", "middle": [], "last": "Strzalkowski", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the Fourth DARPA Speech and Natural Language Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Black, S. Abney, D. Flickinger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. In Proceedings of the Fourth DARPA Speech and Natural Language Workshop, 1991.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Mental Representation of Grammatical Relations", "authors": [], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Bresnan, editor. The Mental Representation of Grammatical Relations. MIT Press, 1982.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The SGML Handbook", "authors": [ { "first": "Charles", "middle": [ "F" ], "last": "Goldfarb", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles F. Goldfarb. The SGML Handbook. Oxford, 1990.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Semantics of Definite and Indefinite Noun Phrases", "authors": [ { "first": "Irene", "middle": [], "last": "Heim", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Heim. The Semantics of Definite and Indefinite Noun Phrases. PhD thesis, University of Massachusetts, 1982.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using Statistics Gained from Corpora in a Knowledge-Based NLP System", "authors": [ { "first": "Doug", "middle": [], "last": "Mckee", "suffix": "" }, { "first": "John", "middle": [], "last": "Maloney", "suffix": "" } ], "year": 1992, "venue": "Proceedings of The AAAI Workshop on Statistically-Based NLP Techniques", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doug McKee and John Maloney. Using Statistics Gained from Corpora in a Knowledge-Based NLP System. 
In Proceedings of The AAAI Workshop on Statistically-Based NLP Techniques, 1992.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Efficient Parsing for Natural Language", "authors": [ { "first": "Masaru", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masaru Tomita. Efficient Parsing for Natural Language. Kluwer, Boston, 1986.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "MUC-5 System Architecture. Sentence and paragraph boundaries are inferred using a conservative algorithm and marked as inferred.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "Statistically Acquired Lexical Entries. ...of prepositional phrase attachment, conjunctions, and so on, by calling semantic functions, which are shared by all three languages, from inside the grammar.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Simplified F-Structure Output by Syntactic Analysis. ...off of language-neutral F-structures in order to handle all the languages. It outputs semantic structures, i.e. predicate-argument and modification relations, as shown in Figure 4. The predicate-argument mapping rules (i.e. rules which map F-structures to semantic structures) are acquired automatically (cf. Aone and McKee [3]", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "BRIDGESTONE SPORTS CO. SAID FRIDAY IT HAS SET UP A JOINT VENTURE IN TAIWAN WITH A LOCAL CONCERN AND A JAPANESE TRADING HOUSE. Semantic (Predicate-Argument) Structure. [Japanese example]", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": "Knowledge Acquisition Tool. DISCOURSE: Classified $(\"BRIDGESTONE SPORTS\") as DP-NAME. DISCOURSE: Found an exact match, ante: $(DISCOURSE-MARKER DISCOURSE-MARKER-83>(\"BRIDGESTONE SPORTS CO.\"), ref: $(\"BRIDGESTONE SPORTS\"). DISCOURSE: Classified $(\"BRIDGESTONE SPORTS\") as DP-NAME. DISCOURSE: Found an exact match, ante: $(\"BRIDGESTONE SPORTS\"), ref: $(DISCOURSE-MARKER DISCOURSE-MARKER-206>(\"BRIDGESTONE SPORTS\"). English Discourse Trace Example. [Japanese discourse trace] Japanese Discourse Trace Example.", "type_str": "figure" }, "FIGREF5": { "num": null, "uris": null, "text": "defrule rule-0009 ((?event) (?event) ) :example (\"PNI and SRA established a new company .venture-company ?event ?x ) (in-jv-event ?x ?event)))", "type_str": "figure" }, "FIGREF7": { "num": null, "uris": null, "text": "General pattern data (e.g. date, location, personal name, organization name), Grammars, Some of the discourse knowledge sources \u2022 Other languages - Domain knowledge bases", "type_str": "figure" }, "FIGREF8": { "num": null, "uris": null, "text": "Reusability of SRA's MUC-5 System - Some of the discourse knowledge sources - Inference rules - Extract (template generation) data", "type_str": "figure" }, "TABREF1": { "html": null, "content": "
task          | person-months
EJV           | 3.2
JJV           | 2.2
Testing       | 1.5
Documentation | 0.25
Other         | 3.4
", "text": "SRA ' s Scores for the English and Japanese Joint Venture Domai n", "type_str": "table", "num": null }, "TABREF2": { "html": null, "content": "", "text": "", "type_str": "table", "num": null } } } }