{ "paper_id": "C94-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:49:42.060486Z" }, "title": "Coping With Ambiguity in a Large-Scale Machine Translation System", "authors": [ { "first": "Kathryn", "middle": [ "L" ], "last": "Baker", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA" } }, "email": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Franz", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA" } }, "email": "" }, { "first": "Pamela", "middle": [ "W" ], "last": "Jordan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA" } }, "email": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA" } }, "email": "" }, { "first": "Eric", "middle": [ "H" ], "last": "Nyberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In an interlingual knowledge-based machine translation system, ambiguity arises when the source 1.qnguage analyzer produces more than one interlingua expression for a source sentence. This can have a negative impact on translation quality, since a target sentence may be produced from an unintended meaning. In this paper we describe the ,nethods nsed in the KANT machine translation system to reduce or eliminate ambiguity in a large-scale application domain. We also test these methods on a large corpus of test sentences, in order to illustrate how the different disambiguation methods redtuce the average number of parses per sentence,", "pdf_parse": { "paper_id": "C94-1012", "_pdf_hash": "", "abstract": [ { "text": "In an interlingual knowledge-based machine translation system, ambiguity arises when the source 1.qnguage analyzer produces more than one interlingua expression for a source sentence. This can have a negative impact on translation quality, since a target sentence may be produced from an unintended meaning. In this paper we describe the ,nethods nsed in the KANT machine translation system to reduce or eliminate ambiguity in a large-scale application domain. We also test these methods on a large corpus of test sentences, in order to illustrate how the different disambiguation methods redtuce the average number of parses per sentence,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The KANT system [Mitamura etal., 1991] is a system for Knowledge-basexl, Accurate Natural-language Translation. The system is used in focused technical domains for multilingual translation of controlled source language documents. KANT is an interlingua-based system: the sonrce language analyzer produces an interlingua expression for each source sentence, and this interlingua is processed to produce the corresponding target sentence. The problen3 el' ambiguity arises when the system produces more that~ ()tie interlingua representation for a single input sentence. 
{ "text": "Ambiguity can occur at different levels of processing in source analysis. In this paper, we describe how we cope with ambiguity in the KANT controlled lexicon, grammar, and semantic domain model, and how these are designed to reduce or eliminate ambiguity in a given translation domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The KANT domain lexicon and grammar are a constrained subset of the general source language lexicon and grammar. The strategy of constraining the source text has three main goals (Figure 1: The KANT System). First, it encourages clear and direct writing, which is beneficial to both the reader of the source text and to the translation process. Second, it facilitates consistent writing among the many authors who use the system and across all document types. And third, the selection of unambiguous words and constructions to be used during authoring reduces the necessity for ambiguity resolution during the automatic stages of processing. It is important to reduce the processing overhead associated with ambiguity resolution in order to keep the system fast enough for on-line use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constraining the Source Text", "sec_num": "2" }, { "text": "The domain lexicon is built using corpus analysis. Lists of terms, arranged by part of speech, are automatically extracted from the corpus [Mitamura et al., 1993]. The lexicon consists of closed-class general words, open-class general words, idioms, and nomenclature phrases. Closed-class general words (e.g. the, with, should) are taken from general English. Open-class general words (e.g. drain, run, hot) are limited in the lexicon to one sense per part of speech, with some exceptions (for example, in the heavy-equipment lexicon, there are a few hundred terms out of 60,000 which have more than one sense per part of speech). Idioms (e.g. on and off) and nomenclature phrases (e.g. summing valve) are domain-specific and are limited to those phrases identified in the domain corpus. Phrases, too, are defined with a single sense. Special vocabulary items, including symbols, abbreviations, and the like, are restricted in use and are chosen for the lexicon in collaboration with domain experts. Senses for prepositions, which are highly ambiguous and context-dependent, are determined during processing using the semantic domain model (cf. Section 4).", "cite_spans": [ { "start": 141, "end": 163, "text": "[Mitamura et al., 1993]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Domain Lexicon", "sec_num": "2.1" },
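To make the one-sense-per-part-of-speech restriction concrete, here is a minimal sketch of a controlled lexicon lookup and a naive vocabulary check. The data layout, sense labels, and function names are illustrative assumptions, not the actual KANT lexicon format.

```python
# Editorial sketch, not the actual KANT lexicon format: a toy controlled lexicon with
# at most one sense per part of speech for open-class words, plus a phrasal lexicon
# for idioms and nomenclature phrases. All entries and sense labels are invented.

CONTROLLED_LEXICON = {
    # word -> {part of speech: the single sense permitted in the domain}
    "drain": {"verb": "drain-fluid", "noun": "drain-component"},
    "hot": {"adjective": "high-temperature"},
    "run": {"verb": "operate"},
}

PHRASAL_LEXICON = {
    # multiword idioms and nomenclature phrases, each with a single sense
    "on and off": "intermittently",
    "summing valve": "summing-valve",
}

def lookup_sense(word: str, pos: str):
    """Return the one sense licensed for this word and part of speech, or None."""
    return CONTROLLED_LEXICON.get(word.lower(), {}).get(pos)

def unknown_words(tokens):
    """Flag tokens outside the controlled vocabulary so the author can rewrite them
    (a simplification of a vocabulary-checking step)."""
    phrase_words = {w for phrase in PHRASAL_LEXICON for w in phrase.split()}
    return [t for t in tokens
            if t.lower() not in CONTROLLED_LEXICON and t.lower() not in phrase_words]
```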
{ "text": "Nominal compounds in the domain may be several words long. Because of the potential ambiguity associated with compositional parsing of nominal compounds, non-productive nominal compounds are listed explicitly in the lexicon as idioms or nomenclature phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Domain Lexicon", "sec_num": "2.1" }, { "text": "Some constructions in the general source language that are inherently ambiguous are excluded from the restricted grammar, since they may lead to multiple analyses during processing: \u2022 Conjunction of VPs, ADJs, or ADVs e.g. *Extend and retract the cylinder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "\u2022 Pronominal reference, e.g. *Start the engine and keep it running.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "\u2022 Ellipsis, e.g. reduced relative clauses: *the tools !~t for the procedure", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "\u2022 Long-distance dependencies, such as interrogatives and object-gap relative clauses, e.g. The parts which the service representative ordered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "\u2022 Nominal compounding which is not explicitly coded in the phrasal lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "On the other hand, the grammar includes the following constructions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "\u2022 Active, passive and imperative sentences, e.g. Start the engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "\u2022 Conjunction of NPs, PPs or Ss. Sentences may be conjoined using coordinate or subordinate conjunctions, e.g. If you are on the last parameter, then the program proceeds to the top.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "\u2022 Subject-gap relative clauses, e.g. The service representative can determine the parts which are faulty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" }, { "text": "The recommendations in the controlled grammar include guidelines for authoring, such as how to rewrite a text from general English into the domain language. Authors are advised, for example, to choose the most concise terms available in the lexicon and to rewrite long, conjoined sentences into short, simple ones. The recommendations are useful both for rewriting old text and creating new text (see Figure 2 for examples).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Controlled Grammar", "sec_num": "2.2" },
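The effect of excluding these constructions can be illustrated with a crude, pattern-based approximation of a controlled-grammar check. A real system would rely on the parser itself; the regular expressions and construction names below are illustrative assumptions only.

```python
# Editorial sketch, not from the paper: a crude, pattern-based stand-in for a
# controlled-grammar check. A real system would use the parser; these regular
# expressions only hint at how restricted constructions might be flagged so that
# the author can rewrite the sentence.

import re

RESTRICTED = {
    "pronominal reference":
        re.compile(r"\b(it|they|them|this|these|those)\b", re.IGNORECASE),
    "conjoined imperative VPs":
        re.compile(r"^\s*\w+ and \w+ the\b", re.IGNORECASE),
}

def grammar_warnings(sentence: str):
    """Return the names of restricted constructions the sentence appears to use."""
    return [name for name, pattern in RESTRICTED.items() if pattern.search(sentence)]

# The paper's excluded examples are flagged; the allowed imperative is not.
assert grammar_warnings("Start the engine and keep it running.")
assert grammar_warnings("Extend and retract the cylinder.")
assert not grammar_warnings("Start the engine.")
```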
{ "text": "The parts must be reassembled. The set of markup tags for our application was developed in conjunction with domain experts. A set of domain-specific tags is used not only to demarcate the text but also to identify the content of potentially ambiguous expressions, and to help during vocabulary checking. For example, at the lexical level, number tags identify numerals as diagram callouts, part numbers, product model numbers, or parts of measurement expressions. At the syntactic level, rules for tag combinations restrict how phrases may be constructed, as with tagged part numbers and part names (see Figure 3 for an example).", "cite_spans": [], "ref_spans": [ { "start": 614, "end": 622, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Suggested Rewrite:", "sec_num": null },
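One way the tag-based restrictions could work in practice is sketched below. The tag names (callout, partnum, model, measurement) and the SGML-like syntax are illustrative assumptions; the paper only states that the actual tag set was developed with domain experts.

```python
# Editorial sketch, not from the paper: how SGML-style tags could disambiguate numerals
# before parsing. The tag names (callout, partnum, model, measurement) are assumptions;
# the actual KANT tag set was defined together with domain experts.

import re

NUMBER_TAGS = ("callout", "partnum", "model", "measurement")
TAGGED_NUMBER = re.compile(
    r"<(?P<tag>" + "|".join(NUMBER_TAGS) + r")>(?P<value>[^<]+)</(?P=tag)>")

def classify_numbers(marked_up_text: str):
    """Map each tagged numeral to the single interpretation its tag licenses,
    so the analyzer never has to guess what a bare number means."""
    return [(m.group("tag"), m.group("value").strip())
            for m in TAGGED_NUMBER.finditer(marked_up_text)]

# Hypothetical markup: a part name followed by its tagged part number and a callout.
print(classify_numbers(
    "Install the summing valve <partnum>4S152-1</partnum> as shown in "
    "<callout>3</callout>."))
# -> [('partnum', '4S152-1'), ('callout', '3')]
```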
{ "text": "The 4S152-1