{ "paper_id": "1991", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:35:10.890293Z" }, "title": "Preprocessing and lexicon design fo r parsing technical -text 1", "authors": [ { "first": "Robert", "middle": [ "P" ], "last": "Futrelle", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastem University", "location": { "addrLine": "360 Huntington Avenue Boston", "postCode": "02115", "region": "MA" } }, "email": "" }, { "first": "Christopher", "middle": [ "E" ], "last": "Dunn", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastem University", "location": { "addrLine": "360 Huntington Avenue Boston", "postCode": "02115", "region": "MA" } }, "email": "" }, { "first": "Debra", "middle": [ "S" ], "last": "Ellis", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastem University", "location": { "addrLine": "360 Huntington Avenue Boston", "postCode": "02115", "region": "MA" } }, "email": "" }, { "first": "Maurice", "middle": [ "J" ], "last": "Pescitelli", "suffix": "", "affiliation": { "laboratory": "", "institution": "Northeastem University", "location": { "addrLine": "360 Huntington Avenue Boston", "postCode": "02115", "region": "MA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Technical documents with complex structures and orthography present special difficulties for current parsing technology. These include technical notation such as subscripts, superscripts and numeric and algebraic expressions as well as Greek letters, italics, small capitals, brackets and punctuation marks. Structural elements such as references to fi gu res, tables and bibliographic items also cause problems. We first hand-code documents in Standard Generalized Markup Lan gu age (SGML) to specify the document's logical structure (paragraphs, sentences, etc.) and capture significant orthography. Next, a regular expression analyzer produced by LEX is used to tokenize the SGML text. Then a token-based phrasal lexicon is used to identify the lon_ gest token sequences in the input that represent single lexical items. This lookup is efficient because limits on lookahead are precomputed for every item. After this, the Alvey Tools parser with specialized subgrammars is used to discover items such as floating-point numbers. The product of these 1 This work was supported by the Division of Instrumentation and Resources of the National Science Foundation, grant number DIR-88-14522. 31 preprocessing stages is a text that is acceptable to a full natural lan gu age parser. This work is directed towards automating th e building of knowledge bases from research articles in the field of bacterial chemotaxis, but the techniques should be of wide applicability. Stage 0: Obtain selected articles from primary biological literature, 1960-1990 Form 0: word complex-orthographic-item word word floating-point-number punctuation .... Stage 1: SGML encoding (tagging) while typing in article using SGML-based editor Form 1: sentence-start-tag word tagged-complex-item word word tagged-number .... Stage 2: Tokenization using regular-expression analyzer generated by LEX Form 2: SGML-symbol string complex-item-token ... tokens-for-number SGML-symbol .... \u2022 Stage 3: Lexicon lookup in token-based phrasal lexicon Form 3: fo und-item fo und-item fo und-item not-founds fo und-item not-founds .... 
Stage 4: Subgrammar analysis using Alvey syntactic and semantic tools. Form 4: found-item found-item found-item analyzed-structure not-found. Stage 5: Editor and lexicographer at the workbench resolve any remaining unknowns. Form 5: found-item found-item found-item analyzed-structure added-to-lexicon. Stage 6: Natural language parsing using Alvey GPSG-based tools. Form 6: Parse trees and logical form structures. Stage 7: Building knowledge frames ...", "pdf_parse": { "paper_id": "1991", "_pdf_hash": "", "abstract": [ { "text": "Technical documents with complex structures and orthography present special difficulties for current parsing technology. These include technical notation such as subscripts, superscripts and numeric and algebraic expressions, as well as Greek letters, italics, small capitals, brackets and punctuation marks. Structural elements such as references to figures, tables and bibliographic items also cause problems. We first hand-code documents in Standard Generalized Markup Language (SGML) to specify the document's logical structure (paragraphs, sentences, etc.) and capture significant orthography. Next, a regular expression analyzer produced by LEX is used to tokenize the SGML text. Then a token-based phrasal lexicon is used to identify the longest token sequences in the input that represent single lexical items. This lookup is efficient because limits on lookahead are precomputed for every item. After this, the Alvey Tools parser with specialized subgrammars is used to discover items such as floating-point numbers. The product of these preprocessing stages is a text that is acceptable to a full natural language parser. (This work was supported by the Division of Instrumentation and Resources of the National Science Foundation, grant number DIR-88-14522.) This work is directed towards automating the building of knowledge bases from research articles in the field of bacterial chemotaxis, but the techniques should be of wide applicability. Stage 0: Obtain selected articles from primary biological literature, 1960-1990. Form 0: word complex-orthographic-item word word floating-point-number punctuation ... Stage 1: SGML encoding (tagging) while typing in article using SGML-based editor. Form 1: sentence-start-tag word tagged-complex-item word word tagged-number ... Stage 2: Tokenization using regular-expression analyzer generated by LEX. Form 2: SGML-symbol string complex-item-token ... tokens-for-number SGML-symbol ... Stage 3: Lexicon lookup in token-based phrasal lexicon. Form 3: found-item found-item found-item not-founds found-item not-founds ... Stage 4: Subgrammar analysis using Alvey syntactic and semantic tools. Form 4: found-item found-item found-item analyzed-structure not-found. Stage 5: Editor and lexicographer at the workbench resolve any remaining unknowns. Form 5: found-item found-item found-item analyzed-structure added-to-lexicon. Stage 6: Natural language parsing using Alvey GPSG-based tools. Form 6: Parse trees and logical form structures. Stage 7: Building knowledge frames ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The Biological Knowledge Laboratory focuses on the analysis of research articles in the field of bacterial chemotaxis (Futrelle, 1989, 1990b). We are building a corpus consisting of the 1000 or so articles that make up the published record of the field since its inception in 1965. 
As the corpus is built, we are attempting to use syntactic and semantic analysis to convert the corpus to a knowledge base. But the texts are complex -- they have a superstructure that includes title, authors, abstract, sections, paragraphs, bibliography, etc.", "cite_spans": [ { "start": 118, "end": 141, "text": "(Futrelle, 1989, 1990b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "They also contain sub- and superscripts, italics, Greek letters, formulas, and references to figures, tables, and bibliographic items. Another major component of technical documents that has been ignored is graphics, which requires its own analysis; we have a separate project devoted to graphical analysis and understanding (Futrelle, 1990a).", "cite_spans": [ { "start": 324, "end": 341, "text": "(Futrelle, 1990a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "In this paper we describe procedures we have implemented and resources we have developed for preprocessing these complex documents. The preprocessing produces text which retains all important details of the original but is in a form that a conventional natural language parser can use without major modifications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "The preprocessing software runs in part under Unix (for LEX) and in part under Symbolics Genera 8.0, using their Statice database system for the lexicon. The Alvey Natural Language Toolkit (Briscoe, et al, 1987) is used for the subgrammar analysis. We have used Alvey on the Symbolics, on Suns and on Mac IIs. The systems described here are sentence-oriented, leaving to other software the task of organizing the structures above the sentence level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "Most research on natural language processing is restricted to text which does not contain complex orthography or has had it stripped away. This has prevented the application of computational linguistics to most technical documents, and technical documents are a huge and important repository of knowledge. Though our contribution is primarily a technical one, it is one that is sorely needed if progress is to be made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1." }, { "text": "To appreciate the type of problems that arise in text analysis, consider the various uses of a punctuation mark, the period. In the sentence \"Bacteria swim.\" the item \"swim.\" that includes the period is not a word; it is the word \"swim\" followed by end-sentence punctuation. On the other hand, the period in \"etc.\" is not (necessarily) a sentence end marker. The period in \"7.3\", however, is an integral part of the number. The comma is normally used to mark phrases and clauses, but it is used as an integral part of the number \"32,768\" or the chemical name \"2,6-diaminohexanoic acid\" (the essential amino acid, lysine). Superscripts can play the role of an isotopic indicator, \"³H\" for tritium, or a footnote².", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEMS AND THEIR SOLUTION", "sec_num": "2." }, { "text": "We have found a way to deal with all of these problems. 
The documents are first encoded (marked up) as they are entered by a trained editor/typist using an editor which supports the Standard Generalized Markup Language (SGML) (Bryan, 1988; van Herwijnen, 1990). The complex items in the marked-up text are then broken into their constituent tokens and selectively reassembled so that every token or contiguous sequence of tokens is resolved in some way. The resolution of a token sequence is done by first looking for the sequence in a phrasal lexicon. If found, the sequence is replaced by its lexical item. If a token sequence is not in the lexicon, an attempt is made to parse it using specialized subgrammars. If this fails, the item is flagged for analysis by a human editor or lexicographer to see if it is an error or a new lexical item.", "cite_spans": [ { "start": 226, "end": 239, "text": "(Bryan, 1988;", "ref_id": null }, { "start": 240, "end": 259, "text": "van Herwijnen, 1990", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEMS AND THEIR SOLUTION", "sec_num": "2." }, { "text": "The word \"salt\" is a single token entry in the lexicon. The sequence \"sodium chloride\" is a two token entry. The item \"CO₂\", which is represented by seven tokens, is found as a single item in the lexicon. But it is not appropriate to represent most numbers in the lexicon, because they form an essentially unbounded class³. For example, the number \"3.4×10⁻⁸\" (made up of 17 tokens) is not in the lexicon. It is analyzed by a subgrammar and found to be a legally formed number in scientific notation. The number is replaced by a structure which includes the lexical item \"$num$\", a noun which the natural language parser can deal with. After preprocessing, the text is passed on to a full natural language parser for syntactic and semantic (logical form) analysis. Currently, we use the GPSG-based parser from the Alvey toolkit for both subgrammar analysis and full natural language parsing (Briscoe, et al, 1987; Ritchie, et al, 1987).", "cite_spans": [ { "start": 898, "end": 920, "text": "(Briscoe, et al, 1987;", "ref_id": null }, { "start": 921, "end": 942, "text": "Ritchie, et al, 1987)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEMS AND THEIR SOLUTION", "sec_num": "2." }, { "text": "The processing sequence is outlined in Figure 1. Each stage can produce a file as output that can be the input to the next stage, so the analyses do not have to be synchronous. The preprocessing stages are stages 1-6.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 48, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "³ Certain numbers, such as cell strain designators or the familiar \"Boeing 747\", would be in the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "Stage 1: SGML encoding - SGML is an ISO standard (ISO 8879). SGML specifies a system in which tags and entities can be defined and used so that an arbitrarily complex text can be translated to a standard form which uses only the ASCII character set, so it can be disseminated widely and dealt with uniformly by a variety of systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." },
{ "text": "The encoding (markup) of the text is done using an SGML editor that makes the process efficient and checks that the text complies with our SGML syntax specifications, e.g., no sentence-start tag can be entered until the previous sentence-end tag has been entered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "Stage 2: Tokenization - The marked-up text is broken into tokens by the regular-expression analyzer generated by LEX. For each class of token, the original ASCII representation has been preserved, either by including the string itself or by using a Lisp symbol whose print representation is the ASCII representation. As an example, the outputs from tokenizing (4a) and (4b) are the 7 token sequence (5a) and the 20 token sequence (5b):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "(5a) <U.S> \"Cells\" \"were\" \"suspended\" \"in\" \"medium\" \"containing\" (5b) (num \"3\") nws (\".\") nws (num \"05\") nws |×| nws (num \"10\") nws <SUP> nws |−| nws (num \"2\") nws </SUP> |µ| nws \"M\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "The white spaces in the original text have been complemented to yield the nws symbol, which indicates that the tokenized elements were originally abutted. This is necessary for disambiguation of complex sequences, and it makes normal prose easier to read at this stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." },
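{ "text": "As a rough illustration of this stage (a sketch only: the production tokenizer is generated by LEX from regular expressions, and recognition of SGML tags and entities is omitted here), a Lisp tokenizer for the basic token classes might look like this:

(defun tokenize (text)
  ;; Sketch of Stage-2 tokenization: alphabetic runs become strings,
  ;; digit runs become (NUM string) pairs, any other character becomes
  ;; a parenthesized string, and the symbol NWS is emitted between
  ;; tokens that abut in TEXT with no intervening whitespace.
  (let ((tokens '()) (i 0) (n (length text)) (prev-end nil))
    (flet ((emit (token start)
             (when (eql prev-end start)   ; no whitespace separated them
               (push 'nws tokens))
             (push token tokens)))
      (loop while (< i n)
            do (let ((c (char text i)) (start i))
                 (cond ((member c '(#\Space #\Tab #\Newline))
                        (incf i))
                       ((alpha-char-p c)
                        (loop while (and (< i n) (alpha-char-p (char text i)))
                              do (incf i))
                        (emit (subseq text start i) start)
                        (setf prev-end i))
                       ((digit-char-p c)
                        (loop while (and (< i n) (digit-char-p (char text i)))
                              do (incf i))
                        (emit (list 'num (subseq text start i)) start)
                        (setf prev-end i))
                       (t                  ; punctuation and other specials
                        (incf i)
                        (emit (list (string c)) start)
                        (setf prev-end i))))))
    (nreverse tokens)))

Applied to the string \"3.05\", this sketch returns ((NUM \"3\") NWS (\".\") NWS (NUM \"05\")), the head of sequence (5b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." },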
{ "text": "Stage 3: Lexicon Lookup - At this point, a lexicon is consulted for each sequence of tokens contained in a title, section heading, sentence, etc. For our example, the token sequence generated from the full sentence (4) is handed to the lexicon lookup routine as the 73 token list (6):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "(6) (\"Cells\" \"were\" \"suspended\" ... <GK> nws \"a\" nws </GK> nws (\"-\") nws \"methylaspartate\" ... <RB> nws (num \"8\") nws </RB> nws (\".\")) (notice our ellipsis). The lexicon lookup stage attempts to match sequences of tokens from the input to items found in the lexicon. The lexicon is an extended phrasal lexicon, in which each lexical entry is a sequence of one or more tokens. Typical lexical items include \"cells\", \"sodium chloride\" and \"<GK>a</GK>-methylaspartate\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "Note that in the lexicon, the nws (no-white-space) tokens are removed by concatenation, for both storage and lookup. A lexical item L (one or more tokens) is a prefix if there are longer items in the lexicon (more tokens) with the same initial items as L. The first token of all items in the lexicon is listed as a separate entry. But some of these, and some multiple token entries, never function as independent stand-alone items, and they are noted as such in the lexicon. For example, the SGML tag tokens <GK> and <IT>, indicating that Greek and italicized characters follow, never function as separate items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "To efficiently and reliably find multi-token items, certain information is precomputed and stored in the lexicon. For example, the items \"sodium\", \"chloride\", \"sodium chloride\", \"sodium bromide\" and \"sodium iodide\" might all appear in the lexicon. When \"sodium chloride\" appears in the source text, it is that two-item entry that we want identified, not the two separate words. To assure that this happens, the prefix list ((3 2)) is computed and attached to \"sodium\". This says that there are 3 items of length 2 that begin with \"sodium\", so the next item in the source, \"chloride\", is attached, and the two-word item is found and returned by the lexicon lookup. Prefix lists can be complex, forming trees rooted at the initial item. The prefix lists prevent the search for a single item from continuing to the end of the sentence, because they put explicit bounds on the lengths of all items that could possibly match, given any prefix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." },
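{ "text": "This bookkeeping can be made concrete with a small sketch (the representation is hypothetical: the real lexicon is stored in a Statice database and returns full feature structures, and its prefix lists record counts as well as lengths, as in ((3 2)) above; here only the lengths are kept):

(defparameter *lexicon* (make-hash-table :test #'equal))

(defun add-entry (tokens)
  ;; Store TOKENS (a list of concatenated-token strings) as a lexical
  ;; entry, and record its length in the prefix information attached
  ;; to its first token.
  (setf (gethash tokens *lexicon*) t)
  (pushnew (length tokens)
           (gethash (list :prefix (first tokens)) *lexicon*)))

(defun lookup-longest (tokens)
  ;; Return the longest lexicon entry that starts at the head of
  ;; TOKENS, trying only the lengths licensed by the precomputed
  ;; prefix information, longest first.
  (loop for len in (sort (copy-list (gethash (list :prefix (first tokens))
                                             *lexicon*))
                         #'>)
        when (and (<= len (length tokens))
                  (gethash (subseq tokens 0 len) *lexicon*))
          return (subseq tokens 0 len)))

After (add-entry '(\"sodium\")) and (add-entry '(\"sodium\" \"chloride\")), the call (lookup-longest '(\"sodium\" \"chloride\" \"was\" \"added\")) returns (\"sodium\" \"chloride\") without ever scanning beyond the licensed lengths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." },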
}, { "text": "For our example (8) this would result in:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "(10) (\"$num$\" \"3.05×10− 2\" 3.05E-2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "The number structure consists of three fields. The first, \"$num$\", is a lexical item, the noun which represents all numbers. The parser for doing the later syntactic analysis of this sentence will access the feature-value list associated this noun. The second field contains the SGML encoding of the number. This can be used for displaying the number on th e screen. The third field contains a Lisp readable form of the number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "Another structure recognized by subgrammar analysis is the bibliographic reference, (7 e). \u2022 The structure produced by the analysis has the form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "When the token sequence from (7e) is recursively analyzed, the result is (12) (\"$bibref$\" \"8<1RB>\" ((\"$num$\" \"8\" 8)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(11) (\"$bibref$\" SGML-string List-of-contents)", "sec_num": null }, { "text": "In this example, th e bibliographic reference structure contains a number structure. In general, any sequence of lexical items, structures and unrecognized token streams can be placed in the List-of-contents for bibliographic references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(11) (\"$bibref$\" SGML-string List-of-contents)", "sec_num": null }, { "text": "Subgrammar analysis of expressions such as (8) involves first creating a stream without the \"??\" tokens and without the actual integers (\"3\", \"05\", \"10\" and \"2\") and with th e \"ordinary\" words replaced by simple placeholders, e.g. , \"$word$\". Critical elements such as nws, I − I , etc. are retained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(11) (\"$bibref$\" SGML-string List-of-contents)", "sec_num": null }, { "text": "Once this simplified stream is available, the parse is done according to th e subgrammar specialized for numbers, bibliographic references, etc. 
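{ "text": "As an illustration only (the actual subgrammars are context-free grammars run by the Alvey parser, and the exact placeholder conventions of the simplified stream are assumptions here), a deterministic recognizer for the simplified stream of a scientific-notation number could be sketched as:

(defun match (pattern stream)
  ;; Deterministically match PATTERN against STREAM, both lists of
  ;; symbols. The keyword :OPT-NEG matches an optional minus sign
  ;; followed by nws, so both positive and negative exponents pass.
  (cond ((null pattern) (null stream))
        ((eq (first pattern) :opt-neg)
         (if (and (eq (first stream) '|-|)
                  (eq (second stream) 'nws))
             (match (rest pattern) (cddr stream))
             (match (rest pattern) stream)))
        ((null stream) nil)
        ((eq (first pattern) (first stream))
         (match (rest pattern) (rest stream)))
        (t nil)))

(defun scientific-number-p (stream)
  ;; The shape of (8), with the stand-alone \".\" restored between the
  ;; two unknowns; |.| and |x| stand in for the period and times-sign
  ;; symbols of the real token stream.
  (match '(num nws |.| nws num nws |x| nws num nws
           <SUP> nws :opt-neg num nws </SUP>)
         stream))

On success, the translator assembles the structure (9) from the integers that were retained alongside the simplified stream, yielding (10) for our example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." },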
{ "text": "But the output of the subgrammar analysis must produce a new stream which includes forms such as those in (10) and (12), as well as all of the original words. To do this we take advantage of the compositional semantics built into the Alvey parser. The semantic attachment facilities in Alvey allow references to daughter nodes by number and the inclusion of simple lambda forms. In addition, arbitrary Lisp forms can be included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "Consider a sentence containing a conventional citation: \"Commonsense knowledge is discussed in (Davis, 1990).\" In the full natural language parsing (Stage 6) there will be additional categories and grammar rules to allow such structures to be treated properly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "When the translator generated by the semantic interpretation of the subgrammar parse is applied to (7a-e), the final form which results is (15a) (\"Cells\" \"were\" \"suspended\" \"in\" \"medium\" \"containing\" (15b) (\"$num$\" \"3.05&times;10<SUP>&minus;2</SUP>\" 3.05E-2) \"&micro;M\" (15c) \"<SCP>L</SCP>-[<IT>methyl</IT>-<SUP>3</SUP>H]-methionine\" \",\" (15d) \"<GK>a</GK>-methylaspartate\" (15e) \"and\" \"AIBU\" (\"$bibref$\" \"<RB>8</RB>\" ((\"$num$\" \"8\" 8))) \".\" )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "This preserves all of the details of the original text. Every form is an item, or contains an item, that can be found in the lexicon and one that will allow a proper screen display (cf. (16) below). Lisp forms of numbers and citation information are also available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "The subgrammars are simple and deterministic, so the parses are fast compared to the later full natural language parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROCESSING SEQUENCE", "sec_num": "3." }, { "text": "Natural language parsing cannot be done until all items are resolved by the lexicon, so unknown items are passed on to the editor and the lexicographer (humans). Errors in the original source and errors in our own re-entry can be caught at this stage. What remain are items that need to be added to the lexicon. These additions are made using the Lexicographer's Workbench, which is currently under development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 5: The Lexicographer's Workbench", "sec_num": null }, { "text": "In the Workbench a collection of analytical tools and heuristic procedures is used to tentatively classify new items, which are then presented to the lexicographer for simple approval or, more rarely, for special treatment. Morphological analysis is useful: e.g., certain classes of enzyme names have the suffixes \"tase\" or \"ase\", as in \"phosphatase\" or \"nuclease\". This means that new words can be analyzed and suggestions made as to their classification. Alvey has a sophisticated morphological analysis package, in which the rules are user-definable, that we are experimenting with (Ritchie, et al, 1987).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 5: The Lexicographer's Workbench", "sec_num": null },
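{ "text": "A minimal version of such a suffix heuristic (the suffix table and class names are illustrative, not the Workbench's actual rule set) could be coded as:

(defparameter *suffix-classes*
  '((\"tase\" . enzyme-noun)    ; e.g., \"phosphatase\"
    (\"ase\"  . enzyme-noun)))  ; e.g., \"nuclease\"

(defun suggest-class (word)
  ;; Return a tentative lexical class for WORD based on its suffix, or
  ;; NIL when no suffix matches. Any suggestion is only tentative and
  ;; is subject to review by the lexicographer.
  (cdr (assoc-if (lambda (suffix)
                   (and (>= (length word) (length suffix))
                        (string-equal suffix word
                                      :start2 (- (length word)
                                                 (length suffix)))))
                 *suffix-classes*)))

Here (suggest-class \"phosphatase\") returns ENZYME-NOUN, which the lexicographer can approve or override.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 5: The Lexicographer's Workbench", "sec_num": null },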
{ "text": "One difficult task is the identification of new phrasal items, a difficulty emphasized by Amsler (Amsler, 1989). For example, consider the case in which \"sodium\", \"chloride\", \"bromide\" and \"sodium chloride\" are in the lexicon but \"sodium bromide\" is not. If \"sodium bromide\" appeared in the input it would not even be flagged as an unknown. Nevertheless, we would want the Workbench to be provided with the heuristic that chemical name sequences are most likely chemical names themselves.", "cite_spans": [ { "start": 98, "end": 112, "text": "(Amsler, 1989)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Stage 5: The Lexicographer's Workbench", "sec_num": null }, { "text": "Thus the Workbench would make the decision itself and insert \"sodium bromide\" in the lexicon with the proper feature/value specs. This decision would, as all others, be subject to review by the lexicographer or application field specialist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 5: The Lexicographer's Workbench", "sec_num": null }, { "text": "Stage 6: Natural language parsing - When the lexical items are extracted from (15), the result is (16a) (\"Cells\" \"were\" \"suspended\" \"in\" \"medium\" \"containing\" ... ). This is the input to the full GPSG-based parser, whose grammar is intended to cover the scientific prose in our corpus. This is work in progress. A semantics for this large grammar is under development (C. Grover, personal communication). In addition, a more efficient, LR(1) parser is being built to improve the speed of the full parses. Then the logical forms produced by parsing would be used as input to a system which generates instances of the appropriate knowledge frames representing the sentences. (This is also work in progress.) Furthermore, these knowledge frames can be connected together into superstructures representing coherent arguments for or against a given proposition. Taken together, these frame instances and their connecting frames compose the knowledge base which would underlie our \"Scientist's Assistant\" system, a system for answering both general and specific queries about the contents and arguments that are to be found in our corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stage 6: Natural language parsing", "sec_num": null }, { "text": "Because of the complexities of technical text notation and the availability of a comprehensive standard, we decided to use SGML for text markup. Then we designed a token-based phrasal lexicon for resolving the complex items generated by the markup. This lexicon is robust because it handles everything from simple words to complex multi-word chemical names containing Greek letters, commas, superscripts and more. In addition, our subgrammar analysis handles unbounded class items that cannot be accommodated in the lexicon, such as numbers in scientific notation and bibliographic references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." }, { "text": "The work closest to ours is the preprocessing done for the LOB corpus (Booth, 1987). Unfortunately, the SGML standard was not available to that project at the time, so they had to invent their own orthographic coding schemes and a pre-editing phase, similar to ours, to break the text into taggable units. There are many differences between the projects. One of these is in the design of the lexicon. The LOB group decided to develop a compact lexicon which includes only the base forms. 
Possessives or contracted forms such as \"Smith's\" or \"it's\" are not included. Because secondary storage is rapidly becoming less expensive, and because modern database and file structure designs allow very rapid access to large lexicons, we have opted for a very \"flat\" lexicon in which every variant form encountered in the corpus is stored as a separate entry. This includes capitalized words appearing at the beginning of sentences, etc. We add the variants of the base forms to the lexicon only when they are found in our corpus. Our own statistical analysis of large corpora such as the Brown Corpus shows that the inclusion of these variant forms will probably add no more than 50% to the lexicon size over a lexicon that has only the base forms.", "cite_spans": [ { "start": 70, "end": 83, "text": "(Booth, 1987)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." }, { "text": "If we had only included base forms, then other difficulties would crop up in attempting to map between found entities and the base forms. We avoid these difficulties by including the variant forms and flagging them to indicate their usage restrictions. We would flag \"There\" as a form only expected as sentence initial (and fully equivalent to \"there\"), whereas \"DNA\" would only be expected in fully capitalized form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." },
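{ "text": "In a flat lexicon this amounts to storing something like the following with each variant entry (the field names are hypothetical; the real entries carry full feature/value specifications in the Statice database):

(defparameter *variant-entries*
  '((\"There\" :base \"there\" :restriction :sentence-initial-only)
    (\"there\" :base \"there\" :restriction nil)
    (\"DNA\"   :base \"DNA\"   :restriction :fully-capitalized-only)))

(defun usage-restriction (form)
  ;; Look up the usage restriction recorded for a variant FORM, if any.
  (getf (rest (assoc form *variant-entries* :test #'string=))
        :restriction))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." },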
{ "text": "Another major activity in text encoding is the Text Encoding Initiative or TEI (Sperberg-McQueen and Burnard, 1990). They have been focusing on text in the humanities, so they have been concerned with a different set of problems, such as encoding verse, stage directions, foreign language quotations, etc. Neither the TEI nor the LOB group seems to have directly faced the issue of how to interface the marked-up text with the available parsing technology, as we have.", "cite_spans": [ { "start": 80, "end": 116, "text": "(Sperberg-McQueen and Burnard, 1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." }, { "text": "SGML allows users to design their own set of tags, entities and rules, so we had to make some design decisions. Our design is constructed pragmatically, to make it usable by an editor/typist who is not a scientist. For instance, we have used a special tag <RB> for a bibliographic reference, which might be represented by a superscript or by the conventional \"(Shepard, 1978)\". And we have opted to use the simple superscript tag <SUP> for both algebraic exponents, as in \"3.05×10⁻²\", and isotope indicators, as in \"³H\". The subgrammar and the lexicon lookup, respectively, resolve these latter two items. This allows the typist to encode source text primarily on the basis of its appearance, rather than its semantic (scientific) content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." }, { "text": "We are constantly asked why we do not use OCR techniques (optical character recognition) or go directly to publishers for electronic versions of the papers in our corpus. Again, these are pragmatic decisions, peculiar to this point in time. Because OCR error rates are still relatively high, especially for technical text, and because OCR systems do little or no markup, we can produce accurate transcriptions and markup more cost effectively by having a skilled typist/editor rekey the text. Most of our corpus (covering 30 years) does not exist anywhere in electronic form, and the wide variety of proprietary schemes used by printing firms for electronic typesetting is a nightmare to untangle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." }, { "text": "In the future, technical word processing systems will be developed that will allow scientist authors to enter their text with the proper logical tagging, but without the system obtruding on their work. The systems we are developing will be able to take advantage of such electronic documents as they become available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." }, { "text": "Many authors have argued cogently and at length that multi-word items, idioms, punctuation and other complexities of real text require a comprehensive approach (Becker, 1975; Besemer and Jacobs, 1987; Amsler, 1989; Nunberg, 1988, 1990). The methods described here can serve as a foundation for any comprehensive system that must deal with the lexical, syntactic and semantic aspects of real-world technical text.", "cite_spans": [ { "start": 161, "end": 175, "text": "(Becker, 1975;", "ref_id": null }, { "start": 176, "end": 201, "text": "Besemer and Jacobs, 1987;", "ref_id": null }, { "start": 202, "end": 215, "text": "Amsler, 1989;", "ref_id": null }, { "start": 216, "end": 236, "text": "Nunberg, 1988, 1990)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "DISCUSSION", "sec_num": "4." }, { "text": "... or a bibliographic reference, as in, \"Smith found this effect earlier⁷.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank John Carroll and Claire Grover for discussions of the Alvey tools, including the semantic component.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACKNOWLEDGEMENTS", "sec_num": null }, { "text": "Amsler, Robert A. 1989. Research Toward the Development of a Lexical Knowledge Base for Natural Language Processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REFERENCES", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Compilers: Principles, Techniques, and Tools", "authors": [ { "first": "A", "middle": [], "last": "Aho", "suffix": "" }, { "first": "R", "middle": [], "last": "Sethi", "suffix": "" }, { "first": "J", "middle": [], "last": "Ullman", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aho, A.; Sethi, R. and Ullman, J. 1986. Compilers: Principles, Techniques, and Tools. Addison-Wesley Publishing Company, Inc., Reading, MA.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Schematic view of the successive stages of corpus processing. \"Form n\" lists typical items in the stream of text which result from the processing in Stage n and are the input to Stage n+1. There is not an absolutely tight correspondence between the items in successive forms in this figure, due to the complexity of the analysis. 
The underlined stages denote the preprocessing stages which are currently implemented and explained in some detail in this paper.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "", "uris": null }, "TABREF0": { "num": null, "html": null, "type_str": "table", "content": "<table/>
", "text": "the American Association of Publishers' (AAP) set, the Electronic Manuscript Standard (EMS): with the addition of our own. user defined tags such as the sentence tags, and . SGML is an ISO standard" }, "TABREF1": { "num": null, "html": null, "type_str": "table", "content": "
The example sentence - Here is the example sentence we will use to illustrate our preprocessing strategy. It is first presented as it might appear in a research article source, but laid out for easy comparison with the SGML form which follows:
(3a) Cells were suspended in medium containing
(3b) 3.05×10⁻² µM
(3c) L-[methyl-³H]-methionine,
(3d) α-methylaspartate
(3e) and AIBU⁸.
Here is the SGML encoding of the example sentence:
(4a) <U.S>Cells were suspended in medium containing
(4b) 3.05&times;10<SUP>&minus;2</SUP> &micro;M
(4c) <SCP>L</SCP>-[<IT>methyl</IT>-<SUP>3</SUP>H]-methionine,
(4d) <GK>a</GK>-methylaspartate
(4e) and AIBU <RB>8</RB>.</U.S>
The particular system we use is Author/Editor (Softquad, Toronto, Canada) running on Mac IIs.
Note that a token can be a parenthesized pair (for numbers), not just a contiguous sequence of non-blank characters.
Input Class          Output Format   Example Output
ASCII text strings   string          "Cells"
numbers              (num string)    (num "05")
special characters   (string)        (".") (",") ("(")
SGML tag             symbol          <U.S>
SGML entity          symbol          |&micro;|
no-white-space       nws             nws
", "text": "The input and output forms for the tokenization stage, Stage 2." } } } }