{ "paper_id": "C69-0701", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:32:54.837424Z" }, "title": "", "authors": [ { "first": "A", "middle": [ "J" ], "last": "Szanser", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Physical I~boratory", "location": { "settlement": "Teddington", "country": "England" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic error-correction in natural language processing is based on the principle of 'elastic matching'. Text words are segmented into 'lines' with letters arranged according to a predetermined sequence, and then matched line-by-line, shifts being applied if the numbers of lines are unequal.", "pdf_parse": { "paper_id": "C69-0701", "_pdf_hash": "", "abstract": [ { "text": "Automatic error-correction in natural language processing is based on the principle of 'elastic matching'. Text words are segmented into 'lines' with letters arranged according to a predetermined sequence, and then matched line-by-line, shifts being applied if the numbers of lines are unequal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In order to resolve the possible multiple choices produced, the method may be supplemented by another one, based on the observed repetition of words in natural texts, and also by syntactic analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This paper describes the above methods and gives an account of an experiment now in progress at the National Physical Laboratory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Elastic matchin~ ~ith increased application of computers in the processing of natural languages comes the need for correcting errors introduced by human operators at the input stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I.", "sec_num": null }, { "text": "A statistic investigation [1] revealed that roughly 80 per cent of all misspelled words contain only one error, belonging to one of the following cases: a letter missing, an extra letter, a wrong letter and finally two adjacent letters interchanged.", "cite_spans": [ { "start": 26, "end": 29, "text": "[1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "I.", "sec_num": null }, { "text": "As such an error can occur in any position, a check by trying all possible alternatives in turn is clearly impracticable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I.", "sec_num": null }, { "text": "A method which can obtain the same result but in a less tedious and time-constming way has been worked out and experimented upon at the National Physical Laboratory, Teddington, England.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "I.", "sec_num": null }, { "text": "This method, named'elastic matching' was first proposed at the 1968 I.F.I.P. 
The elastic matching of words consists basically of coding all the characters (letters) as bits in a computer word, allotting to each letter a specific position. The whole English alphabet is therefore represented by a sequence of 26 bits, although their order, as will be shown below, may, and indeed should, differ from the usual order of letters in the alphabet.

All words belonging to a complete set, which may be a list of words or a whole dictionary, are 'linearized', that is, converted into segments, called 'lines', in which the letters are arranged in the agreed order; if the current letter has a position prior to the last stored, a new line must be started. Thus, if the sequence in question were the alphabet itself, the word 'interest' (for example) would be linearized as 'int-er-est'. The actual sequence, by the way, has to be chosen in such a way that it produces the longest possible lines or, in other words, the minimum number of lines for a given sample of text.

The matching is carried out not between words but between lines. All errors will then stand out immediately as one or more disagreeing bits*. In the case of two bits a simple check will reveal whether this is the result of an accepted type of error (one wrong letter, or two adjacent letters interchanged), or of two separate errors, in which case the word is rejected under the accepted limit of one error per word.

* For example, by using the logical 'NOT-EQUIVALENT' operation.

In the examples shown below the alphabet has been assumed to be the linearizing sequence; this is done for the sake of clarity only.

    (a) Extra letter                (b) Letter missing
        M O R S T                       D I . T Y
        M O . S T                       D I R T Y
        disagreeing bit: R              disagreeing bit: R

    (c) Wrong letter                (d) Two errors (unacceptable)
        B . H N T                       B . N S T
        B E . N T                       B E N . T
        disagreeing bits: E, H          disagreeing bits: E, S

The result in (d) is unacceptable because the two disagreeing bits are formed as a result of two errors (an extra S and a missing E). In the computer check this is shown by the two outstanding bits (letters) being separated by another bit (letter).
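The linearization and matching steps just described are mechanical enough to sketch in code. The following Python fragment is an illustrative reconstruction, not the original NPL program: the function names are invented, and the plain alphabet is used as the linearizing sequence, as in the worked examples above.

```python
# Illustrative sketch of the 'elastic matching' primitives.
SEQ = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"      # linearizing sequence (alphabet, for clarity)
POS = {c: i for i, c in enumerate(SEQ)}

def linearize(word):
    """Split WORD into 'lines': runs of letters strictly increasing in the
    agreed sequence; a letter at or before the last stored starts a new line."""
    lines = [word[0]]
    for c in word[1:]:
        if POS[c] > POS[lines[-1][-1]]:
            lines[-1] += c
        else:
            lines.append(c)
    return lines

def line_bits(line):
    """Code a line as a 26-bit mask, one bit per letter of SEQ."""
    m = 0
    for c in line:
        m |= 1 << POS[c]
    return m

def disagreeing_bits(a, b):
    """Match two lines with the NOT-EQUIVALENT (XOR) operation; the set
    bits of the result are the disagreeing letters."""
    return line_bits(a) ^ line_bits(b)

def letters(mask):
    """List the letters whose bits are set in MASK, in sequence order."""
    return [c for c in SEQ if mask & (1 << POS[c])]

assert linearize("INTEREST") == ["INT", "ER", "EST"]
print(letters(disagreeing_bits("BHNT", "BENT")))  # ['E', 'H']: one wrong letter
print(letters(disagreeing_bits("BNST", "BENT")))  # ['E', 'S']: two errors, rejected
```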
If the numbers of lines in the two versions (misspelled and correct) are unequal, the procedure is as follows: the next line of the longer version is shifted back and matched against the result (that is, the disagreeing bits) of the previous match. Thus, for example,

    M O U | S T      (misspelled 'MOUST', two lines)
    M O S T          (correct 'MOST', one line)

Matching the first lines leaves the disagreeing bits U, S and T; shifting the line ST back and matching it against this result leaves the single bit U (an extra letter).

In the case of two disagreeing bits some simple checks have again to be made to eliminate the two-error cases, and also to prevent spurious matches resulting from the self-cancellation of characters between two successive lines of the same version. More particulars of the operation of this method can be found in a special paper [5].
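The shift rule can be sketched the same way. Because the NOT-EQUIVALENT (XOR) operation is associative, matching a pair of lines and then folding the shifted line of the longer version into the disagreeing bits amounts to one running XOR over all lines of both versions. The sketch below, again hypothetical and reusing linearize, line_bits and letters from the previous fragment, omits the intermediate two-error and self-cancellation checks just mentioned.

```python
def elastic_residue(word_a, word_b):
    """Final disagreeing bits of the elastic match, including the case of
    unequal line counts. XOR is associative, so folding each shifted line
    into the residue of the previous match reduces to one running XOR.
    The per-step checks described in the text are omitted from this sketch."""
    residue = 0
    for line in linearize(word_a) + linearize(word_b):
        residue ^= line_bits(line)
    return residue

# 'MOUST' linearizes as MOU | ST (two lines), 'MOST' as one line:
print(letters(elastic_residue("MOUST", "MOST")))  # ['U']: an extra letter
```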
2. The dictionary organization

Elastic matching, as was mentioned above, is applicable against any set of (correct) words, which may be, for example, a list of proper names, or any other words, even of artificial (e.g. programming) languages. It is, however, the application to natural languages, in particular English, which is the subject of this paper. There are two problems which have to be overcome or, at least, reduced to manageable proportions before this method can be applied using a complete English dictionary.

The first problem is access to the dictionary, which may contain tens or even hundreds of thousands of entries. This number, however, includes all grammatical forms of English words (fortunately, they are not so numerous as in highly inflected languages such as Russian). The dictionary look-up takes different forms depending on the way in which the dictionary is organized. The latter could have either a tree-like structure (preferably built of 'lines'), which is likely to be quicker in operation, or a list structure, in which words may be grouped by their line numbers, then by numbers of letters and finally, if the lists are still too long, by part-alphabetization (according to the accepted sequence). This structure is easier to prepare. The words to be checked against a dictionary of the list structure will be linearized, and during this process the numbers of their lines and letters will be determined. The sections of the dictionary to be used in the matching process will be those with equal numbers of lines (and letters) and those immediately below and above these numbers (depending on the error threshold accepted).

The other problem is connected with the number of multiple matches likely to occur, especially for short words. Two ways of alleviating this problem are described in the next section.
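As a rough illustration of the list structure just described, the sketch below groups a word list by numbers of lines and letters and retrieves the neighbouring sections for a one-error threshold. The grouping keys and the omission of part-alphabetization are simplifications for illustration, not details taken from the paper; linearize is the function from the earlier sketch.

```python
from collections import defaultdict

def build_index(dictionary_words):
    """List-structure organization: group entries by number of lines,
    then by number of letters (part-alphabetization omitted)."""
    index = defaultdict(list)
    for w in dictionary_words:
        index[(len(linearize(w)), len(w))].append(w)
    return index

def candidate_sections(index, word):
    """Sections worth matching against: same line and letter counts,
    plus the immediately neighbouring counts (one-error threshold)."""
    nl, nc = len(linearize(word)), len(word)
    candidates = []
    for dl in (-1, 0, 1):
        for dc in (-1, 0, 1):
            candidates.extend(index.get((nl + dl, nc + dc), []))
    return candidates
```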
3. The supplementary procedures

3.1 The general-content check

One possibility of choosing between the multiple equivalents produced by dictionary look-up is to select those which are repeated throughout the article or speech in question. For this purpose a procedure called the 'general-content check' has been devised. As the text is processed, each different word satisfying certain conditions is stored. All multiple results from dictionary look-up are then compared with the contents of this store (which may also be organized into sections), and words found there are given preference over the others. The idea behind this is, of course, that words tend to be repeated by one writer or speaker.

The size of the sample processed for the general-content check must be neither too small nor too large. The optimum size should be determined experimentally, but one may risk the guess that perhaps one to two thousand (current) text words are a practical amount.

Further, there is no need to store all the different words. Ideally, these should be the so-called 'content' words, such as nouns, verbs, adjectives and adverbs, whereas the remaining 'function' words (prepositions, conjunctions, etc.) should be left aside, as not being content-typical. The selection can easily be done in the storing process if dictionary entries are suitably marked. Also, if one grammatical form of a word is stored, there is no need to store the others, so that the general-content vocabulary may assume the character of a stem-word list. This, again, can conveniently be arranged both in storing and in matching.
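A minimal sketch of such a store, under the assumptions above (the content/function marking and the stemming are supplied by suitably marked dictionary entries), might look as follows; the class name, capacity and interface are invented for illustration.

```python
class GeneralContentStore:
    """Sketch of the general-content check: remember the stems of
    'content' words seen so far and prefer dictionary matches that
    repeat earlier vocabulary."""

    def __init__(self, capacity=2000):     # one to two thousand text words (Section 3.1)
        self.seen = set()
        self.capacity = capacity

    def note(self, word, is_content_word, stem):
        """Store a content word's stem while processing the text."""
        if is_content_word and len(self.seen) < self.capacity:
            self.seen.add(stem)

    def prefer(self, candidates, stem_of):
        """Among multiple look-up results, keep those whose stem has
        already occurred; fall back to the full list otherwise."""
        repeated = [w for w in candidates if stem_of(w) in self.seen]
        return repeated or candidates
```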
3.2 Syntactic analysis

Another possibility of making a choice between multiple equivalents is syntactic analysis. This is especially promising because, if one considers a typical lexical set of common words, one must notice that long words (which give, as a rule, better results in elastic matching) usually belong to the 'content' words, whereas the 'function' words, which are specially amenable to syntactic analysis, are normally short and would therefore either produce more multiple choices or, if of fewer than four letters, escape the elastic matching altogether*. In this way the two methods are largely complementary. More will be said below about the use of syntactic analysis in error-correction.

* This limit has been accepted.

Neither of the two supplementary methods mentioned above is applicable where elastic matching is used for non-textual material (lists of names, etc.).

4. An experiment in automatic error-correction

An experiment has been carried out at the NPL on the lines described above. First of all, an optimum linearizing sequence had to be established for English texts. Several methods were used for this purpose, both statistical and purely linguistic, and the results were submitted to computer tests. Sequences giving a lower yield were gradually eliminated and changes were made in those remaining, in order to determine the optimum sequence by the well-known 'hill-climbing' technique. This investigation has been fully described elsewhere [3], and it has produced the following sequence:

    FJVWMBPHIOQUEARLNXGSCKTDYZ
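The hill-climbing itself is not specified here beyond the yield criterion (the fewest lines for a given text sample), so the following sketch fills in conventional details, a random letter-swap move and a fixed step budget, purely as an assumption of how such a search might be run.

```python
import random

def total_lines(sequence, sample_words):
    """Yield criterion: the number of lines a candidate sequence
    produces over a text sample (fewer is better)."""
    pos = {c: i for i, c in enumerate(sequence)}
    count = 0
    for w in sample_words:
        count += 1                        # every word has at least one line
        for a, b in zip(w, w[1:]):
            if pos[b] <= pos[a]:          # a new line starts here
                count += 1
    return count

def hill_climb(sequence, sample_words, steps=10000):
    """Hill-climbing sketch: try letter swaps, keep improvements.
    The swap move and the step count are assumptions."""
    best = list(sequence)
    best_score = total_lines(best, sample_words)
    for _ in range(steps):
        i, j = random.sample(range(len(best)), 2)
        trial = best[:]
        trial[i], trial[j] = trial[j], trial[i]
        score = total_lines(trial, sample_words)
        if score < best_score:
            best, best_score = trial, score
    return "".join(best)
```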
Next, through lack of a proper dictionary, the general-content check procedure was used to compile lists of words occurring in selected stretches of English (parts of three articles on physics, linguistics and socio-politics, containing about 3,000 text words in all). Several hundred distorted words (based on words in the same articles) were matched against these vocabularies. After all the corrections and adjustments, the need for which naturally arose during the tests, had been made, the final results can be summarized as follows:

(i) The retrievals were both exact and complete, in the sense that no misspelled words (within the proper error limit) were left unretrieved and no wrong retrievals were produced;

(ii) The number of multiple equivalents increased rapidly as the lower limit of the number of letters in a word (four) was approached, in some cases reaching five equivalents;

(iii) The number of multiple equivalents was generally insignificant for 'content' words (in most cases only one word was retrieved), whereas 'function' words often produced many equivalents: one distorted form, for example, matched THEY, OTHER, THEN, THEM, TIMELY and THEIR.

All these observations confirmed the results anticipated in the previous sections.

The latest stage of the experiment is being carried out at the time of writing this paper (May 1969). The author is now able to use the English side of the Palantype-English dictionary (see Section 5) of about 80,000 entries. For the sake of economy in programming and machine time, only one section of the dictionary, namely the entries starting with the letter S, is being used. The linearization and organization of this section are now in progress. This will enable the author to test a more complete dictionary look-up than before, together with the general-content check and, later, with syntactic analysis as well.

5. Other applications

5.1

Apart from the general use for natural English texts, an application of the elastic matching technique has been proposed in the automatic transcription of machine-shorthand of the Palantype system. This system uses a special machine with a keyboard enabling the simultaneous striking of several keys, each 'stroke' corresponding to a phonetically-based group of consonants and vowels, roughly equivalent to a syllable. In normal operation all the characters of each stroke are printed together on a continuous paper band, which shifts after each stroke. The recording is later read and transcribed by a human operator. Since this latter part of the operation is naturally much slower (about four times the duration of the recording), a project now in progress at the NPL aims at securing automatic transcription, in which the character levers, in addition to the ordinary printing action, activate electric contacts. These create impulses which are fed into a computer and result, after a series of operations, in the printing out of a text as near to ordinary English as possible.

One of the problems encountered in this process is caused by the flexibility of the recording convention, which enables the human operator to record phonetic combinations in more than one way. Generally, this is provided for by inserting in the automatic Palantype-English dictionary all versions of each word that can reasonably be foreseen. In practice the unforeseen sometimes happens, and the word is output untranslated (but 'transliterated' phonetically), which is at best annoying but may even be unreadable. An analysis has shown that most of the deviations from the standard versions stored in the dictionary are caused by a few convention rules, such as 'vowel elision': any unaccented vowel in a word can be omitted.

Now, if the matching is done not on Palantype strokes but on their linearized versions, the elastic matching rules can easily be adjusted to include the versions so produced. Incidentally, the Palantype sequence is already partly linearized, and reads SCPTH + MFRNL YOEAUI . NLCMFRPT + SH (the '+' and '.' signs have special phonetic functions). For linearization purposes all that is needed is to exclude the repeated consonants (from the second 'N' to the end); the number of lines will therefore exceed the number of strokes.

The relevant procedures have been fully tested on sample lists of standard and non-standard versions (containing up to 300 words) and were found satisfactory. The implementation for use with the full dictionary, however, remains to be done. It is still not clear whether it would pay to linearize the complete dictionary of eighty-odd thousand entries and store it in this form, or whether it would be more practical to linearize while checking, stroke by stroke, which would of course be a much slower procedure. At present it does not look likely that either solution would make standardization possible in 'real time', but there remains the possibility of an 'errata' sheet being produced almost immediately after the normal output. More particulars about this application can be found in the paper [4].
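Under this reading of the key order, linearizing a stroke needs nothing beyond the rule already used for English words, applied to the shortened sequence. The sketch below is again illustrative: the order string is a reconstruction from the text above, and the sample stroke is invented.

```python
# Palantype key order with the repeated consonants (from the second 'N'
# to the end) excluded, so that each symbol has a single bit position.
# A stroke that uses the right-hand consonants then spans more than one
# line, which is why lines outnumber strokes.
PALANTYPE_ORDER = "SCPTH+MFRNLYOEAUI."
PPOS = {c: i for i, c in enumerate(PALANTYPE_ORDER)}

def linearize_stroke(stroke):
    """Linearize a Palantype stroke with the same rule as for words:
    a key at or before the last stored position starts a new line."""
    lines = [stroke[0]]
    for c in stroke[1:]:
        if PPOS[c] > PPOS[lines[-1][-1]]:
            lines[-1] += c
        else:
            lines.append(c)
    return lines

print(linearize_stroke("PAT"))   # ['PA', 'T']: one stroke, two lines
```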
~", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F.J. Damerau, \"Technique for computer detection and correction of spelling errors\", Comm. A.C.M. ~, (3), 1964.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Error-correcting methods in natural language processing", "authors": [ { "first": "A", "middle": [ "J" ], "last": "Szanser", "suffix": "" } ], "year": 1968, "venue": "IFIP Congress", "volume": "68", "issue": "", "pages": "15--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.J. Szanser, \"Error-correcting methods in natural language processing\", IFIP Congress 68, Edinburgh,,Lugust, 1968 (Booklet H, pp 15-19).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Error-correcting methods in natural language processing -II. Standardization of variants in the Palantype automatic transcription", "authors": [], "year": 1968, "venue": "COM.SCI. T.M. 12~ National PhysiealLaboratory", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "\"Error-correcting methods in natural language processing -I. Optimum letter sequence for longest strings in English\", COM.SCI. T.M. 12~ National PhysiealLaboratory, Teddington, England, May 1968. \"Error-correcting methods in natural language processing -II. Standardization of variants in the Palantype automatic transcrip- tion\", CO~LZ~CI. T.M. 16, April 1969. \"Error-correcting methods in natural language processing -III. 'm~lastic matching' technique in the processing of F~uglish\", COM.SCI. T.M. 21, April 1969 .", "links": null } }, "ref_entries": {} } }