{ "paper_id": "P98-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:16:43.219856Z" }, "title": "Towards a single proposal in spelling correction", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "", "affiliation": { "laboratory": "", "institution": "Systems University of the Basque Country", "location": { "addrLine": "649 P. K", "postCode": "E-20080", "settlement": "Donostia, Basque Country" } }, "email": "" }, { "first": "Koldo", "middle": [], "last": "Gojenola", "suffix": "", "affiliation": { "laboratory": "", "institution": "Systems University of the Basque Country", "location": { "addrLine": "649 P. K", "postCode": "E-20080", "settlement": "Donostia, Basque Country" } }, "email": "" }, { "first": "Kepa", "middle": [], "last": "Sarasola", "suffix": "", "affiliation": { "laboratory": "", "institution": "Systems University of the Basque Country", "location": { "addrLine": "649 P. K", "postCode": "E-20080", "settlement": "Donostia, Basque Country" } }, "email": "" }, { "first": "Atro", "middle": [], "last": "Voutilainen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": { "postBox": "P.O. Box 4 FIN", "postCode": "00014", "settlement": "Helsinki", "country": "Finland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The study presented here relies on the integrated use of different kinds of knowledge in order to improve first-guess accuracy in non-word context-sensitive correction for general unrestricted texts. State of the art spelling correction systems, e.g. ispell, apart from detecting spelling errors, also assist the user by offering a set of candidate corrections that are close to the misspelled word. Based on the correction proposals of ispell, we built several guessers, which were combined in different ways. Firstly, we evaluated all possibilities and selected the best ones in a corpus with artificially generated typing errors. Secondly, the best combinations were tested on texts with genuine spelling errors. The results for the latter suggest that we can expect automatic non-word correction for all the errors in a free running text with 80% precision and a single proposal 98% of the times (1.02 proposals on average). i lspell was used for the spell-checking and correction candidate generation. Its assets include broad-coverage and excellent reliability.", "pdf_parse": { "paper_id": "P98-1003", "_pdf_hash": "", "abstract": [ { "text": "The study presented here relies on the integrated use of different kinds of knowledge in order to improve first-guess accuracy in non-word context-sensitive correction for general unrestricted texts. State of the art spelling correction systems, e.g. ispell, apart from detecting spelling errors, also assist the user by offering a set of candidate corrections that are close to the misspelled word. Based on the correction proposals of ispell, we built several guessers, which were combined in different ways. Firstly, we evaluated all possibilities and selected the best ones in a corpus with artificially generated typing errors. Secondly, the best combinations were tested on texts with genuine spelling errors. The results for the latter suggest that we can expect automatic non-word correction for all the errors in a free running text with 80% precision and a single proposal 98% of the times (1.02 proposals on average). i lspell was used for the spell-checking and correction candidate generation. 
Its assets include broad coverage and excellent reliability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The problem of devising algorithms and techniques for automatically correcting words in text remains a research challenge. Existing spelling correction techniques are limited in their scope and accuracy. Apart from detecting spelling errors, many programs assist users by offering a set of candidate corrections that are close to the misspelled word. This is true for most commercial word-processors as well as the Unix-based spelling corrector ispell (1993). These programs tolerate lower first-guess accuracy by returning multiple guesses, allowing the user to make the final choice of the intended word. In contrast, some applications will require fully automatic correction for general-purpose texts (Kukich 1992). It is clear that context-sensitive spelling correction offers better results than isolated-word error correction. The underlying task is to determine the relative degree of well-formedness among alternative sentences (Mays et al. 1991). The question is what kind of knowledge (lexical, syntactic, semantic, ...) should be represented, utilised and combined to aid in this determination. This study relies on the integrated use of three kinds of knowledge (syntagmatic, paradigmatic and statistical) in order to improve first-guess accuracy in non-word context-sensitive correction for general unrestricted texts. Our techniques were applied to the corrections proposed by ispell.", "cite_spans": [ { "start": 451, "end": 457, "text": "(1993)", "ref_id": null }, { "start": 704, "end": 717, "text": "(Kukich 1992)", "ref_id": "BIBREF6" }, { "start": 937, "end": 955, "text": "(Mays et al. 1991)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Constraint Grammar (Karlsson et al. 1995) was chosen to represent syntagmatic knowledge. Its use as a part-of-speech tagger for English has been highly successful. Conceptual Density (Agirre and Rigau 1996) is the paradigmatic component chosen to discriminate semantically among potential noun corrections. This technique measures \"affinity distance\" between nouns using WordNet (Miller 1990). Finally, general and document word-occurrence frequency rates complete the set of knowledge sources combined. We knowingly did not use any model of common misspellings, the main reason being that we did not want to use knowledge about the error source. This work focuses on language models, not error models (typing errors, common misspellings, OCR mistakes, speech recognition mistakes, etc.). The system was evaluated against two sets of texts: artificially generated errors from the Brown corpus (Francis and Kucera 1967) and genuine spelling errors from the Bank of English (http://titania.cobuild.collins.co.uk/boe_info.html). The remainder of this paper is organised as follows. Section 1 presents the techniques to be evaluated and the way to combine them. Section 2 describes the experiments and shows the results, which are evaluated in section 3. Section 4 compares other relevant work in context-sensitive correction.", "cite_spans": [ { "start": 19, "end": 41, "text": "(Karlsson et al. 
1995)", "ref_id": "BIBREF4" }, { "start": 183, "end": 206, "text": "(Agirre and Rigau 1996)", "ref_id": "BIBREF0" }, { "start": 379, "end": 391, "text": "(Miller 1990", "ref_id": "BIBREF8" }, { "start": 894, "end": 919, "text": "(Francis and Kucera 1967)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The basic techniques", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1", "sec_num": null }, { "text": "Constraint Grammar was designed with the aim of being a language-independent and robust tool to disambiguate and analyse unrestricted texts. CG grammar statements are close to real text sentences and directly address parsing problems such as ambiguity. Its application to English (ENGCG 3) resulted a very successful part of speech tagger for English. CG works on a text where all possible morphological interpretations have been assigned to each word-form by the ENGTWOL morphological analyser (Voutilainen and Heikkil~i 1995) . The role of CG is to apply a set of linguistic constraints that discard as many alternatives as possible, leaving at the end almost fully disambiguated sentences, with one morphological or syntactic interpretation for each word-form. The fact that CG tries to leave a unique interpretation for each word-form makes the formalism adequate to achieve our objective.", "cite_spans": [ { "start": 495, "end": 527, "text": "(Voutilainen and Heikkil~i 1995)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Constraint Grammar (CG)", "sec_num": "1.1" }, { "text": "The text data was input to the morphological analyser. For each unrecognised word, ispell was applied, placing the morphological analyses of the correction proposals as alternative interpretations of the erroneous word (see example 1). EngCG-2 morphological disambiguation was applied to the resulting texts, ruling out the correction proposals with an incompatible POS (cf. example 2). We must note that the broad coverage lexicons of ispell and ENGTWOL are independent. This caused the correspondence between unknown words and ispell's proposals not to be one to one with those of the EngCG-2 morphological analyser, especially in compound words. Such problems were solved considering that a word was correct if it was covered by any of the lexicons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Application of Constraint Grammar", "sec_num": null }, { "text": "3 A recent version of ENGCG, known as EngCG-2, can be tested at http://www.conexor.fi/analysers.html The discrimination of the correct category is unable to distinguish among readings belonging to the same category, so we also applied a wordsense disambiguator based on Wordnet, that had already been tried for nouns on free-running text. In our case it would choose the correction proposal semantically closer to the surrounding context. It has to be noticed that Conceptual Density can only be applied when all the proposals are categorised as nouns, due to the structure of Wordnet. Frequency data was calculated as word-form frequencies obtained from the document where the error was obtained (Document frequency, DF) or from the rest of the documents in the whole Brown Corpus (Brown frequency, BF). 
The experiments showed that word-form frequencies were better suited to the task than frequencies on lemmas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conceptual Density (CD)", "sec_num": "1.2" }, { "text": "We eliminated proposals beginning with an uppercase character when the erroneous word did not begin with uppercase and there were alternative proposals beginning with lowercase. In example 1, the fourth reading for the misspelling \"bos\" was eliminated, as \"Bose\" would be at an editing distance of two from the misspelling (heuristic H1). This heuristic proved very reliable, and it was used in all experiments. After obtaining the first results, we also noticed that words with fewer than 4 characters, like \"si\" and \"teh\" (misspellings of \"is\" and \"the\"), produced too many proposals, which were difficult to disambiguate. As they were one of the main error sources for our method, we also evaluated the results excluding them (heuristic H2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other interesting heuristics (H1, H2)", "sec_num": "1.4" }, { "text": "We considered all the possible combinations among the different techniques, e.g. CG+BF, BF+DF, and CG+DF. The weight of the vote can be varied for each technique, e.g. CG could have a weight of 2 and BF a weight of 1 (we will represent this combination as CG2+BF1). This would mean that the BF candidate(s) will only be chosen if CG does not select another option or if CG selects more than one proposal. Several combinations of weights were tried. This simple method to combine the techniques could be improved using optimisation algorithms to choose the best weights among fractional values. Nevertheless, we did some trials weighting each technique with its expected precision, and no improvement was observed. As the best combination of techniques and weights for a given set of texts can vary, we separated the error corpora into two, trying all the possibilities on the first half, and testing the best ones on the second half (cf. section 2.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination of the basic techniques using votes", "sec_num": "1.5" }, { "text": "Based on each kind of knowledge, we built simple guessers and combined them in different ways. In the first phase, we evaluated all the possibilities and selected the best ones on part of the corpus with artificially generated errors. Finally, the best combinations were tested against the texts with genuine spelling errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The experiments", "sec_num": "2" }, { "text": "We chose two different corpora for the experiment. The first one was obtained by systematically generating misspellings from a sample of the Brown Corpus, and the second one was a raw text with genuine errors. While the first one was ideal for experimenting, allowing for automatic verification, the second one offered a realistic setting. As we said before, we are testing language models, so both kinds of data are appropriate. The corpora with artificial errors, artificial corpora for short, have the following features: a sample was extracted from SemCor (a subset of the Brown Corpus) selecting 150 paragraphs at random. This yielded a seed corpus of 505 sentences and 12659 tokens. To simulate spelling errors, a program named antispell, which applies Damerau's rules at random, was run, giving an average of one spelling error for every 20 words (non-words were left untouched). 
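The paper does not include antispell itself; the following is a minimal sketch of such a generator under our own assumptions (names like `damerau_corrupt` and `antispell_like` are hypothetical). It applies one of Damerau's four operations to roughly one alphabetic token in twenty:

```python
import random
import string

def damerau_corrupt(word, rng):
    """Apply one random Damerau operation to `word`: deletion, insertion,
    substitution, or transposition of adjacent characters."""
    i = rng.randrange(len(word))
    op = rng.choice(["delete", "insert", "substitute", "transpose"])
    if op == "delete" and len(word) > 1:
        return word[:i] + word[i + 1:]
    if op == "insert":
        return word[:i] + rng.choice(string.ascii_lowercase) + word[i:]
    if op == "substitute":
        return word[:i] + rng.choice(string.ascii_lowercase) + word[i + 1:]
    if i + 1 < len(word):  # transpose two adjacent characters
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word

def antispell_like(tokens, rate=1 / 20, seed=0):
    """Corrupt alphabetic tokens with probability `rate` (about one error
    per 20 words); non-word tokens are left untouched. Note there is no
    check that the result is a non-word, which is consistent with the
    23.5% of random misspellings reported to be real words (table 1)."""
    rng = random.Random(seed)
    return [damerau_corrupt(t, rng) if t.isalpha() and rng.random() < rate else t
            for t in tokens]

print(" ".join(antispell_like("she sells sea shells by the sea shore".split(), rate=0.3)))
```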
Antispell was run 8 times on the seed corpus, creating 8 different corpora with the same text but different errors. Nothing was done to prevent two errors in the same sentence, and some paragraphs did not contain any error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The error corpora", "sec_num": "2.1" }, { "text": "The corpus of genuine spelling errors, which we also call the \"real\" corpus for short, was magazine text from the Bank of English Corpus, which probably had not been previously spell-checked (it contained many misspellings), so it was a good source of errors. Added to the difficulty of obtaining texts with real misspellings, there is the problem of marking the text and selecting the correct proposal for automatic evaluation. As mentioned above, the artificial-error corpora were divided into two subsets. The first one was used for training purposes4. Both the second half and the \"real\" texts were used for testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The error corpora", "sec_num": "2.1" }, { "text": "The two corpora were passed through ispell, and for each unknown word, all its correction proposals were inserted. Table 1 shows that, when misspellings are generated at random, 23.5% of them are real words, which fall outside the scope of this work. Although we did not make a similar count for the real texts, we observed that a similar percentage can be expected. ", "cite_spans": [], "ref_spans": [ { "start": 114, "end": 121, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data for each corpus", "sec_num": "2.2" }, { "text": "We mainly considered three measures:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.3" }, { "text": "\u2022 coverage: the number of errors for which the technique yields an answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.3" }, { "text": "\u2022 precision: the number of errors for which the correct proposal is among the selected ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "2.3" }, { "text": "\u2022 remaining proposals: the average number of selected proposals. Table 2 shows the results on the training corpora. For the sake of brevity, we omit many of the combinations we tried. As a baseline, we show the results when the selection is done at random. Heuristic H1 is applied in all cases, while tests are performed with and without heuristic H2. (Table 7, below, gives the corresponding results on errors with multiple proposals for the \"real\" corpus.) If we focus on the errors for which ispell generates more", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 2", "ref_id": null }, { "start": 412, "end": 419, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "2.3" }, { "text": "than one correction proposal (cf. table 3), we get a better estimate of the contribution of each guesser. There were 8.26 proposals per word in the general case, and 3.96 when H2 was applied.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 43, "text": "table 3", "ref_id": null } ], "eq_spans": [], "section": "Search for the best combinations", "sec_num": "2.3.1" }, { "text": "The results for all the techniques are well above the random baseline. The single best techniques are DF and CG. CG shows good results on precision, but fails to choose a single proposal. H2 raises the precision of all techniques at the cost of losing coverage. 
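As a concrete rendering of the voting scheme of section 1.5, here is a minimal sketch under our own assumptions (the function name and example data are hypothetical, not the authors' implementation). Each guesser casts its weight for every proposal it leaves, and the top-scoring proposals survive; a lower-weighted guesser therefore only decides when the higher-weighted one abstains or leaves a tie, as described above:

```python
from collections import defaultdict

def combine_votes(guesses, weights):
    """Weighted voting over guessers.
    `guesses` maps a technique name to the set of proposals it left
    (an empty set means the guesser gave no answer); `weights` maps
    names to vote weights, e.g. {"CG": 2, "BF": 1} for CG2+BF1.
    Returns the proposal(s) with the highest total score."""
    scores = defaultdict(int)
    for name, proposals in guesses.items():
        for proposal in proposals:
            scores[proposal] += weights.get(name, 0)
    if not scores:
        return []
    top = max(scores.values())
    return sorted(p for p, s in scores.items() if s == top)

# CG2+BF1: BF only decides here because CG left a tie between two readings.
print(combine_votes({"CG": {"boss", "bops"}, "BF": {"boss"}},
                    {"CG": 2, "BF": 1}))  # -> ['boss']
```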
CD is the weakest of all techniques, and we did not test it with the other corpora. Regarding the combinations, CG1+DF2+H2 gets the best precision overall, but it only gets 52% coverage, with 1.43 remaining proposals. Nearly 100% coverage is attained by the combinations without H2, with the highest precision for CG1+DF2 (83% precision, 1.28 proposals).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search for the best combinations", "sec_num": "2.3.1" }, { "text": "In the second phase, we evaluated the best combinations on another corpus with artificial errors. Tables 4 and 5 show the results, which agree with those obtained in 2.3.1. The percentages are slightly lower, but always in parallel.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 112, "text": "Tables 4 and 5", "ref_id": null } ], "eq_spans": [], "section": "Validation of the best combinations", "sec_num": "2.3.2" }, { "text": "As a final step we evaluated the best combinations on the corpus with genuine typing errors. Table 6 shows the overall results obtained, and table 7 the results for errors with multiple proposals. For the latter there were 6.62 proposals per word in the general case (about two fewer than in the artificial corpus), and 4.84 when heuristic H2 was applied (about one more than in the artificial corpus). These tables are discussed further in the following section.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 100, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Corpus of genuine errors", "sec_num": "2.3.3" }, { "text": "This section reviews the results obtained. The results for the \"real\" corpus are evaluated first, and the comparison with the other corpora comes later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "Concerning the application of each of the simple techniques separately6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 Any of the guessers performs much better than random.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 DF has a high precision (75%) at the cost of a low coverage (12%). The difference in coverage compared to the artificial error corpora (84%) is mainly due to the smaller size of the documents in the real error corpus (around 50 words per document). For medium-sized documents we expect a coverage similar to that of the artificial error corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 BF offers lower precision (54%) but broad coverage (96%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 CG presents 62% precision with nearly 100% coverage, but at the cost of leaving many proposals (2.45).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 The use of CD works only with a small fraction of the errors, giving modest results. 
The fact that it was only applied a few times prevents us from drawing further conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "Combining the techniques, the results improve:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 The CG1+DF2 combination offers the best results in coverage (100%) and precision (70%) for all tests. As can be seen, CG raises the coverage of the DF method, at the cost of also increasing the number of proposals (1.99) per erroneous word. Had the coverage of DF been higher, the number of proposals for this combination would have decreased, approaching that of the artificial error corpora (1.28).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 The CG1+DF1+BF1 combination provides the same coverage with nearly one proposal per word, but precision decreases to 55%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "\u2022 If full coverage is not necessary, the use of the H2 heuristic raises the precision by at least 4% for all combinations. When comparing these results with those of the artificial errors, the precisions in tables 2, 4 and 6 can be misleading. The reason is that the coverage of some techniques varies, and the precision varies accordingly. For instance, coverage of DF is around 70% for real errors and 90% for artificial errors, while precisions are 93% and 89% respectively (cf. tables 6 and 2). This increase in precision is not due to better performance of DF (in fact, the contrary can be deduced from tables 3 and 7), but is explained by the fact that the lower the coverage, the higher the proportion of errors with a single proposal, and therefore the higher the precision. The comparison between tables 3 and 7 is more revealing. The performance of all techniques drops in table 7. Precision of CG and BF drops 15 and 20 points respectively. DF goes down 20 points in precision and 50 points in coverage. This latter degradation is not surprising, as the length of the documents in this corpus is only 50 words on average. Had we had access to medium-sized documents, we would expect a coverage similar to that of the artificial error corpora. The best combinations hold for the \"real\" texts, as before. The highest precision is for CG1+DF2 (with and without H2). The number of proposals left is higher in the \"real\" texts than in the artificial ones (1.99 to 1.28). This can be explained because DF does not manage to cover all errors, which leaves many CG proposals untouched. We think that the drop in performance for the \"real\" texts was caused by several factors. First of all, we already mentioned that the size of the documents strongly affected DF. Secondly, the nature of the errors changes: the algorithm to produce spelling errors was biased in favour of frequent words, mostly short ones. We will have to analyse this question further, especially regarding the origin of the natural errors. Lastly, BF frequencies were collected from the Brown corpus of American English, while the \"real\" texts come from the Bank of English. Presumably, this mismatch also negatively affected the performance of these algorithms. Returning to table 6, the figures reveal what the output of the correction system would be. 
Either we get a single proposal 98% of the time (1.02 proposals left on average) with 80% precision for all non-word errors in the text (CG1+DF1+BF1), or we get a higher precision of 90% with 89% coverage and an average of 1.43 proposals (CG1+DF2+H2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of results", "sec_num": "3" }, { "text": "Comparison with other context-sensitive correction systems. There is not much literature about automatic spelling correction with a single proposal. Menezo et al. (1996) present a spelling/grammar checker that adjusts its strategy dynamically, taking into account different lexical agents (dictionaries, ...), the user and the kind of text. Although no quantitative results are given, this is in accord with using document and general frequencies. Mays et al. (1991) present the initial success of applying word-trigram conditional probabilities to the problem of context-based detection and correction of real-word errors. Yarowsky (1994) experiments with the use of decision lists for lexical ambiguity resolution, using context features like local syntactic patterns and collocational information, so that multiple types of evidence are considered in the context of an ambiguous word. In addition to word-forms, the patterns involve POS tags and lemmas. The algorithm is evaluated on a missing-accent restoration task for Spanish and French text, against a predefined set of a few words, giving an accuracy of over 99%. Golding and Schabes (1996) propose a hybrid method that combines part-of-speech trigrams and context features in order to detect and correct real-word errors. They present an experiment where their system has substantially higher performance than the grammar checker in MS Word, but its coverage is limited to eighteen particular confusion sets composed of two or three similar words (e.g. weather, whether).", "cite_spans": [ { "start": 147, "end": 167, "text": "Menezo et al. (1996)", "ref_id": "BIBREF9" }, { "start": 446, "end": 464, "text": "Mays et al. (1991)", "ref_id": "BIBREF7" }, { "start": 622, "end": 637, "text": "Yarowsky (1994)", "ref_id": "BIBREF10" }, { "start": 1115, "end": 1141, "text": "Golding and Schabes (1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "The last three systems rely on a previously collected set of confusion sets (sets of similar words or accentuation ambiguities). In contrast, our system has to choose a single proposal for any possible spelling error, and it is therefore impossible to collect the confusion sets (i.e. sets of proposals for each spelling error) beforehand. We also need to correct as many errors as possible, even if the amount of data for a particular case is scarce.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "This work presents a study of different methods that build on the correction proposals of ispell, aiming to give a single correction proposal for misspellings. One of the difficult aspects of the problem is evaluating the results. For that reason, we used both a corpus with artificially generated errors for training and testing, and a corpus with genuine errors for testing. Examining the results, we observe that they improve as more context is taken into account. The word-form frequencies serve as a crude but helpful criterion for choosing the correct proposal. 
Precision increases as closer contexts, such as document frequencies and Constraint Grammar, are incorporated. From the results on the corpus of genuine errors we can conclude the following. Firstly, the correct word is among ispell's proposals 100% of the time, which means that all errors can be recovered. Secondly, our present system can be expected to correct spelling errors automatically with either 80% precision at full coverage, or 90% precision at 89% coverage leaving an average of 1.43 proposals. Two of the techniques proposed, Brown Frequencies and Conceptual Density, did not yield useful results. CD only works for a very small fraction of the errors, which prevents us from drawing further conclusions. There are reasons to expect better results in the future. First of all, the corpus with genuine errors contained very short documents, which caused the performance of DF to degrade substantially. Further tests with longer documents should yield better results. Secondly, we collected frequencies from an American English corpus to correct British English texts. Once this language mismatch is solved, better performance should be obtained. Lastly, there is room for improvement in the techniques themselves. We knowingly did not use any model of common misspellings. Although we expect limited improvement, stronger methods to combine the techniques can also be tried. Continuing with our goal of attaining a single proposal as reliably as possible, we will focus on short words, and we plan to include more syntactic and semantic context by means of collocational information. This step raises questions about the size of the corpora needed to obtain such data and the space needed to store the information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "4 In fact, there is no training in the statistical sense; it just involves choosing the best alternatives for voting. 5 As we focused on non-word errors, there is no count of real-word errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "6 Unless explicitly noted, the figures and comments refer to the \"real\" corpus (table 7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by the Basque Government, the University of the Basque Country and the CICYT (Comisión Interministerial de Ciencia y Tecnología).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word sense disambiguation using conceptual density", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 1996, "venue": "Proc. of COLING-96", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre E. and Rigau G. (1996) Word sense disambiguation using conceptual density. In Proc. of COLING-96, Copenhagen, Denmark.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Combining trigram-based and feature-based methods for context-sensitive spelling correction", "authors": [ { "first": "A", "middle": [], "last": "Golding", "suffix": "" }, { "first": "", "middle": [ "Y" ], "last": "Schabes", "suffix": "" } ], "year": 1996, "venue": "Proc. 
of the 34th ACL Meeting", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Golding A. and Schabes Y. (1996) Combining trigram-based and feature-based methods for context-sensitive spelling correction. In Proc. of the 34th ACL Meeting, Santa Cruz, CA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Computational Analysis of Present-Day American English", "authors": [ { "first": "S", "middle": [], "last": "Francis", "suffix": "" }, { "first": "H", "middle": [], "last": "Kucera", "suffix": "" } ], "year": 1967, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis S. and Kucera H. (1967) Computational Analysis of Present-Day American English. Brown Univ. Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Constraint Grammar: a Language Independent System for Parsing Unrestricted Text", "authors": [ { "first": "F", "middle": [], "last": "Karlsson", "suffix": "" }, { "first": "A", "middle": [], "last": "Voutilainen", "suffix": "" }, { "first": "J", "middle": [], "last": "Heikkilä", "suffix": "" }, { "first": "A", "middle": [], "last": "Anttila", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karlsson F., Voutilainen A., Heikkilä J. and Anttila A. (1995) Constraint Grammar: a Language Independent System for Parsing Unrestricted Text. Mouton de Gruyter.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Two-level Morphology: A general Computational Model for Word-Form Recognition and Production", "authors": [ { "first": "K", "middle": [], "last": "Koskenniemi", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koskenniemi K. (1983) Two-level Morphology: A general Computational Model for Word-Form Recognition and Production. University of Helsinki.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Techniques for automatically correcting words in text", "authors": [ { "first": "K", "middle": [], "last": "Kukich", "suffix": "" } ], "year": 1992, "venue": "ACM Computing Surveys", "volume": "24", "issue": "", "pages": "377--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kukich K. (1992) Techniques for automatically correcting words in text. In ACM Computing Surveys, Vol. 24, N. 4, December, pp. 377-439.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Context based spelling correction", "authors": [ { "first": "E", "middle": [], "last": "Mays", "suffix": "" }, { "first": "F", "middle": [], "last": "Damerau", "suffix": "" }, { "first": "", "middle": [ "R" ], "last": "Mercer", "suffix": "" } ], "year": 1991, "venue": "Information Processing & Management", "volume": "27", "issue": "", "pages": "517--522", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mays E., Damerau F. and Mercer R. (1991) Context based spelling correction. Information Processing & Management, Vol. 27, N. 5, pp. 517-522.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Five papers on WordNet. Special Issue of the Int", "authors": [ { "first": "G", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "Journal of Lexicography", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller G. (1990) Five papers on WordNet. Special Issue of the Int. Journal of Lexicography, Vol. 3, N. 
4.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Reconnaisances pluri-lexicales dans CELINE, un syst~me multi-agents de drtection et correction des erreurs", "authors": [ { "first": "J", "middle": [], "last": "Menezo", "suffix": "" }, { "first": "D", "middle": [], "last": "Genthial", "suffix": "" }, { "first": "J", "middle": [], "last": "Courtin", "suffix": "" } ], "year": 1996, "venue": "NLP + IA", "volume": "96", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Menezo J., Genthial D. and Courtin J. (1996) Reconnaisances pluri-lexicales dans CELINE, un syst~me multi-agents de drtection et correction des erreurs. NLP + IA 96, Moncton, N. B., Canada.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Decision lists for lexical ambiguity resolution", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 32nd ACL Meeting", "volume": "", "issue": "", "pages": "88--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky D. (1994) Decision lists for lexical ambiguity resolution. In Proceedings of the 32nd ACL Meeting, Las Cruces, NM, pp.88-95.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "content": "
 | 1st half | 2nd half | "real"
words | 47584 | 47584 | 39732
errors | 1772 | 1811 | 
non real-word errors | 1354 | 1403 | 365
ispell proposals | 7242 | 8083 | 1257
words with multiple proposals | 810 | 852 | 15~
long word errors (H2) | 968 | 98C | 33
proposals for long words (H2) | 2245 | 2313 | 80~
long word errors (H2) with multiple proposals | 430 | 425 | 124
", "html": null, "text": "For the texts with genuine errors, the method used in the selection of the misspellings was the following: after applying ispell, no correction was found for 150 words (mainly proper nouns and foreign words), and there were about 300 which This left 369 erroneous word-forms. After examining them we found that the correct word-form was among ispell's proposals, with very few exceptions. Regarding the selection among the different alternatives for an erroneous word-form, we can see that around half of them has a single proposal. This gives a measure of the work to be done. For example, in the real error", "type_str": "table" }, "TABREF2": { "num": null, "content": "
 | Cover. % | Prec. % | #prop.
Basic techniques
random baseline | 100.00 | 69.92 | 1.00
random+H2 | 89.70 | 75.47 | 1.00
CG | 99.19 | 84.15 | 1.61
CG+H2 | 89.43 | 90.30 | 1.57
DF | 70.19 | 93.05 | 1.02
DF+H2 | 61.52 | 97.80 | 1.00
BF | 98.37 | 80.99 | 1.00
BF+H2 | 88.08 | 85.54 | 1.00
Combinations
CG1+DF2 | 100.00 | 87.26 | 1.42
CG1+DF2+H2 | 89.70 | 90.94 | 1.43
CG1+DF1+BF1 | 100.00 | 80.76 | 1.02
CG1+DF1+BF1+H2 | 89.70 | 84.89 | 1.02
Table 6. Best combinations (\"real\" corpus)
Table 7. Results on errors with multiple proposals (\"real\" corpus)
 | Cover. % | Prec. % | #prop.
Basic techniques
random baseline | 100.00 | 29.75 | 1.00
random+H2 | 76.54 | 34.52 | 1.00
CG | 98.10 | 62.58 | 2.45
CG+H2 | 75.93 | 73.98 | 2.52
DF | 30.38 | 62.50 | 1.13
DF+H2 | 12.35 | 75.00 | 1.05
BF | 96.20 | 54.61 | 1.00
BF+H2 | 72.84 | 60.17 | 1.00
Combinations
CG1+DF2 | 100.00 | 70.25 | 1.99
CG1+DF2+H2 | 76.24 | 75.81 | 2.15
CG1+DF1+BF1 | 100.00 | 55.06 | 1.04
CG1+DF1+BF1+H2 | 76.54 | 59.68 | 1.05
", "html": null, "text": "Cover. % I Prec. % [#prop.", "type_str": "table" } } } }