{ "paper_id": "D08-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:30:17.948601Z" }, "title": "A Discriminative Candidate Generator for String Transformations", "authors": [ { "first": "Naoaki", "middle": [], "last": "Okazaki", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tokyo", "location": { "addrLine": "7-3-1 Hongo, Bunkyo-ku Tokyo 113-8656", "country": "Japan" } }, "email": "okazaki@is.s.u-tokyo.ac.jp" }, { "first": "Sophia", "middle": [], "last": "Ananiadou", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Manchester National Centre for Text Mining (NaCTeM)", "location": { "addrLine": "Manchester Interdisciplinary Biocentre 131 Princess Street", "postCode": "M1 7DN", "settlement": "Manchester", "country": "UK" } }, "email": "sophia.ananiadou@manchester.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "String transformation, which maps a source string s into its desirable form t * , is related to various applications including stemming, lemmatization, and spelling correction. The essential and important step for string transformation is to generate candidates to which the given string s is likely to be transformed. This paper presents a discriminative approach for generating candidate strings. We use substring substitution rules as features and score them using an L 1-regularized logistic regression model. We also propose a procedure to generate negative instances that affect the decision boundary of the model. The advantage of this approach is that candidate strings can be enumerated by an efficient algorithm because the processes of string transformation are tractable in the model. 
We demonstrate the remarkable performance of the proposed method in normalizing inflected words and spelling variations.", "pdf_parse": { "paper_id": "D08-1047", "_pdf_hash": "", "abstract": [ { "text": "String transformation, which maps a source string s into its desirable form t * , is related to various applications including stemming, lemmatization, and spelling correction. The essential and important step for string transformation is to generate candidates to which the given string s is likely to be transformed. This paper presents a discriminative approach for generating candidate strings. We use substring substitution rules as features and score them using an L 1-regularized logistic regression model. We also propose a procedure to generate negative instances that affect the decision boundary of the model. The advantage of this approach is that candidate strings can be enumerated by an efficient algorithm because the processes of string transformation are tractable in the model. We demonstrate the remarkable performance of the proposed method in normalizing inflected words and spelling variations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "String transformation maps a source string s into its destination string t * . In the broad sense, string transformation can include labeling tasks such as partof-speech tagging and shallow parsing (Brill, 1995) . However, this study addresses string transformation in its narrow sense, in which a part of a source string is rewritten with a substring. 
Typical applications of this task include stemming, lemmatization, spelling correction (Brill and Moore, 2000; Wilbur et al., 2006; Carlson and Fette, 2007) , OCR error correction (Kolak and Resnik, 2002) , approximate string matching (Navarro, 2001) , and duplicate record detection (Bilenko and Mooney, 2003) .", "cite_spans": [ { "start": 198, "end": 211, "text": "(Brill, 1995)", "ref_id": "BIBREF9" }, { "start": 440, "end": 463, "text": "(Brill and Moore, 2000;", "ref_id": "BIBREF8" }, { "start": 464, "end": 484, "text": "Wilbur et al., 2006;", "ref_id": "BIBREF27" }, { "start": 485, "end": 509, "text": "Carlson and Fette, 2007)", "ref_id": "BIBREF10" }, { "start": 533, "end": 557, "text": "(Kolak and Resnik, 2002)", "ref_id": "BIBREF13" }, { "start": 588, "end": 603, "text": "(Navarro, 2001)", "ref_id": "BIBREF21" }, { "start": 637, "end": 663, "text": "(Bilenko and Mooney, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent studies have formalized the task in the discriminative framework (Ahmad and Kondrak, 2005; Li et al., 2006; Chen et al., 2007) , t * = argmax t\u2208gen(s) P (t|s).", "cite_spans": [ { "start": 72, "end": 97, "text": "(Ahmad and Kondrak, 2005;", "ref_id": "BIBREF1" }, { "start": 98, "end": 114, "text": "Li et al., 2006;", "ref_id": "BIBREF17" }, { "start": 115, "end": 133, "text": "Chen et al., 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, the candidate generator gen(s) enumerates candidates of destination (correct) strings, and the scorer P (t|s) denotes the conditional probability of the string t for the given s. 
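To make the division of labor concrete, the generate-then-score pipeline of equation 1 can be sketched as follows; the toy generator and scorer are invented stand-ins for illustration, not the components developed in this paper.

```python
# Sketch of the generate-then-score pipeline of equation 1: gen proposes
# candidates and the scorer picks the most probable one. Both stand-ins
# below are invented toys, not the paper's actual components.

def transform(s, gen, score):
    candidates = gen(s)
    # argmax over the candidate set; fall back to s when gen(s) is empty
    return max(candidates, key=lambda t: score(s, t)) if candidates else s

toy_gen = lambda s: {s, s.rstrip('s')}            # e.g. strip a plural suffix
toy_score = lambda s, t: 0.9 if t != s else 0.5   # prefer a changed form
```

Everything beyond the argmax wrapper is a placeholder; the quality of the whole pipeline hinges on how gen(s) is built.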
The scorer has been modeled with the noisy-channel model (Shannon, 1948; Brill and Moore, 2000; Ahmad and Kondrak, 2005) and the maximum entropy framework (Berger et al., 1996; Li et al., 2006; Chen et al., 2007) . The candidate generator gen(s) also affects the accuracy of the string transformation. Previous studies of spelling correction mostly defined gen(s) as gen(s) = {t | dist(s, t) < \u03b4}. (2)", "cite_spans": [ { "start": 233, "end": 248, "text": "(Shannon, 1948;", "ref_id": "BIBREF24" }, { "start": 249, "end": 271, "text": "Brill and Moore, 2000;", "ref_id": "BIBREF8" }, { "start": 272, "end": 296, "text": "Ahmad and Kondrak, 2005)", "ref_id": "BIBREF1" }, { "start": 327, "end": 348, "text": "(Berger et al., 1996;", "ref_id": "BIBREF4" }, { "start": 349, "end": 365, "text": "Li et al., 2006;", "ref_id": "BIBREF17" }, { "start": 366, "end": 384, "text": "Chen et al., 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, the function dist(s, t) denotes the weighted Levenshtein distance (Levenshtein, 1966) between strings s and t. Furthermore, the threshold \u03b4 requires the distance between the source string s and a candidate string t to be less than \u03b4.", "cite_spans": [ { "start": 76, "end": 95, "text": "(Levenshtein, 1966)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The choice of dist(s, t) and \u03b4 involves a trade-off between the precision, recall, and training/tagging speed of the scorer. A less restrictive design of these factors broadens the search space, but it also increases the number of confusing candidates, the size of the feature space, and the computational cost for the scorer. Moreover, the choice is highly dependent on the target task. It might be sufficient for a spelling correction program to gather candidates from known words, but a stemmer must handle unseen words appropriately. 
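As a concrete illustration of the distance-threshold design, a minimal generator can be sketched as follows; the unit edit costs, the tiny vocabulary, and the threshold value are assumptions for illustration, not the weighted distance of the cited work.

```python
# Minimal sketch of the distance-threshold generator,
# gen(s) = {t | dist(s, t) < delta}. Unit edit costs, the small vocabulary,
# and delta = 2 are illustrative assumptions.

def levenshtein(s, t):
    # standard dynamic-programming edit distance with unit costs
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (cs != ct)))    # substitution
        prev = cur
    return prev[-1]

def gen(s, vocabulary, delta=2):
    return {t for t in vocabulary if levenshtein(s, t) < delta}
```

Note that this sketch must scan the whole vocabulary for every query, which is exactly the cost that motivates a generator whose transformations are tractable.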
The number of candidates can be huge when we consider transformations from and to unseen strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper addresses these challenges by exploring the discriminative training of candidate generators. More specifically, we build a binary classifier that, when given a source string s, decides whether a candidate t should be included in the candidate set or not. This approach appears straightforward, but it must resolve two practical issues. First, the task of the classifier is not only to make a binary decision for the two strings s and t, but also to enumerate a set of positive strings for the string s,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "gen(s) = {t | predict(s, t) = 1}.", "eq_num": "(3)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "In other words, an efficient algorithm is necessary to find a set of strings with which the classifier predict(s, t) yields positive labels for the string s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another issue arises when we prepare a training set. A discriminative model requires a training set in which each instance (pair of strings) is annotated with a positive or negative label. Even though some existing resources (e.g., inflection table and query log) are available for positive instances, such resources rarely contain negative instances. 
Therefore, we must generate negative instances that are effective for discriminative training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address the first issue, we design features that express transformations from a source string s to its destination string t. Feature selection and weighting are performed using an L 1 -regularized logistic regression model, which can find a sparse solution to the classification model. We also present an algorithm that utilizes the feature weights to enumerate candidates of destination strings efficiently. We deal with the second issue by generating negative instances from unlabeled instances. We describe a procedure to choose negative instances that affect the decision boundary of the classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows. Section 2 formalizes the task of the candidate generator as a binary classification modeled by logistic regression. Features for the classifier are designed using the rules of substring substitution. Therefore, we can obtain, efficiently, candidates of destination strings and negative instances for training. Section 3 reports the remarkable performance of the proposed method in various applications including lemmatization, spelling normalization, and noun derivation. We briefly review previous work in Section 4, and conclude this paper in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Candidate generator", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we first introduce a binary classifier that yields a label y \u2208 {0, 1} indicating whether a candidate t should be included in the candidate set (1) or not (0), given a source string s. 
We express the conditional probability P (y|s, t) using a logistic regression model,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate classification model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (1|s, t) = 1 1 + exp (\u2212\u039b T F (s, t)) ,", "eq_num": "(4)" } ], "section": "Candidate classification model", "sec_num": "2.1" }, { "text": "P (0|s, t) = 1 \u2212 P (1|s, t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate classification model", "sec_num": "2.1" }, { "text": "In these equations, F = {f 1 , ..., f K } denotes a vector of the Boolean feature functions; K is the number of feature functions; and \u039b = {\u03bb 1 , ..., \u03bb K } presents a weight vector of the feature functions. We obtain the following decision rule to choose the most probable label y * for a given pair s, t ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate classification model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y * = argmax y\u2208{0,1} P (y|s, t) = 1 \u039b T F (s, t) > 0 0 (otherwise) .", "eq_num": "(6)" } ], "section": "Candidate classification model", "sec_num": "2.1" }, { "text": "Finally, given a source string s, the generator function gen(s) is defined to collect all strings to which the classifier assigns positive labels:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate classification model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "gen(s) = {t | P (1|s, t) > P (0|s, t)} = {t | \u039b T F (s, t) > 0}.", "eq_num": "(7)" } ], "section": "Candidate classification model", "sec_num": "2.1" }, { 
"text": "The binary classifier can include any arbitrary feature. This is exemplified by the Levenshtein distance and distributional similarity (Lee, 1999) between two strings s and t. These features can improve the classification accuracy, but it is unrealistic to compute these features for every possible string, as in equation 7. For that reason, we specifically examine substitution rules, with which the process of transforming a source string s into its destination form t is tractable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution rules as features", "sec_num": "2.2" }, { "text": "(1) ('o', ''), ('^o', '^'), ('oe', 'e'), ('^oe', '^e'), ('^oes', '^es'), ... (2) ('a', ''), ('na', 'n'), ('ae', 'e'), ('ana', 'an'), ('nae', 'ne'), ('aem', 'em'), ... (3) ('ies', 'y'), ('dies', 'dy'), ('ies$', 'y$'), ('udies', 'udy'), ('dies$', 'dy$'), ...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitution rules as features", "sec_num": "2.2" }, { "text": "Figure 1 : Generating substitution rules.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Substitution rules as features", "sec_num": "2.2" }, { "text": "In this study, we assume that every string has a prefix '\u02c6' and postfix '$', which indicate the head and tail of a string. 
A substitution rule r = (\u03b1, \u03b2) replaces every occurrence of the substring \u03b1 in a source string with the substring \u03b2. Assuming that a string s can be transformed into another string t with a single substitution operation, substitution rules express the different portion between strings s and t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "Equation 8 defines a binary feature function with a substitution rule between two strings s and t,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f k (s, t) = 1 (rule r k can convert s into t) 0 (otherwise) .", "eq_num": "(8)" } ], "section": "S:", "sec_num": null }, { "text": "We allow multiple substitution rules for a given pair of strings. For instance, substitution rules ('a', ''), ('na', 'n'), ('ae', 'e'), ('nae', 'ne'), etc. form feature functions that yield 1 for strings s = '\u02c6anaemia$' and t = '\u02c6anemia$'. Equation 6 produces a decision based on the sum of feature weights, or scores of substitution rules, representing the different portions between s and t. Substitution rules for the given two strings s and t are obtained as follows. Let l denote the longest common prefix between strings s and t, and r the longest common postfix. We define c s as the substring in s that is not covered by the longest common prefix l and postfix r, and define c t for t analogously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "In other words, strings s and t are divided into three regions, lc s r and lc t r, respectively. 
For strings s = '\u02c6anaemia$' and t = '\u02c6anemia$' in Figure 1 (2), we obtain c s = 'a' and c t = '' because l = '\u02c6an' and r = 'emia$'.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "Because substrings c s and c t express different portions between strings s and t, we obtain the minimum substitution rule (c s , c t ), which can convert the string s into t by replacing substrings c s in s with c t ; the minimum substitution rule for the same example is ('a', ''). However, replacing the letters 'a' in '\u02c6anaemia$' with the empty string does not produce the correct string '\u02c6anemia$' but '\u02c6nemi$'. Furthermore, the rule might be inappropriate for expressing string transformation because it always removes the letter 'a' from every string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "Therefore, we also obtain expanded substitution rules, which insert postfixes of l to the head of minimum substitution rules, and/or append prefixes of r to the rules. For example, we find an expanded substitution rule ('na', 'n'), by inserting a postfix of l = '\u02c6an' to the head of the minimum substitution rule ('a', ''); similarly, we obtain an expanded substitution rule ('ae', 'e'), by appending a prefix of r = 'emia$' to the tail of the rule ('a', '').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "Figure 1 displays examples of substitution rules (the right side) for three pairs of strings (the left side). Letters in blue, green, and red respectively represent the longest common prefixes, longest common postfixes, and different portions. 
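The extraction procedure just described can be sketched as follows; the boundary markers follow the paper, while the cap theta1 (maximum letters on either side of a rule) is an assumed reading of the length threshold.

```python
# Sketch of rule extraction: take the longest common prefix l and postfix r,
# form the minimum rule (c_s, c_t), then expand it with postfixes of l and
# prefixes of r. The interpretation of the cap theta1 is an assumption.

def common_prefix_len(s, t):
    n = 0
    while n < min(len(s), len(t)) and s[n] == t[n]:
        n += 1
    return n

def extract_rules(s, t, theta1=5):
    s, t = '^' + s + '$', '^' + t + '$'
    lp = common_prefix_len(s, t)
    lr = common_prefix_len(s[::-1], t[::-1])
    lp = min(lp, min(len(s), len(t)) - lr)   # keep prefix and postfix disjoint
    c_s, c_t = s[lp:len(s) - lr], t[lp:len(t) - lr]
    l, r = s[:lp], s[len(s) - lr:]
    rules = set()
    for i in range(len(l) + 1):        # insert a postfix of l at the head
        for j in range(len(r) + 1):    # append a prefix of r at the tail
            a = l[len(l) - i:] + c_s + r[:j]
            b = l[len(l) - i:] + c_t + r[:j]
            if max(len(a), len(b)) <= theta1:
                rules.add((a, b))
    return rules
```

Run on the pair (anaemia, anemia), this reproduces the rules listed for Figure 1 (2), such as ('a', ''), ('na', 'n'), and ('ae', 'e').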
In this study, we expand substitution rules such that the number of letters in a rule does not exceed a threshold \u03b8 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "S:", "sec_num": null }, { "text": "Given a training set D that consists of N instances, we optimize the feature weights in the logistic regression model by maximizing the log-likelihood of the conditional probability distribution,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "D = ((s (1) , t (1) , y (1) ), ..., (s (N ) , t (N ) , y (N ) )),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_\u039b = \u2211_{i=1}^{N} log P (y^{(i)} | s^{(i)} , t^{(i)} ).", "eq_num": "(9)" } ], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "The partial derivative of the log-likelihood with respect to a feature weight \u03bb k is given as equation 10,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202L_\u039b / \u2202\u03bb_k = \u2211_{i=1}^{N} ( y^{(i)} \u2212 P (1 | s^{(i)} , t^{(i)} ) ) f_k (s^{(i)} , t^{(i)} ).", "eq_num": "(10)" } ], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "The maximum likelihood estimation (MLE) is known to suffer from overfitting the training set. The common approach for addressing this issue is to use the maximum a posteriori (MAP) estimation, introducing a regularization term on the feature weights \u039b, i.e., a penalty on large feature weights. 
In addition, the generation algorithm of substitution rules might produce inappropriate rules that transform a string incorrectly, or overly specific rules that are rarely used. Removing unnecessary substitution rules speeds up not only the classifier but also the candidate generation algorithm presented in Section 2.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "In recent years, L 1 regularization has received increasing attention because it produces a sparse solution of feature weights in which numerous feature weights are zero (Tibshirani, 1996; Ng, 2004) . Therefore, we regularize the log-likelihood with the L 1 norm of the weight vector \u039b and define the final form of the objective function to be minimized as", "cite_spans": [ { "start": 170, "end": 188, "text": "(Tibshirani, 1996;", "ref_id": "BIBREF25" }, { "start": 189, "end": 198, "text": "Ng, 2004)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E_\u039b = \u2212L_\u039b + |\u039b| / \u03c3 .", "eq_num": "(11)" } ], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "Here, \u03c3 is a parameter to control the effect of L 1 regularization; the smaller the value we set to \u03c3, the more features the MAP estimation assigns zero weights to, removing a number of features from the model. 
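To make the objective concrete, the toy sketch below fits an L 1 -regularized logistic regression with plain proximal gradient descent (soft-thresholding) as a stand-in for the paper's optimizer; the feature indices, labels, and hyperparameters are all invented for illustration.

```python
# Toy sketch of L1-regularized logistic regression trained with proximal
# gradient descent (soft-thresholding). The data and hyperparameters are
# invented; this is not the paper's solver or setup.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_l1_logreg(instances, labels, n_features, c=0.1, lr=0.5, epochs=500):
    # instances: lists of active-feature indices; labels: 0/1;
    # c plays the role of 1/sigma in the regularized objective.
    w = [0.0] * n_features
    for _ in range(epochs):
        grad = [0.0] * n_features
        for feats, y in zip(instances, labels):
            p = sigmoid(sum(w[k] for k in feats))
            for k in feats:
                grad[k] += p - y     # gradient of the negative log-likelihood
        for k in range(n_features):
            v = w[k] - lr * grad[k]
            # soft-thresholding implements the L1 penalty c * |w_k|
            w[k] = math.copysign(max(abs(v) - lr * c, 0.0), v)
    return w

# Feature 1 fires only on positive pairs, feature 2 only on negative ones.
X = [[0], [0, 1], [1], [2], [0, 2], [2]]
y = [1, 1, 1, 0, 0, 0]
w = train_l1_logreg(X, y, n_features=3)
```

With a small \u03c3 (a large penalty), many rule weights end up exactly zero, which is what keeps the candidate generation of Section 2.4 fast.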
Equation 11 is minimized using the Orthant-Wise Limited-memory Quasi-Newton (OW-LQN) method (Andrew and Gao, 2007) because the second term of equation 11 is not differentiable at \u03bb k = 0.", "cite_spans": [ { "start": 305, "end": 327, "text": "(Andrew and Gao, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation", "sec_num": "2.3" }, { "text": "The advantage of our feature design is that we can enumerate strings to which the classifier is likely to assign positive labels. We start by observing the necessary condition for t in equation 7,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u039b T F (s, t) > 0 \u21d2 \u2203k : f k (s, t) = 1 \u2227 \u03bb k > 0.", "eq_num": "(12)" } ], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "The classifier might assign a positive label to strings s and t when at least one feature function whose weight is positive can transform s to t. Let R + be a set of substitution rules to which MAP estimation has assigned positive feature weights. 
Because each feature corresponds to a substitution rule, we can obtain gen(s) for a given string s by application of every substitution rule r \u2208 R + ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "gen(s) = {r(s) | r \u2208 R + \u2227 \u039b T F (s, r(s)) > 0}.", "eq_num": "(13)" } ], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "Input: s = (s 1 , ..., s l ): an input string s (series of letters); Input: D: a trie dictionary containing positive features; Output: T : gen(s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "1: T = {}; 2: U = {}; 3: foreach i \u2208 (1, ..., |s|) do; 4: F \u2190 D.prefix search(s, i); 5: foreach f \u2208 F do; 6: if f \u2209 U then; 7: t \u2190 f.apply(s); 8: if classify(s, t) = 1 then; 9: add t to T ; 10: end; 11: add f to U ; 12: end; 13: end; 14: end; 15: return T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "Algorithm 1: A pseudo-code for gen(s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "Here, r(s) denotes the string into which the substitution rule r transforms the source string s. We can compute gen(s) with a small computational cost if the MAP estimation with L 1 regularization reduces the number of active features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "Algorithm 1 represents a pseudo-code for obtaining gen(s). 
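A runnable sketch of Algorithm 1 follows; a flat rule table stands in for the trie, the classifier simply sums the weights of applicable rules as in equation 6, and the rules and weights shown are invented.

```python
# Runnable sketch of Algorithm 1. A flat rule table replaces the trie, and
# classify() sums the weights of the rules that map s to t (equation 6).
# The rules and their weights are invented for illustration.

WEIGHTS = {('ies$', 'y$'): 1.2, ('s$', '$'): 0.8, ('a', ''): -0.5}
POSITIVE_RULES = [r for r, w in WEIGHTS.items() if w > 0]   # the set R+

def apply_rule(s, rule):
    src, dst = rule
    return s.replace(src, dst)          # rules replace every occurrence

def classify(s, t):
    score = sum(w for rule, w in WEIGHTS.items()
                if rule[0] in s and apply_rule(s, rule) == t)
    return 1 if score > 0 else 0

def gen(s):
    candidates, used = set(), set()     # T and U in Algorithm 1
    for i in range(len(s)):
        for rule in POSITIVE_RULES:
            if s.startswith(rule[0], i) and rule not in used:
                t = apply_rule(s, rule)
                if classify(s, t) == 1:
                    candidates.add(t)
                used.add(rule)
    return candidates
```

Only rules with positive weight are ever tried, so the runtime scales with the size of R+ rather than with the vocabulary.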
To search for positive substitution rules efficiently, the code stores the set of rules in a trie structure. In line 4, the code obtains a set of positive substitution rules F that can rewrite substrings starting at offset i in the source string s. For each rule f \u2208 F , we obtain a candidate string t by application of the substitution rule f to the source string s (line 7). The candidate string t is qualified to be included in gen(s) when the classifier assigns a positive label to strings s and t (lines 8 and 9). Lines 6 and 11 prevent the algorithm from repeating evaluation of the same substitution rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate generation", "sec_num": "2.4" }, { "text": "The parameter estimation requires a training set D in which each instance (pair of strings) is annotated with a positive or negative label. Negative instances (counter examples) are essential for penalizing inappropriate substitution rules, e.g. ('a', ''). Even though some existing resources (e.g. verb inflection table) are available for positive instances, such resources rarely contain negative instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating negative instances", "sec_num": "2.5" }, { "text": "Algorithm 2: Generating negative instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating negative instances", "sec_num": "2.5" }, { "text": "Input: D + = [(s 1 , t 1 ), ..., (s l , t l )]:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating negative instances", "sec_num": "2.5" }, { "text": "A common approach for handling this situation is to assume that every pair of strings in a resource is a negative instance; however, negative instances amount to ca. V (V \u2212 1)/2, where V represents the total number of strings. 
Moreover, substitution rules expressing negative instances are innumerable and sparse because the different portions are peculiar to individual negative instances. For instance, the minimum substitution rule for the unrelated words anaemia and around is ('naemia', 'round'), but this rule is too specific to generalize to other negative instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating negative instances", "sec_num": "2.5" }, { "text": "In this study, we generate negative instances so that they can penalize inappropriate rules and settle the decision boundary of the classifier. This strategy is summarized as follows. We consider every pair of strings as a candidate for a negative instance. If a string pair is not included in the dictionary (i.e., not in the positive instances), we obtain substitution rules for the pair using the same algorithm as that described in Section 2.2. The pair is used as a negative instance only when a substitution rule generated from the pair also exists among the substitution rules generated from positive instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating negative instances", "sec_num": "2.5" }, { "text": "Algorithm 2 presents the pseudo-code that implements the strategy for generating negative instances efficiently. First, we presume that we have positive instances D + = [(s 1 , t 1 ), ..., (s l , t l )] and unlabeled strings V . For example, the positive instances D + represent orthographic variants, and the unlabeled strings V include all possible words (vocabulary). We insert the vocabulary into a suffix array, which is used to locate every occurrence of substrings in V . The algorithm first generates substitution rules R only from positive instances D + (lines 3 to 7). For each substitution rule r \u2208 R, we enumerate known strings S that contain the source substring r.src (line 9). 
We apply the substitution rule to each string s \u2208 S and obtain its destination string t (line 11). If the pair of strings s, t is not included in D + (line 12), and if the destination string t is known (line 13), the substitution rule r might associate incorrect strings s and t, which do not exist in D + . Therefore, we insert the pair to the negative set D \u2212 (line 14).", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 182, "text": "= [(s 1 , t 1 )", "ref_id": null } ], "eq_spans": [], "section": "Generating negative instances", "sec_num": "2.5" }, { "text": "We evaluated the candidate generator using three different tasks: normalization of orthographic variants, noun derivation, and lemmatization. The datasets for these tasks were obtained from the UMLS SPECIALIST Lexicon 2 , a large lexicon that includes both commonly occurring English words and biomedical vocabulary. Table 1 displays the list of tables in the SPECIALIST Lexicon that were used in our experiments. We prepared three datasets, Orthography, Derivation, and Inflection.", "cite_spans": [], "ref_spans": [ { "start": 317, "end": 324, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments", "sec_num": "3.1" }, { "text": "The Orthography dataset includes spelling variants (e.g., color and colour) in the LRSPL table. We chose entries as positive instances in which spelling variants are caused by (case-insensitive) alphanumeric changes 3 . The Derivation dataset was built directly from the LRNOM table, which includes noun derivations such as abandon \u2192 abandonment. The LRAGR table includes base forms and their inflectional variants of nouns (singular and plural forms), verbs (infinitive, third singular, past, past participle forms, etc), and adjectives/adverbs (positive, comparative, and superlative forms). 
For the Inflection dataset, we extracted the entries in which inflectional forms differ from their base forms 4 , e.g., study \u2192 studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3.1" }, { "text": "For each dataset, we applied the algorithm described in Section 2.5 to generate substitution rules and negative instances. Table 2 shows the number of positive instances (# +), negative instances (# -), and substitution rules (# Rules). We evaluated the performance of the proposed method for two different goals of the task: classification (Section 3.2) and normalization (Section 3.3).", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 130, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiments", "sec_num": "3.1" }, { "text": "In this experiment, we measured the performance of the classification task in which pairs of strings were assigned positive or negative labels. We trained and evaluated the proposed method by performing ten-fold cross validation on each dataset 5 . Eight baseline systems were prepared for comparison: Levenshtein distance (LD), normalized Levenshtein distance (NLD), Dice coefficient on letter bigrams (DICE) (Adamson and Boreham, 1974) , Longest Common Substring Ratio (LCSR) (Melamed, 1999) , Longest Common Prefix Ratio (PREFIX) (Kondrak, 2005 ), Porter's stemmer (Porter, 1980) , Morpha (Minnen et al., 2001) , and CST's lemmatiser (Jongejan, 2006). 3 The LRSPL table includes trivial spelling variants that can be handled using simple character/string operations. 
For example, the table contains spelling variants related to case sensitivity (e.g., deg and Deg) and symbols (e.g., Feb and Feb.).", "cite_spans": [ { "start": 415, "end": 442, "text": "(Adamson and Boreham, 1974)", "ref_id": "BIBREF0" }, { "start": 483, "end": 498, "text": "(Melamed, 1999)", "ref_id": "BIBREF19" }, { "start": 538, "end": 552, "text": "(Kondrak, 2005", "ref_id": "BIBREF14" }, { "start": 573, "end": 587, "text": "(Porter, 1980)", "ref_id": "BIBREF23" }, { "start": 597, "end": 618, "text": "(Minnen et al., 2001)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Candidate classification", "sec_num": "3.2" }, { "text": "4 LRAGR table also provides agreement information even when word forms do not change. For example, the table contains an entry indicating that the first-singular present form of the verb study is study, which might be readily apparent to English speakers. 5 We determined the regularization parameter \u03c3 = 5 experimentally. Refer to Figure 2 for the performance change. jan, 2006) 6 .", "cite_spans": [ { "start": 256, "end": 257, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 332, "end": 340, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experiment 1: Candidate classification", "sec_num": "3.2" }, { "text": "The five systems LD, NLD, DICE, LCSR, and PREFIX employ corresponding metrics of string distance or similarity. Each system assigns a positive label to a given pair of strings s, t if the distance/similarity of strings s and t is smaller/larger than the threshold \u03b4 (refer to equation 2 for distance metrics). 
The threshold of each system was chosen so that the system achieves the best F1 score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Candidate classification", "sec_num": "3.2" }, { "text": "The remaining three systems assign a positive label only if the system transforms the strings s and t into the identical string. For example, a pair of two words studies and study is classified as positive by Porter's stemmer, which yields the identical stem studi for these words. We trained CST's lemmatiser for each dataset to obtain flex patterns that are used for normalizing word inflections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Candidate classification", "sec_num": "3.2" }, { "text": "To examine the performance of the L 1regularized logistic regression as a discriminative model, we also built two classifiers based on the Support Vector Machine (SVM). These SVM classifiers were implemented by the SVM perf 7 on a linear kernel 8 . An SVM classifier employs the same feature set (substitution rules) as the proposed method so that we can directly compare the L 1regularized logistic regression and the linear-kernel SVM. Another SVM classifier incorporates the five string metrics; this system can be considered as our reproduction of the discriminative string similarity proposed by Bergsma and Kondrak (2007) . Table 3 reports the precision (P), recall (R), and F1 score (F1) based on the number of correct decisions for positive instances. The proposed method outperformed the baseline systems, achieving 0.919, 0.888, and 0.984 of F1 scores, respectively. Porter's stemmer worked on the Inflection set, but not on the Orthography set, which is beyond the scope of the stemming algorithms. CST's lemmatizer suffered from low recall on the Inflection set because it removed suffixes of base forms, e.g., (cloning, clone) \u2192 (clone, clo). Morpha and CST's lemma- (Porter, 1980) . 
tizer were not designed for orthographic variants and noun derivations. Levenshtein distance (\u03b4 = 1) did not work for the Derivation set because noun derivations often append two or more letters (e.g., happy \u2192 happiness). No string similarity/distance metrics yielded satisfactory results. Some metrics obtained their best F1 scores only at extreme thresholds that classify every instance as positive. These results illustrate the difficulty that these tasks pose for string metrics.", "cite_spans": [ { "start": 601, "end": 627, "text": "Bergsma and Kondrak (2007)", "ref_id": null }, { "start": 1180, "end": 1194, "text": "(Porter, 1980)", "ref_id": "BIBREF23" }, { "start": 1332, "end": 1354, "text": "(Dalianis et al. 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 630, "end": 637, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Experiment 1: Candidate classification", "sec_num": "3.2" }, { "text": "The L 1 -regularized logistic regression was comparable to the SVM with linear kernel in this experiment. However, the presented model offers the advantage that it can reduce the number of active features (features assigned non-zero weights); the L 1 regularization removed 74%, 48%, and 82% of the substitution rules in the respective datasets. The performance improvements from incorporating string metrics as features were very subtle (less than 0.7%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Candidate classification", "sec_num": "3.2" }, { "text": "What is worse, the distance/similarity metrics do not specifically derive destination strings to which the classifier is likely to assign positive labels. 
Therefore, we can no longer use the efficient algorithm as a candidate generator (described in Section 2.4) with these features. Table 4 demonstrates the ability of our approach to obtain effective features; the table shows the top 10 features with the highest weights for the Orthography data. An interesting aspect of the proposed method is that the process of generating orthographic variants is interpretable through the feature weights. Figure 2 plots the F1 scores (y-axis) for the Inflection data as we change the number of active features (x-axis) by controlling the regularization parameter \u03c3 from 0.001 to 100. In general, the larger the value of \u03c3, the more active features remain and the better the classifier performs. In an extreme case, the number of active features drops to 97 with \u03c3 = 0.01; nonetheless, the classifier still achieves an F1 score of 0.961. This result suggests that a small set of substitution rules can accommodate most cases of inflectional variation.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 582, "end": 590, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experiment 1: Candidate classification", "sec_num": "3.2" }, { "text": "The second experiment examined the performance of the string normalization tasks formalized in equation 1. In this task, a system was given a string s and was required to yield either its transformed form t * (s \u2260 t * ) or the string s itself when no transformation is necessary for s. The conditional probability distribution (scorer) in equation 1 was modeled by the maximum entropy framework. 
Features for the maximum entropy model consist of: substitution rules between strings s and t, letter bigrams and trigrams in s, and letter bigrams and trigrams in t. We prepared four datasets: Orthography, Derivation, Inflection, and XTAG morphology. Each dataset is a list of string pairs s, t that indicate the transformation of the string s into t. A source string s is identical to its destination string t when string s should not be changed. These instances correspond to the case where string s has already been lemmatized. For each string pair (s, t) in the LRSPL 9 , LRNOM, and LRAGR tables, we generated two instances s, t and t, t . Consequently, a system is expected to leave the string t unchanged. We also used XTAG morphology 10 to perform a cross-domain evaluation of the lemmatizer trained on the Inflection dataset 11 . The entries in XTAG morphology that also appear in the Inflection dataset numbered 39,130 out of 317,322 (12.3%). We evaluated the proposed method and CST's lemmatizer by performing ten-fold cross validation. Table 5 reports the performance based on the number of correct transformations. The proposed method again outperformed the baseline systems by a wide margin. 9 We define that s precedes t in dictionary order. 10 XTAG morphology database 1.5: ftp://ftp.cis.upenn.edu/pub/xtag/morph-1.5/morph-1.5.tar.gz 11 We found that XTAG morphology contains numerous incorrect comparative and superlative adjectives, e.g., unpopular \u2192 unpopularer \u2192 unpopularest and refundable \u2192 refundabler \u2192 refundablest; therefore, we removed inflection entries for comparative and superlative adjectives from the dataset.
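As a concrete illustration of the feature templates described above (the substitution rule plus letter bigrams and trigrams of s and t), the following sketch enumerates them. The string encoding of features and the `^`/`$` boundary markers are our assumptions, not the paper's exact representation:

```python
def ngrams(s, n):
    # Letter n-grams with boundary markers; the "^"/"$" padding is an
    # assumption for illustration, not the paper's specification.
    padded = "^" + s + "$"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def maxent_features(s, t, rule):
    # Features for scoring a candidate transformation s -> t:
    # the substitution rule used, plus letter bigrams/trigrams of both strings.
    src, dst = rule
    feats = ["rule=%s>%s" % (src, dst)]
    feats += ["s2=" + g for g in ngrams(s, 2)]  # bigrams of the source
    feats += ["s3=" + g for g in ngrams(s, 3)]  # trigrams of the source
    feats += ["t2=" + g for g in ngrams(t, 2)]  # bigrams of the candidate
    feats += ["t3=" + g for g in ngrams(t, 3)]  # trigrams of the candidate
    return feats

print(maxent_features("studies", "study", ("ies", "y"))[0])  # rule=ies>y
```

Each such feature string would be mapped to a weight by the trained scorer when ranking candidate transformations.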
It is noteworthy that the proposed method can accommodate morphological inflections in the XTAG morphology corpus with no manual tuning or adaptation.", "cite_spans": [ { "start": 472, "end": 683, "text": "078 .012 .021 .233 .016 .029 .435 .682 .531 .830 .587 .688 CST's lemmatiser .135 .160 .146 .378 .732 .499 .367 .762 .495 .584 .589 .587 Proposed method .859 .823 .841 .979 .981 .980 .973 .979 .976 .837 .816 .827", "ref_id": null }, { "start": 1582, "end": 1583, "text": "9", "ref_id": null }, { "start": 1727, "end": 1729, "text": "11", "ref_id": null } ], "ref_spans": [ { "start": 420, "end": 469, "text": "5 P R F1 P R F1 P R F1 P R F1 Morpha", "ref_id": "TABREF3" }, { "start": 1960, "end": 1967, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Experiment 2: String transformation", "sec_num": "3.3" }, { "text": "Although we introduced no assumptions about target tasks (e.g. a known vocabulary), the average number of positive substitution rules relevant to source strings was as small as 23.9 (in XTAG morphology data). Therefore, the candidate generator performed 23.9 substitution operations for a given string. It applied the decision rules (equation 7) 21.3 times, and generated 1.67 candidate strings per source string. The experimental results described herein demonstrated that the candidate generator was modeled successfully by the discriminative framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: String transformation", "sec_num": "3.3" }, { "text": "The task of string transformation has a long history in natural language processing and information retrieval. As described in Section 1, this task is related closely to various applications. 
Therefore, we specifically examine several prior studies that are technically relevant to this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "4" }, { "text": "Some researchers have reported the effectiveness of the discriminative framework for string similarity. McCallum et al. (2005) proposed a method to train the costs of edit operations using Conditional Random Fields (CRFs). Bergsma and Kondrak (2007)", "cite_spans": [ { "start": 103, "end": 125, "text": "McCallum et al. (2005)", "ref_id": null }, { "start": 222, "end": 248, "text": "Bergsma and Kondrak (2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "4" }, { "text": "presented an alignment-based discriminative string similarity. They extracted features from substring pairs that are consistent with a character-based alignment of the two strings. Aramaki et al. (2008) also used features that express the differing segments of the two strings. However, these studies are not suited to a candidate generator because the processes of string transformation are intractable in their discriminative models. Dalianis and Jongejan (2006) presented a lemmatiser based on suffix rules. Although they proposed a method to obtain suffix rules from training data, the method did not use counter-examples (negatives) to reduce incorrect string transformations. Tsuruoka et al. (2008) proposed a scoring method for discovering a list of normalization rules for dictionary look-ups. 
However, their objective was to transform given strings, so that strings (e.g., studies and study) referring to the same concept in the dictionary are mapped into the same string (e.g., stud); in contrast, this study maps strings into their destination strings that were specified by the training data.", "cite_spans": [ { "start": 175, "end": 196, "text": "Aramaki et al. (2008)", "ref_id": "BIBREF3" }, { "start": 432, "end": 460, "text": "Dalianis and Jongejan (2006)", "ref_id": "BIBREF12" }, { "start": 683, "end": 705, "text": "Tsuruoka et al. (2008)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "4" }, { "text": "We have presented a discriminative approach for generating candidates for string transformation. Unlike conventional spelling-correction tasks, this study did not assume a fixed set of destination strings (e.g. correct words), but could even generate unseen candidate strings. We used an L 1 -regularized logistic regression model with substring-substitution features so that candidate strings for a given string can be enumerated using the efficient algorithm. The results of experiments described herein showed remarkable improvements and usefulness of the proposed approach in three tasks: normalization of orthographic variants, noun derivation, and lemmatization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The method presented in this paper allows only one region of change in string transformation. A natural extension of this study is to handle multiple regions of changes for morphologically rich languages (e.g. German) and to handle changes at the phrase/term level (e.g., \"estrogen receptor\" and \"receptor of oestrogen\"). 
Another direction would be to incorporate the methodologies for semisupervised machine learning to accommodate situa-tions in which positive instances and/or unlabeled strings are insufficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The number of letters for a substitution rule r = (\u03b1, \u03b2) is defined as the sum of the quantities of letters in \u03b1 and \u03b2, i.e., |\u03b1| + |\u03b2|. We determined the threshold \u03b8 = 12 experimentally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "UMLS SPECIALIST Lexicon:http://specialist.nlm.nih.gov/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used CST's lemmatiser version 2.13: http://www.cst.dk/online/lemmatiser/uk/ index.html 7 SVM for Multivariate Performance Measures (SVM perf ):http://svmlight.joachims.org/svm_perf.html8 We determined the parameter C = 500 experimentally; it controls the tradeoff between training error and margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partially supported by Grants-in-Aid for Scientific Research on Priority Areas (MEXT, Japan), and for Solution-Oriented Research for Science and Technology (JST, Japan).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The use of an association measure based on character structure to identify semantically related pairs of words and document titles", "authors": [ { "first": "W", "middle": [], "last": "George", "suffix": "" }, { "first": "Jillian", "middle": [], "last": "Adamson", "suffix": "" }, { "first": "", "middle": [], "last": "Boreham", "suffix": "" } ], "year": 1974, "venue": "Information Storage and Retrieval", "volume": "10", "issue": "7-8", "pages": "253--260", 
"other_ids": {}, "num": null, "urls": [], "raw_text": "George W. Adamson and Jillian Boreham. 1974. The use of an association measure based on character struc- ture to identify semantically related pairs of words and document titles. Information Storage and Retrieval, 10(7-8):253-260.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning a spelling error model from search query logs", "authors": [ { "first": "Farooq", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT-EMNLP 2005)", "volume": "", "issue": "", "pages": "955--962", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farooq Ahmad and Grzegorz Kondrak. 2005. Learning a spelling error model from search query logs. In Pro- ceedings of the conference on Human Language Tech- nology and Empirical Methods in Natural Language Processing (HLT-EMNLP 2005), pages 955-962.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Scalable training of L 1 -regularized log-linear models", "authors": [ { "first": "Galen", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 24th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galen Andrew and Jianfeng Gao. 2007. Scalable train- ing of L 1 -regularized log-linear models. 
In Proceed- ings of the 24th International Conference on Machine Learning (ICML 2007), pages 33-40.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Orthographic disambiguation incorporating transliterated probability", "authors": [ { "first": "Eiji", "middle": [], "last": "Aramaki", "suffix": "" }, { "first": "Takeshi", "middle": [], "last": "Imai", "suffix": "" }, { "first": "Kengo", "middle": [], "last": "Miyo", "suffix": "" }, { "first": "Kazuhiko", "middle": [], "last": "Ohe", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "48--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eiji Aramaki, Takeshi Imai, Kengo Miyo, and Kazuhiko Ohe. 2008. Orthographic disambiguation incorporat- ing transliterated probability. In Proceedings of the Third International Joint Conference on Natural Lan- guage Processing (IJCNLP 2008), pages 48-55.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A maximum entropy approach to natural language processing", "authors": [ { "first": "Adam", "middle": [ "L" ], "last": "Berger", "suffix": "" }, { "first": "Vincent", "middle": [ "J" ], "last": "Della Pietra", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Della Pietra", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "1", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. 
Computational Lin- guistics, 22(1):39-71.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Alignment-based discriminative string similarity", "authors": [], "year": null, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007)", "volume": "", "issue": "", "pages": "656--663", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alignment-based discriminative string similarity. In Proceedings of the 45th Annual Meeting of the Associ- ation of Computational Linguistics (ACL 2007), pages 656-663.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Adaptive duplicate detection using learnable string similarity measures", "authors": [ { "first": "Mikhail", "middle": [], "last": "Bilenko", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining (KDD 2003)", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Bilenko and Raymond J. Mooney. 2003. Adap- tive duplicate detection using learnable string simi- larity measures. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge dis- covery and data mining (KDD 2003), pages 39-48.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An improved error model for noisy channel spelling correction", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "C", "middle": [], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting on the Association for Computational Linguistics (ACL 2000)", "volume": "", "issue": "", "pages": "286--293", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill and Robert C. Moore. 2000. 
An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting on the As- sociation for Computational Linguistics (ACL 2000), pages 286-293.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 1995, "venue": "Computational Linguistics", "volume": "21", "issue": "4", "pages": "543--565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Memory-based context-sensitive spelling correction at web scale", "authors": [ { "first": "Andrew", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Fette", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Sixth International Conference on Machine Learning and Applications (ICMLA 2007)", "volume": "", "issue": "", "pages": "166--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Carlson and Ian Fette. 2007. Memory-based context-sensitive spelling correction at web scale. 
In Proceedings of the Sixth International Conference on Machine Learning and Applications (ICMLA 2007), pages 166-171.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improving query spelling correction using web search results", "authors": [ { "first": "Qing", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "181--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qing Chen, Mu Li, and Ming Zhou. 2007. Improv- ing query spelling correction using web search results. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Compu- tational Natural Language Learning (EMNLP-CoNLL 2007), pages 181-189.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Handcrafted versus machine-learned inflectional rules: The euroling-siteseeker stemmer and cst's lemmatiser", "authors": [ { "first": "Hercules", "middle": [], "last": "Dalianis", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Jongejan", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2006)", "volume": "", "issue": "", "pages": "663--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hercules Dalianis and Bart Jongejan. 2006. Hand- crafted versus machine-learned inflectional rules: The euroling-siteseeker stemmer and cst's lemmatiser. 
In In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2006), pages 663-666.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "OCR error correction using a noisy channel model", "authors": [ { "first": "Okan", "middle": [], "last": "Kolak", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the second international conference on Human Language Technology Research (HLT 2002)", "volume": "", "issue": "", "pages": "257--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Okan Kolak and Philip Resnik. 2002. OCR error correc- tion using a noisy channel model. In Proceedings of the second international conference on Human Lan- guage Technology Research (HLT 2002), pages 257- 262.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cognates and word alignment in bitexts", "authors": [ { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Tenth Machine Translation Summit (MT Summit X)", "volume": "", "issue": "", "pages": "305--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grzegorz Kondrak. 2005. Cognates and word alignment in bitexts. In Proceedings of the Tenth Machine Trans- lation Summit (MT Summit X), pages 305-312.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Measures of distributional similarity", "authors": [ { "first": "Lillian", "middle": [ "Lee" ], "last": "", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999)", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lillian Lee. 1999. Measures of distributional similarity. 
In Proceedings of the 37th Annual Meeting of the As- sociation for Computational Linguistics (ACL 1999), pages 25-32.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "authors": [ { "first": "Vladimir", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1966, "venue": "Soviet Physics Doklady", "volume": "10", "issue": "8", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707-710.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Exploring distributional similarity based models for query spelling correction", "authors": [ { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Muhua", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (Coling-ACL 2006)", "volume": "", "issue": "", "pages": "1025--1032", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mu Li, Yang Zhang, Muhua Zhu, and Ming Zhou. 2006. Exploring distributional similarity based models for query spelling correction. 
In Proceedings of the 21st International Conference on Computational Linguis- tics and the 44th Annual Meeting of the Association for Computational Linguistics (Coling-ACL 2006), pages 1025-1032.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A conditional random field for discriminativelytrained finite-state string edit distance", "authors": [ { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Kedar", "middle": [], "last": "Bellare", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI 2005)", "volume": "", "issue": "", "pages": "388--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2005. A conditional random field for discriminatively- trained finite-state string edit distance. In Proceedings of the 21st Conference on Uncertainty in Artificial In- telligence (UAI 2005), pages 388-395.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bitext maps and alignment via pattern recognition", "authors": [ { "first": "I", "middle": [], "last": "", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "1", "pages": "107--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Dan Melamed. 1999. Bitext maps and alignment via pattern recognition. 
Computational Linguistics, 25(1):107-130.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Applied morphological processing of English", "authors": [ { "first": "Guido", "middle": [], "last": "Minnen", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Darren", "middle": [], "last": "Pearce", "suffix": "" } ], "year": 2001, "venue": "Natural Language Engineering", "volume": "7", "issue": "3", "pages": "207--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natu- ral Language Engineering, 7(3):207-223.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A guided tour to approximate string matching", "authors": [ { "first": "Gonzalo", "middle": [], "last": "Navarro", "suffix": "" } ], "year": 2001, "venue": "ACM Computing Surveys (CSUR)", "volume": "33", "issue": "1", "pages": "31--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gonzalo Navarro. 2001. A guided tour to approximate string matching. ACM Computing Surveys (CSUR), 33(1):31-88.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Feature selection, L 1 vs. L 2 regularization, and rotational invariance", "authors": [ { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the twenty-first international conference on Machine learning (ICML 2004)", "volume": "", "issue": "", "pages": "78--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Y. Ng. 2004. Feature selection, L 1 vs. L 2 regu- larization, and rotational invariance. 
In Proceedings of the twenty-first international conference on Machine learning (ICML 2004), pages 78-85.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An algorithm for suffix stripping", "authors": [ { "first": "Martin", "middle": [ "F" ], "last": "Porter", "suffix": "" } ], "year": 1980, "venue": "", "volume": "14", "issue": "", "pages": "130--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A mathematical theory of communication", "authors": [ { "first": "Claude", "middle": [ "E" ], "last": "Shannon", "suffix": "" } ], "year": 1948, "venue": "Bell System Technical Journal", "volume": "27", "issue": "3", "pages": "379--423", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claude E. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27(3):379-423.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Regression shrinkage and selection via the lasso", "authors": [ { "first": "Robert", "middle": [], "last": "Tibshirani", "suffix": "" } ], "year": 1996, "venue": "Journal of the Royal Statistical Society. Series B (Methodological)", "volume": "58", "issue": "1", "pages": "267--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Tibshirani. 1996. Regression shrinkage and se- lection via the lasso. Journal of the Royal Statistical Society. 
Series B (Methodological), 58(1):267-288.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Normalizing biomedical terms by minimizing ambiguity and variability", "authors": [ { "first": "Yoshimasa", "middle": [], "last": "Tsuruoka", "suffix": "" }, { "first": "John", "middle": [], "last": "Mcnaught", "suffix": "" }, { "first": "Sophia", "middle": [], "last": "Ananiadou", "suffix": "" } ], "year": 2008, "venue": "BMC Bioinformatics", "volume": "3", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshimasa Tsuruoka, John McNaught, and Sophia Ananiadou. 2008. Normalizing biomedical terms by minimizing ambiguity and variability. BMC Bioinformatics, Suppl 3(9):S2.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Spelling correction in the PubMed search engine", "authors": [ { "first": "John", "middle": [], "last": "Wilbur", "suffix": "" }, { "first": "Won", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Natalie", "middle": [], "last": "Xie", "suffix": "" } ], "year": 2006, "venue": "Information Retrieval", "volume": "9", "issue": "5", "pages": "543--564", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wilbur, Won Kim, and Natalie Xie. 2006. Spelling correction in the PubMed search engine. Information Retrieval, 9(5):543-564.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Number of active features and performance." }, "TABREF1": { "html": null, "type_str": "table", "num": null, "text": "Input: D + : positive instances Input: V : a suffix array of all strings (vocabulary) Output: D − : negative instances Output: R: substitution rules (features)", "content": "
1    D− = [];
2    R = {};
3    foreach d ∈ D+ do
4        foreach r ∈ features(d) do
5            add r to R;
6        end
7    end
8    foreach r ∈ R do
9        S ← V.search(r.src);
10       foreach s ∈ S do
11           t ← r.apply(s);
12           if (s, t) ∉ D+ then
13               if t ∈ V then
14                   append (s, t) to D−;
15               end
16           end
17       end
18   end
19   return D−, R;
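A minimal Python sketch of this procedure, under stated assumptions: a plain set stands in for the suffix array V (so `V.search(r.src)` becomes a linear scan), and a simple suffix-difference extractor stands in for `features(d)` (the paper's extractor also emits other substring rules).

```python
import os

def extract_rules(d_pos):
    """Collect substitution rules (src, dst) from positive pairs.
    Stand-in for features(d): each rule rewrites the suffix of s
    that differs from t."""
    rules = set()
    for s, t in d_pos:
        i = len(os.path.commonprefix([s, t]))  # longest common prefix
        rules.add((s[i:], t[i:]))
    return rules

def generate_negatives(d_pos, vocab):
    """Apply every rule to every vocabulary string containing its source
    substring; a transformed pair that lands in the vocabulary but is
    not a positive pair becomes a negative instance."""
    rules = extract_rules(d_pos)
    positives = set(d_pos)
    d_neg = []
    for src, dst in rules:
        # linear scan in place of the suffix-array search V.search(r.src)
        for s in (w for w in vocab if src and src in w):
            t = s.replace(src, dst, 1)  # r.apply(s), first occurrence only
            if (s, t) not in positives and t in vocab:
                d_neg.append((s, t))
    return d_neg, rules
```

For example, with D+ = {(colour, color)} and a vocabulary containing honour/honor, the extracted rule ur→r produces the negative pair (honour, honor), which matches the rule but is not an annotated positive pair.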
" }, "TABREF3": { "html": null, "type_str": "table", "num": null, "text": "Excerpt of tables in the SPECIALIST Lexicon.", "content": "
Data set    | # +     | # −     | # Rules
Orthography | 15,830  | 33,296  | 11,098
Derivation  | 12,988  | 85,928  | 5,688
Inflection  | 113,215 | 124,747 | 32,278
" }, "TABREF4": { "html": null, "type_str": "table", "num": null, "text": "Characteristics of datasets.", "content": "" }, "TABREF5": { "html": null, "type_str": "table", "num": null, "text": ".319 .871 .467 .004 .006 .005 .484 .679 .565 Levenshtein distance .323 .999 .488 .131 1.00 .232 .479 .988 .646 Normalized Levenshtein distance .441 .847 .580 .133 .990 .235 .598 .770 .673 Dice coefficient (letter bigram) .401 .918 .558 .137 .984 .240 .476 1.00 .645 LCSR .322 1.00 .487 .156 .841 .263 .476 1.00 .645 PREFIX .418 .927 .576 .140 .943 .244 .476 1.00 .645 Porter stemmer", "content": "
SystemOrthographyDerivationInflection
PRF1PRF1PRF1
Levenshtein distance (\u03b4 = 1)
" }, "TABREF6": { "html": null, "type_str": "table", "num": null, "text": ".119 .008 .016 .383 .682 .491 .821 .176 .290 Proposed method .941 .898 .919 .896 .880 .888 .985 .986 .984 Substitution rules trained with SVM .943 .890 .916 .894 .886 .890 .980 .987 .983 + LD, NLD, DICE, LCSR, PREFIX .946 .906 .926 .894 .886 .890 .980 .987 .983", "content": "" }, "TABREF7": { "html": null, "type_str": "table", "num": null, "text": "Performance of candidate classification", "content": "
Rank | Src  | Dst | Weight | Examples
1    | uss  | us  | 9.81   | focussing
2    | aev  | ev  | 9.56   | mediaeval
3    | aen  | en  | 9.53   | ozaena
4    | iae$ | ae$ | 9.44   | gadoviae
5    | nni  | ni  | 9.16   | prorennin
6    | nne  | ne  | 8.84   | connexus
7    | our  | or  | 8.54   | colour
8    | aea  | ea  | 8.31   | paean
9    | aeu  | eu  | 8.22   | stomodaeum
10   | ooll | ool | 7.79   | woollen
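To illustrate how such weighted rules generate and score candidates, here is a hedged sketch: the rule list is a hypothetical excerpt from the table above, and scoring a candidate with a sigmoid over a single rule's weight is a simplification of the paper's model, which sums the weights of all fired rule features in an L1-regularized logistic regression.

```python
import math

# Hypothetical (src, dst, weight) excerpt of the learned rules above.
RULES = [("uss", "us", 9.81), ("our", "or", 8.54), ("aea", "ea", 8.31)]

def candidates(s):
    """Enumerate candidates by applying each rule at every matching
    position in s, scoring each with a sigmoid over the rule weight
    (simplified: one rule per candidate, no bias term)."""
    out = []
    for src, dst, w in RULES:
        start = 0
        while (i := s.find(src, start)) != -1:
            out.append((s[:i] + dst + s[i + len(src):],
                        1.0 / (1.0 + math.exp(-w))))
            start = i + 1
    return sorted(out, key=lambda c: -c[1])
```

For instance, `candidates("colour")` fires the rule our→or and ranks "color" first; strings matching no rule source yield no candidates, which is what keeps enumeration tractable.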
" }, "TABREF8": { "html": null, "type_str": "table", "num": null, "text": "Feature weights for the Orthography set", "content": "" }, "TABREF9": { "html": null, "type_str": "table", "num": null, "text": "Performance of string transformation", "content": "
[Figure: F1 score (0.96-0.99) vs. number of active features with non-zero weights (0-7000); curve labeled "Spelling variation"]
" } } } }