{ "paper_id": "P12-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:28:38.580316Z" }, "title": "String Re-writing Kernel", "authors": [ { "first": "Fan", "middle": [], "last": "Bu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research Asia", "location": { "addrLine": "No. 5 Danling Street", "postCode": "100080", "settlement": "Beijing", "country": "China" } }, "email": "hangli@microsoft.com" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Learning for sentence rewriting is a fundamental task in natural language processing and information retrieval. In this paper, we propose a new class of kernel functions, referred to as string rewriting kernel, to address the problem. A string rewriting kernel measures the similarity between two pairs of strings, each pair representing rewriting of a string. It can capture the lexical and structural similarity between two pairs of sentences without the need of constructing syntactic trees. We further propose an instance of string rewriting kernel which can be computed efficiently. Experimental results on benchmark datasets show that our method can achieve better results than state-of-the-art methods on two sentence rewriting learning tasks: paraphrase identification and recognizing textual entailment.", "pdf_parse": { "paper_id": "P12-1047", "_pdf_hash": "", "abstract": [ { "text": "Learning for sentence rewriting is a fundamental task in natural language processing and information retrieval. In this paper, we propose a new class of kernel functions, referred to as string rewriting kernel, to address the problem. A string rewriting kernel measures the similarity between two pairs of strings, each pair representing rewriting of a string. It can capture the lexical and structural similarity between two pairs of sentences without the need of constructing syntactic trees. We further propose an instance of string rewriting kernel which can be computed efficiently. Experimental results on benchmark datasets show that our method can achieve better results than state-of-the-art methods on two sentence rewriting learning tasks: paraphrase identification and recognizing textual entailment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Learning for sentence re-writing is a fundamental task in natural language processing and information retrieval, which includes paraphrasing, textual entailment and transformation between query and document title in search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The key question here is how to represent the rewriting of sentences. 
In previous research on sentence re-writing learning, such as paraphrase identification and recognizing textual entailment, most representations are based on the lexicons (Zhang and Patrick, 2005; Lintean and Rus, 2011; de Marneffe et al., 2006) or the syntactic trees (Das and Smith, 2009; Heilman and Smith, 2010) of the sentence pairs.", "cite_spans": [ { "start": 240, "end": 265, "text": "(Zhang and Patrick, 2005;", "ref_id": "BIBREF27" }, { "start": 266, "end": 288, "text": "Lintean and Rus, 2011;", "ref_id": "BIBREF15" }, { "start": 289, "end": 314, "text": "de Marneffe et al., 2006)", "ref_id": "BIBREF7" }, { "start": 338, "end": 353, "text": "(Das and Smith,", "ref_id": null }, { "start": 360, "end": 384, "text": "Heilman and Smith, 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In (Lin and Pantel, 2001; Barzilay and Lee, 2003), re-writing rules serve as underlying representations for paraphrase generation/discovery. Motivated by that work, we represent a re-writing of sentences by all possible re-writing rules that can be applied to it. For example, in Fig. 1, (A) is one re-writing rule that can be applied to the sentence re-writing (B). Specifically, we propose a new class of kernel functions (Sch\u00f6lkopf and Smola, 2002), called the string rewriting kernel (SRK), which defines the similarity between two re-writings (pairs) of strings as the inner product between them in the feature space induced by all the re-writing rules. SRK differs from existing kernels in that it is designed for re-writing and is defined on two pairs of strings. SRK can capture the lexical and structural similarity between re-writings of sentences without parsing the sentences and creating their syntactic trees.", "cite_spans": [ { "start": 3, "end": 24, "text": "(Lin and Pantel, 2001", "ref_id": "BIBREF14" }, { "start": 27, "end": 50, "text": "Barzilay and Lee, 2003)", "ref_id": "BIBREF1" }, { "start": 427, "end": 454, "text": "(Sch\u00f6lkopf and Smola, 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 281, "end": 292, "text": "Fig. 1, (A)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One challenge in using SRK lies in the high computational cost of straightforwardly computing the kernel, because it involves two re-writings of strings (i.e., four strings) and a large number of re-writing rules. We develop an instance of SRK, referred to as kb-SRK, which directly computes the number of common re-writing rules without explicitly calculating the inner product between feature vectors, and thus drastically reduces the time complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental results on benchmark datasets show that SRK achieves better results than the state-of-the-art methods in paraphrase identification and recognizing textual entailment. Note that SRK is very flexible with respect to how sentences are formulated. For example, informally written sentences, such as long queries in search, can also be handled effectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The string kernel function, first proposed by Lodhi et al. (2002) , measures the similarity between two strings by their shared substrings. Leslie et al. (2002) proposed the k-spectrum kernel, which represents strings by their contiguous substrings of length k. 
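For concreteness, the k-spectrum kernel can be sketched in a few lines of Python (an illustration written for this exposition, not code from Leslie et al. (2002); here a "character" is a word, so a sentence is a tuple of words):

```python
from collections import Counter

def k_spectrum_kernel(x, y, k):
    """K_spec_k(x, y): number of shared contiguous length-k substrings
    (k-grams) of x and y, counted with multiplicity."""
    grams_x = Counter(tuple(x[i:i + k]) for i in range(len(x) - k + 1))
    grams_y = Counter(tuple(y[i:i + k]) for i in range(len(y) - k + 1))
    return sum(c * grams_y[g] for g, c in grams_x.items())

s1 = "he wrote the book".split()
s2 = "the book was written by him".split()
print(k_spectrum_kernel(s1, s2, 2))  # 1: the single shared bigram ("the", "book")
```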
Leslie et al. (2004) further proposed a number of string kernels including the wildcard kernel to facilitate inexact matching between the strings. The string kernels defined on two pairs of objects (including strings) were also developed, which decompose the similarity into product of similarities between individual objects using tensor product (Basilico and Hofmann, 2004; Ben-Hur and Noble, 2005) or Cartesian product (Kashima et al., 2009) .", "cite_spans": [ { "start": 46, "end": 65, "text": "Lodhi et al. (2002)", "ref_id": "BIBREF18" }, { "start": 140, "end": 160, "text": "Leslie et al. (2002)", "ref_id": "BIBREF16" }, { "start": 261, "end": 281, "text": "Leslie et al. (2004)", "ref_id": "BIBREF17" }, { "start": 608, "end": 636, "text": "(Basilico and Hofmann, 2004;", "ref_id": "BIBREF2" }, { "start": 637, "end": 661, "text": "Ben-Hur and Noble, 2005)", "ref_id": "BIBREF3" }, { "start": 683, "end": 705, "text": "(Kashima et al., 2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The task of paraphrasing usually consists of paraphrase pattern generation and paraphrase identification. Paraphrase pattern generation is to automatically extract semantically equivalent patterns (Lin and Pantel, 2001; Bhagat and Ravichandran, 2008) or sentences (Barzilay and Lee, 2003) . Paraphrase identification is to identify whether two given sentences are a paraphrase of each other. The methods proposed so far formalized the problem as classification and used various types of features such as bag-of-words feature, edit distance (Zhang and Patrick, 2005) , dissimilarity kernel (Lintean and Rus, 2011) predicate-argument structure (Qiu et al., 2006) , and tree edit model (which is based on a tree kernel) (Heilman and Smith, 2010) in the classification task. Among the most successful methods, Wan et al. (2006) enriched the feature set by the BLEU metric and dependency relations. Das and Smith (2009) used the quasi-synchronous grammar formalism to incorporate features from WordNet, named entity recognizer, POS tagger, and dependency la-bels from aligned trees.", "cite_spans": [ { "start": 197, "end": 219, "text": "(Lin and Pantel, 2001;", "ref_id": "BIBREF14" }, { "start": 220, "end": 250, "text": "Bhagat and Ravichandran, 2008)", "ref_id": "BIBREF4" }, { "start": 264, "end": 288, "text": "(Barzilay and Lee, 2003)", "ref_id": "BIBREF1" }, { "start": 540, "end": 565, "text": "(Zhang and Patrick, 2005)", "ref_id": "BIBREF27" }, { "start": 589, "end": 612, "text": "(Lintean and Rus, 2011)", "ref_id": "BIBREF15" }, { "start": 642, "end": 660, "text": "(Qiu et al., 2006)", "ref_id": "BIBREF21" }, { "start": 717, "end": 742, "text": "(Heilman and Smith, 2010)", "ref_id": "BIBREF11" }, { "start": 806, "end": 823, "text": "Wan et al. (2006)", "ref_id": "BIBREF25" }, { "start": 894, "end": 914, "text": "Das and Smith (2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The task of recognizing textual entailment is to decide whether the hypothesis sentence can be entailed by the premise sentence (Giampiccolo et al., 2007) . In recognizing textual entailment, de Marneffe et al. (2006) classified sentences pairs on the basis of word alignments. MacCartney and Manning (2008) used an inference procedure based on natural logic and combined it with the methods by de Marneffe et al. (2006) . 
Harmeling (2007) and Heilman and Smith (2010) classified sentence pairs based on transformations of syntactic trees. Zanzotto et al. (2007) used a kernel method on syntactic tree pairs (Moschitti and Zanzotto, 2007) .", "cite_spans": [ { "start": 128, "end": 154, "text": "(Giampiccolo et al., 2007)", "ref_id": "BIBREF9" }, { "start": 195, "end": 217, "text": "Marneffe et al. (2006)", "ref_id": "BIBREF7" }, { "start": 293, "end": 307, "text": "Manning (2008)", "ref_id": "BIBREF19" }, { "start": 398, "end": 420, "text": "Marneffe et al. (2006)", "ref_id": "BIBREF7" }, { "start": 423, "end": 439, "text": "Harmeling (2007)", "ref_id": "BIBREF10" }, { "start": 444, "end": 468, "text": "Heilman and Smith (2010)", "ref_id": "BIBREF11" }, { "start": 539, "end": 561, "text": "Zanzotto et al. (2007)", "ref_id": "BIBREF26" }, { "start": 607, "end": 637, "text": "(Moschitti and Zanzotto, 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We formalize sentence re-writing learning as a kernel method. Following the string kernel literature, we use the terms \"string\" and \"character\" instead of \"sentence\" and \"word\". Suppose that we are given training data consisting of re-writings of strings and their responses", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "((s 1 ,t 1 ), y 1 ), ..., ((s n ,t n ), y n ) \u2208 (\u03a3 * \u00d7 \u03a3 * ) \u00d7 Y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "where \u03a3 denotes the character set, \u03a3 * = \u222a_{i=0}^{\u221e} \u03a3^i denotes the string set, which is the Kleene closure of the set \u03a3, Y denotes the set of responses, and n is the number of instances. (s i ,t i ) is a re-writing consisting of the source string s i and the target string t i . y i is the response, which can be a category, an ordinal number, or a real number. In this paper, for simplicity, we assume that Y = {\u00b11} (e.g., paraphrase/non-paraphrase). Given a new string re-writing (s,t) \u2208 \u03a3 * \u00d7 \u03a3 * , our goal is to predict its response y. That is, the training data consists of string re-writings from two classes, and the prediction for a new re-writing is made by learning from the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "We take the kernel approach to address the learning task. The kernel on re-writings of strings is defined as K :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "(\u03a3 * \u00d7 \u03a3 * ) \u00d7 (\u03a3 * \u00d7 \u03a3 * ) \u2192 R satisfying, for all (s i ,t i ), (s j ,t j ) \u2208 \u03a3 * \u00d7 \u03a3 * , K((s i ,t i ), (s j ,t j )) = \u27e8\u03a6(s i ,t i ), \u03a6(s j ,t j )\u27e9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "where \u03a6 maps each re-writing (pair) of strings into a high-dimensional Hilbert space H , referred to as the feature space. 
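Before stating the representer theorem, here is a minimal sketch of how such a kernel K on string re-writings plugs into a learner, assuming scikit-learn's SVC with a precomputed Gram matrix; `rewriting_kernel` is a placeholder for any instance of K, e.g. an SRK:

```python
import numpy as np
from sklearn.svm import SVC

def train_rewriting_classifier(pairs, labels, rewriting_kernel):
    """Train an SVM on re-writings (s_i, t_i) with responses y_i in {+1, -1}."""
    n = len(pairs)
    # Gram matrix of kernel values between all training re-writings.
    gram = np.array([[rewriting_kernel(pairs[i], pairs[j]) for j in range(n)]
                     for i in range(n)])
    clf = SVC(kernel="precomputed")
    clf.fit(gram, labels)

    def predict(new_pair):
        # Kernel values between the new re-writing and the training re-writings.
        row = np.array([[rewriting_kernel(new_pair, p) for p in pairs]])
        return clf.predict(row)[0]

    return predict
```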
By the representer theorem (Kimeldorf and Wahba, 1971; Sch\u00f6lkopf and Smola, 2002) , it can be shown that the response y of a new string re-writing (s,t) can always be represented as", "cite_spans": [ { "start": 146, "end": 173, "text": "(Kimeldorf and Wahba, 1971;", "ref_id": "BIBREF13" }, { "start": 174, "end": 200, "text": "Sch\u00f6lkopf and Smola, 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "y = sign( n \u2211 i=1 \u03b1 i y i K((s i ,t i ), (s,t)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "\u03b1 i \u2265 0, (i = 1, \u2022 \u2022 \u2022 , n) are parameters. That is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "it is determined by a linear combination of the similarities between the new instance and the instances in training set. It is also known that by employing a learning model such as SVM (Vapnik, 2000) , such a linear combination can be automatically learned by solving a quadratic optimization problem. The question then becomes how to design the kernel function for the task.", "cite_spans": [ { "start": 185, "end": 199, "text": "(Vapnik, 2000)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Kernel Approach to Sentence Re-Writing Learning", "sec_num": "3" }, { "text": "Let \u03a3 be the set of characters and \u03a3 * be the set of strings. Let wildcard domain D \u2286 \u03a3 * be the set of strings which can be replaced by wildcards. The string re-writing kernel measures the similarity between two string re-writings through the rewriting rules that can be applied into them. Formally, given re-writing rule set R and wildcard domain D, the string re-writing kernel (SRK) is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K((s 1 ,t 1 ), (s 2 ,t 2 )) = \u03a6(s 1 ,t 1 ), \u03a6(s 2 ,t 2 )", "eq_num": "(1)" } ], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "\u03a6(s,t) = (\u03c6 r (s,t)) r\u2208R and \u03c6 r (s,t) = n\u03bb i (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "where n is the number of contiguous substring pairs of (s,t) that re-writing rule r matches, i is the number of wildcards in r, and \u03bb \u2208 (0, 1] is a factor punishing each occurrence of wildcard. A re-writing rule is defined as a triple r = (\u03b2 s , \u03b2 t , \u03c4) where \u03b2 s ,\u03b2 t \u2208 (\u03a3 \u222a { * }) * denote source and target string patterns and \u03c4 \u2286 ind * (\u03b2 s )\u00d7ind * (\u03b2 t ) denotes the alignments between the wildcards in the two string patterns. 
Here ind * (\u03b2 ) denotes the set of indexes of the wildcards in \u03b2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "We say that a re-writing rule (\u03b2 s , \u03b2 t , \u03c4) matches a string pair (s,t) if and only if the string patterns \u03b2 s and \u03b2 t can be changed into s and t respectively by substituting each wildcard in the string patterns with an element in the strings, where the elements are defined in the wildcard domain D and the wildcards", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "\u03b2 s [i] and \u03b2 t [j]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "are substituted by the same element whenever there is an alignment (i, j) \u2208 \u03c4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "For example, the re-writing rule in Fig. 1 (A) can be formally written as r = (\u03b2 s , \u03b2 t , \u03c4) where \u03b2 s = ( * , wrote, * ), \u03b2 t = ( * , was, written, by, * ) and \u03c4 = {(1, 5), (3, 1)}. It matches the string pair in Fig. 1 (B) .", "cite_spans": [], "ref_spans": [ { "start": 36, "end": 42, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 215, "end": 225, "text": "Fig. 1 (B)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "The string re-writing kernel is a class of kernels that depends on the re-writing rule set R and the wildcard domain D. Here we provide some examples. Obviously, the effectiveness and efficiency of SRK depend on the choice of R and D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "Example 1. We define the pairwise k-spectrum kernel (ps-SRK) K ps k as the re-writing rule kernel un-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "der R = {(\u03b2 s , \u03b2 t , \u03c4) | \u03b2 s , \u03b2 t \u2208 \u03a3^k , \u03c4 = \u2205} and any D. It can be shown that K ps k ((s 1 ,t 1 ), (s 2 ,t 2 )) = K spec k (s 1 , s 2 ) K spec k (t 1 ,t 2 ), where K spec k (x, y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "is equivalent to the k-spectrum kernel proposed by Leslie et al. (2002) . ", "cite_spans": [ { "start": 51, "end": 71, "text": "Leslie et al. (2002)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "Example 2. We define the pairwise k-wildcard kernel (pw-SRK) K pw k as the re-writing rule kernel under R = {(\u03b2 s , \u03b2 t , \u03c4) | \u03b2 s , \u03b2 t \u2208 (\u03a3 \u222a { * })^k , \u03c4 = \u2205} and D = \u03a3. It can be shown that K pw k ((s 1 ,t 1 ), (s 2 ,t 2 )) = K wc (k,k) (s 1 , s 2 ) K wc (k,k) (t 1 ,t 2 ), where K wc (k,k) (x, y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "is a special case (m = k) of the (k,m)-wildcard kernel proposed by Leslie et al. (2004) .", "cite_spans": [ { "start": 65, "end": 85, "text": "Leslie et al. 
(2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "Both kernels shown above are represented as the product of two kernels defined separately on strings s 1 , s 2 and t 1 ,t 2 , and that is to say that they do not consider the alignment relations between the strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Re-writing Kernel", "sec_num": "4" }, { "text": "Next we propose another instance of string rewriting kernel, called the k-gram bijective string rewriting kernel (kb-SRK). As will be seen, kb-SRK can be computed efficiently, although it is defined on two pairs of strings and is not decomposed (note that ps-SRK and pw-SRK are decomposed).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "K-gram Bijective String Re-writing Kernel", "sec_num": "5" }, { "text": "The kb-SRK has the following properties: (1) A wildcard can only substitute a single character, denoted as \"?\". (2) The two string patterns in a rewriting rule are of length k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": "(3) The alignment relation in a re-writing rule is bijective, i.e., there is a one-to-one mapping between the wildcards in the string patterns. Formally, the k-gram bijective string re-writing kernel K k is defined as a string re-writing kernel under the re-writing rule set R =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": "{(\u03b2 s , \u03b2 t , \u03c4)|\u03b2 s , \u03b2 t \u2208 (\u03a3 \u222a {?}) k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": ", \u03c4 is bijective} and the wildcard domain D = \u03a3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": "Since each re-writing rule contains two string patterns of length k and each wildcard can only substitute one character, a re-writing rule can only match k-gram pairs in (s,t). We can rewrite Eq. 
2as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": "\u03c6 r (s,t) = \u2211 \u03b1 s \u2208k-grams(s) \u2211 \u03b1 t \u2208k-grams(t)\u03c6 r (\u03b1 s , \u03b1 t ) (3) where\u03c6 r (\u03b1 s , \u03b1 t ) = \u03bb i if r (with i wildcards) matches (\u03b1 s , \u03b1 t ), otherwise\u03c6 r (\u03b1 s , \u03b1 t ) = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": "For ease of computation, we re-write kb-SRK as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K k ((s 1 ,t 1 ), (s 2 ,t 2 )) = \u2211 \u03b1s 1 \u2208 k-grams(s 1 ) \u03b1t 1 \u2208 k-grams(t 1 ) \u2211 \u03b1s 2 \u2208 k-grams(s 2 ) \u03b1t 2 \u2208 k-grams(t 2 )K k ((\u03b1 s 1 , \u03b1 t 1 ), (\u03b1 s 2 , \u03b1 t 2 ))", "eq_num": "(4)" } ], "section": "Definition", "sec_num": "5.1" }, { "text": "whereK", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k = \u2211 r\u2208R\u03c6 r (\u03b1 s 1 , \u03b1 t 1 )\u03c6 r (\u03b1 s 2 , \u03b1 t 2 )", "eq_num": "(5)" } ], "section": "Definition", "sec_num": "5.1" }, { "text": "A straightforward computation of kb-SRK would be intractable. The computation of K k in Eq. (4) needs computations ofK k conducted O((n \u2212 k + 1) 4 ) times, where n denotes the maximum length of strings. Furthermore, the computation ofK k in Eq. (5) needs to perform matching of all the rewriting rules with the two k-gram pairs (\u03b1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Computing Kernel", "sec_num": "5.2" }, { "text": "s 1 , \u03b1 t 1 ), (\u03b1 s 2 , \u03b1 t 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Computing Kernel", "sec_num": "5.2" }, { "text": ", which has time complexity O(k!).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Computing Kernel", "sec_num": "5.2" }, { "text": "In this section, we will introduce an efficient algorithm, which can computeK k and K k with the time complexities of O(k) and O(kn 2 ), respectively. The latter is verified empirically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm for Computing Kernel", "sec_num": "5.2" }, { "text": "For ease of manipulation, our method transforms the computation of kernel on k-grams into the computation on a new data structure called lists of doubles. We first explain how to make the transformation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "Suppose that \u03b1 1 , \u03b1 2 \u2208 \u03a3 k are k-grams, we use \u03b1 1 [i] and \u03b1 2 [i] to represent the i-th characters of them. We call a pair of characters a double. Thus \u03a3 \u00d7 \u03a3 denotes the set of doubles and \u03b1 D s , \u03b1 D t \u2208 (\u03a3 \u00d7 \u03b1 1 = abbccbb ; \u03b1 2 = abcccdd; \u03b1 1 = cbcbbcb ; \u03b1 2 = cbccdcd; \u03a3) k denote lists of doubles. 
The following operation combines two k-grams into a list of doubles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "\u03b1 1 \u2297 \u03b1 2 = ((\u03b1 1 [1], \u03b1 2 [1]), \u2022 \u2022 \u2022 , (\u03b1 1 [k], \u03b1 2 [k])).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "We denote by \u03b1 1 \u2297 \u03b1 2 [i] the i-th element of the list. Fig. 3 shows example lists of doubles combined from k-grams.", "cite_spans": [], "ref_spans": [ { "start": 58, "end": 64, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "We introduce the set of identical doubles I = {(c, c) | c \u2208 \u03a3} and the set of non-identical doubles N = {(c, c') | c, c' \u2208 \u03a3 and c \u2260 c'}. Obviously, I \u222a N = \u03a3 \u00d7 \u03a3 and I \u2229 N = \u2205. We define the set of re-writing rules for double lists as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "R D = {r D = (\u03b2 D s , \u03b2 D t , \u03c4) | \u03b2 D s , \u03b2 D t \u2208 (I \u222a {?})^k , \u03c4 is bijective}. We say that r D matches a pair of double lists", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "(\u03b1 D s , \u03b1 D t ) if and only if \u03b2 D s and \u03b2 D t can be changed into \u03b1 D s", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "and \u03b1 D t by substituting each wildcard pair with a double in \u03a3 \u00d7 \u03a3, where the double substituting the wildcard pair \u03b2 D s [i] and \u03b2 D t [j] must be an identical double whenever there is an alignment (i, j) \u2208 \u03c4. The rule set defined here and the rule set in Sec. 4 differ only in the elements where re-writing occurs. Fig. 4 (B) shows an example of a re-writing rule for double lists. The pair of double lists in Fig. 3 matches this re-writing rule.", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 322, "text": "Fig. 4 (B)", "ref_id": null }, { "start": 405, "end": 411, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Transformation of Problem", "sec_num": "5.2.1" }, { "text": "We consider how to compute K\u0304 k by extending the computation from k-grams to double lists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "The following lemma shows that computing the weighted sum of re-writing rules matching the k-gram pairs (\u03b1 s 1 , \u03b1 t 1 ) and (\u03b1 s 2 , \u03b1 t 2 ) is equivalent to computing the weighted sum of re-writing rules for double lists matching (\u03b1 s 1 \u2297 \u03b1 s 2 , \u03b1 t 1 \u2297 \u03b1 t 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "Lemma 1. For any two k-gram pairs (\u03b1 s 1 , \u03b1 t 1 ) and (\u03b1 s 2 , \u03b1 t 2 ), there exists a one-to-one mapping from the set of re-writing rules matching them to the set of re-writing rules matching the corresponding double lists (\u03b1 s 1 \u2297 \u03b1 s 2 , \u03b1 t 1 \u2297 \u03b1 t 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, 
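The ⊗ operation and the #_e statistics are easy to make concrete (an illustrative sketch, using the example k-grams of Fig. 2):

```python
from collections import Counter

def combine(alpha1, alpha2):
    """alpha1 (x) alpha2: the list of doubles (alpha1[i], alpha2[i])."""
    assert len(alpha1) == len(alpha2)
    return list(zip(alpha1, alpha2))

def double_counts(alpha_d):
    """#_e(alpha_d) for every double e occurring in the list of doubles."""
    return Counter(alpha_d)

doubles = combine("abbccbb", "abcccdd")
counts = double_counts(doubles)
identical = {e: c for e, c in counts.items() if e[0] == e[1]}      # subset of I
non_identical = {e: c for e, c in counts.items() if e[0] != e[1]}  # subset of N
```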
{ "text": "The re-writing rule in Fig. 4 (A) matches the k-gram pairs in Fig. 2 . Equivalently, the re-writing rule for double lists in Fig. 4 (B) matches the pair of double lists in Fig. 3 . By Lemma 1 and Eq. 5, we have", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 33, "text": "Fig. 4 (A)", "ref_id": null }, { "start": 61, "end": 67, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 124, "end": 134, "text": "Fig. 4 (B)", "ref_id": null }, { "start": 171, "end": 177, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K\u0304 k = \u2211 r D \u2208R D \u03c6\u0303 r D (\u03b1 s 1 \u2297 \u03b1 s 2 , \u03b1 t 1 \u2297 \u03b1 t 2 )", "eq_num": "(6)" } ], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "where \u03c6\u0303 r D (\u03b1 D s , \u03b1 D t ) = \u03bb^2i if the re-writing rule for double lists r D with i wildcards matches (\u03b1 D s , \u03b1 D t ), and \u03c6\u0303 r D (\u03b1 D s , \u03b1 D t ) = 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "To get K\u0304 k , we just need to compute the weighted sum of the re-writing rules for double lists matching (\u03b1 s 1 \u2297 \u03b1 s 2 , \u03b1 t 1 \u2297 \u03b1 t 2 ). Thus, we can work on the \"combined\" pair of double lists instead of the two pairs of k-grams.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "Instead of enumerating all possible re-writing rules and checking whether they match the given pair of double lists, we only calculate the number of possibilities of \"generating\", from the pair of double lists, the re-writing rules that match it, which can be carried out efficiently. We say that a re-writing rule for double lists can be generated from a pair of double lists (\u03b1 D s , \u03b1 D t ) if they match each other. From the definition of R D , in each generation, each identical double in \u03b1 D s and \u03b1 D t either is or is not substituted by an aligned wildcard pair in the re-writing rule. Let e be a double. We denote by # e (\u03b1 D ) the number of times e occurs in the list of doubles \u03b1 D . Also, for a set of doubles S \u2286 \u03a3 \u00d7 \u03a3, we denote by # S (\u03b1 D ) the vector in which each element represents # e (\u03b1 D ) for each double e \u2208 S. We can find a function g such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K\u0304 k = g(# \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297 \u03b1 s 2 ), # \u03a3\u00d7\u03a3 (\u03b1 t 1 \u2297 \u03b1 t 2 ))", "eq_num": "(7)" } ], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "Algorithm 1: Computing K\u0304 k . Input: k-gram pairs (\u03b1 s 1 , \u03b1 t 1 ) and (\u03b1 s 2 , \u03b1 t 2 ). Output: K\u0304 k ((\u03b1 s 1 , \u03b1 t 1 ), (\u03b1 s 2 , \u03b1 t 2 )). 1: Set (\u03b1 D s , \u03b1 D t ) = (\u03b1 s 1 \u2297 \u03b1 s 2 , \u03b1 t 1 \u2297 \u03b1 t 2 ); 2: Compute # \u03a3\u00d7\u03a3 (\u03b1 D s ) and # \u03a3\u00d7\u03a3 (\u03b1 D t ); 3-8: for each double e, accumulate into the result the factor g e = \u2211 i a^(e)_i \u03bb^2i of Eq. 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, 
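The whole of Alg. 1 fits in a short Python sketch (ours, not the authors' code); the factors a^(e)_i follow the three cases derived just below, and the product form is Eq. 8. `comb` denotes the binomial coefficient (Python 3.8+):

```python
from collections import Counter
from math import comb, factorial

def k_bar(pair1, pair2, lam=1.0):
    """Alg. 1 (sketch): K_bar_k((a_s1, a_t1), (a_s2, a_t2)) via double counts."""
    (a_s1, a_t1), (a_s2, a_t2) = pair1, pair2
    cnt_s = Counter(zip(a_s1, a_s2))  # #_e(alpha_s1 (x) alpha_s2)
    cnt_t = Counter(zip(a_t1, a_t2))  # #_e(alpha_t1 (x) alpha_t2)
    result = 1.0
    for e in set(cnt_s) | set(cnt_t):
        ns, nt = cnt_s[e], cnt_t[e]
        if e[0] != e[1]:          # non-identical double: cases (1) and (2)
            if ns != nt:
                return 0.0        # a_i = 0 for every i, so the product vanishes
            result *= factorial(ns) * lam ** (2 * ns)  # j! ways, j = ns = nt
        else:                     # identical double: case (3)
            result *= sum(comb(ns, i) * comb(nt, i) * factorial(i)
                          * lam ** (2 * i) for i in range(min(ns, nt) + 1))
    return result

# Matches the worked example below: at lam = 1 the Fig. 2 k-grams give
# (1)(2)(1)(2)(13) = 52 = 12 + 24 + 14 + 2.
print(k_bar(("abbccbb", "cbcbbcb"), ("abcccdd", "cbccdcd")))  # 52.0
```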
{ "text": "Alg. 1 shows how to compute K\u0304 k . # \u03a3\u00d7\u03a3 (.) is computed from the two k-gram pairs in lines 1-2. The final score is then obtained through an iterative calculation over the two count vectors (lines 4-8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "The key of Alg. 1 is the calculation of g e based on a^(e)_i (line 7). Here we use a^(e)_i to denote the number of possibilities for which i pairs of aligned wildcards can be generated from e in both \u03b1 D s and \u03b1 D t . a^(e)_i can be computed as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "(1) If e \u2208 N and # e (\u03b1 D s ) \u2260 # e (\u03b1 D t ), then a^(e)_i = 0 for any i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "(2) If e \u2208 N and # e (\u03b1 D s ) = # e (\u03b1 D t ) = j, then a^(e)_j = j! and a^(e)_i = 0 for i \u2260 j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "(3) If e \u2208 I, then a^(e)_i = C(# e (\u03b1 D s ), i) \u00b7 C(# e (\u03b1 D t ), i) \u00b7 i!, where C(n, i) denotes the binomial coefficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "We next explain the rationale behind the above computations. In (1), since # e (\u03b1 D s ) \u2260 # e (\u03b1 D t ), it is impossible to generate a re-writing rule in which all the occurrences of the non-identical double e are substituted by pairs of aligned wildcards. In (2), j pairs of aligned wildcards can be generated from all the occurrences of the non-identical double e in both \u03b1 D s and \u03b1 D t ; the number of combinations thus is j!. In (3), a pair of aligned wildcards can either be generated or not from a pair of identical doubles in \u03b1 D s and \u03b1 D t . We can select i occurrences of the identical double e from \u03b1 D s and i occurrences from \u03b1 D t , and generate all possible aligned wildcards from them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "In the loop of lines 4-8, we only need to consider a^(e)_i for 0 \u2264 i \u2264 min{# e (\u03b1 D s ), # e (\u03b1 D t )}, because a^(e)_i = 0 for the remaining i. To sum up, Eq. 7 can be computed as below, which is exactly the computation at lines 3-8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "g(# \u03a3\u00d7\u03a3 (\u03b1 D s ), # \u03a3\u00d7\u03a3 (\u03b1 D t )) = \u220f e\u2208\u03a3\u00d7\u03a3 ( \u2211_{i=0}^{n e} a^(e)_i \u03bb^2i ) (8), where n e = min{# e (\u03b1 D s ), # e (\u03b1 D t )}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "For the k-gram pairs in Fig. 2 , we first create the lists of doubles in Fig. 3 and compute # \u03a3\u00d7\u03a3 (\u2022) for them (lines 1-2 of Alg. 1), as shown in Fig. 5 . We next compute K\u0304 k from # \u03a3\u00d7\u03a3 (\u03b1 D s ) and # \u03a3\u00d7\u03a3 (\u03b1 D t ) in Fig. 5 (lines 3-8 of Alg. 1) and obtain K\u0304 k = (1)(1 + \u03bb^2)(\u03bb^2)(2\u03bb^4)(1 + 6\u03bb^2 + 6\u03bb^4) = 12\u03bb^12 + 24\u03bb^10 + 14\u03bb^8 + 2\u03bb^6 .", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 30, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 69, "end": 75, "text": "Fig. 3", "ref_id": null }, { "start": 142, "end": 148, "text": "Fig. 
5", "ref_id": null }, { "start": 213, "end": 230, "text": "Fig. 5 (lines 3-8", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "ComputingK k", "sec_num": "5.2.2" }, { "text": "Algorithm 2 shows how to compute K k . It prepares two maps m s and m t and two vectors of counters c s and c t . In m s and m t , each key # N (.) maps a set of values # \u03a3\u00d7\u03a3 (.). Counters c s and c t count the frequency of each # \u03a3\u00d7\u03a3 (.). Recall that # N (\u03b1 s 1 \u2297 \u03b1 s 2 ) denotes a vector whose element is # e (\u03b1 s 1 \u2297 \u03b1 s 2 ) for e \u2208 N. # \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297 \u03b1 s 2 ) denotes a vector whose element is # e (\u03b1 s 1 \u2297 \u03b1 s 2 ) where e is any possible double.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "One can easily verify the output of the algorithm is exactly the value of K k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "First,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "K k ((\u03b1 s 1 , \u03b1 t 1 ), (\u03b1 s 2 , \u03b1 t 2 )) = 0 if # N (\u03b1 s 1 \u2297 \u03b1 s 2 ) = # N (\u03b1 t 1 \u2297 \u03b1 t 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": ". Therefore, we only need to consider those \u03b1 s 1 \u2297 \u03b1 s 2 and \u03b1 t 1 \u2297 \u03b1 t 2 which have the same key (lines 10-13). We group the k-gram pairs by their key in lines 2-5 and lines 6-9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "Moreover, the following relation holds", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "K k ((\u03b1 s 1 , \u03b1 t 1 ), (\u03b1 s 2 , \u03b1 t 2 )) =K k ((\u03b1 s 1 , \u03b1 t 1 ), (\u03b1 s 2 , \u03b1 t 2 )) if # \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297\u03b1 s 2 ) = # \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297\u03b1 s 2 ) and # \u03a3\u00d7\u03a3 (\u03b1 t 1 \u2297 \u03b1 t 2 ) = # \u03a3\u00d7\u03a3 (\u03b1 t 1 \u2297 \u03b1 t 2 ), where \u03b1 s 1 , \u03b1 s 2 , \u03b1 t 1 , \u03b1 t 2 are", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "Algorithm 2: Computing K k Input: string pair (s 1 ,t 1 ) and (s 2 ,t 2 ), window size k Output:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "K k ((s 1 ,t 1 ), (s 2 ,t 2 ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "1 Initialize two maps m s and m t and two counters c s and c t ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "2 for each k-gram \u03b1 s 1 in s 1 do 3 for each k-gram \u03b1 s 2 in s 2 do 4 Update m s with key-value pair (# N (\u03b1 s 1 \u2297 \u03b1 s 2 ), # \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297 \u03b1 s 2 )); 5 c s [# \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297 \u03b1 s 2 )] + + ; 6 for each k-gram \u03b1 t 1 in t 1 do 7 for each k-gram \u03b1 t 2 in t 2 do 8 Update m t with key-value pair (# N (\u03b1 t 1 \u2297 \u03b1 t 2 ), # \u03a3\u00d7\u03a3 (\u03b1 t 1 \u2297 \u03b1 t 2 )); 9 c t [# \u03a3\u00d7\u03a3 (\u03b1 t 1 \u2297 \u03b1 t 2 )] + + ; 10 for each key \u2208 m s .keys \u2229 m t .keys do 11 
for each v s \u2208 m s [key] do 12 for each v t \u2208 m t [key] do 13 result+= c s [v s ]c t [v t ]g(v s , v t ) ;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "14 return result; other k-grams. Therefore, we only need to take # \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297 \u03b1 s 2 ) and # \u03a3\u00d7\u03a3 (\u03b1 t 1 \u2297 \u03b1 t 2 ) as the value under each key and count its frequency. That is to say, # \u03a3\u00d7\u03a3 provides sufficient statistics for computingK k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "The quantity g(v s , v t ) in line 13 is computed by Alg. 1 (lines 3-8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computing K k", "sec_num": "5.2.3" }, { "text": "The time complexities of Alg. 1 and Alg. 2 are shown below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "For Alg. 1, lines 1-2 can be executed in O(k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "The time for executing line 7 is less than # e (\u03b1 D s ) + # e (\u03b1 D t ) + 1 for each e satisfying # e (\u03b1 D s ) = 0 or # e (\u03b1 D t ) = 0 . Since \u2211 e\u2208\u03a3\u00d7\u03a3 # e (\u03b1 D s ) = \u2211 e\u2208\u03a3\u00d7\u03a3 # e (\u03b1 D t ) = k, the time for executing lines 3-8 is less than 4k, which results in the O(k) time complexity of Alg. 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "For Alg. 2, we denote n = max{|s 1 |, |s 2 |, |t 1 |, |t 2 |}. It is easy to see that if the maps and counters in the algorithm are implemented by hash maps, the time complexities of lines 2-5 and lines 6-9 are O(kn 2 ). However, analyzing the time complexity of lines 10- 13 is quite difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "Lemma 2 and Theorem 1 provide an upper bound of the number of times computing g(v s , v t ) in line 13, denoted as C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "Lemma 2. For \u03b1 s 1 \u2208k-grams(s 1 ) and \u03b1 s 2 , \u03b1 s 2 \u2208k- grams(s 2 ), we have # \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297 \u03b1 s 2 ) = # \u03a3\u00d7\u03a3 (\u03b1 s 1 \u2297 \u03b1 s 2 ) if # N (\u03b1 s 1 \u2297 \u03b1 s 2 ) = # N (\u03b1 s 1 \u2297 \u03b1 s 2 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "Theorem 1. C is O(n 3 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "By Lemma 2, each m s [key] contains at most n \u2212 k + 1 elements. Together with the fact that \u2211 key m s [key] = (n \u2212 k + 1) 2 , Theorem 1 is proved. It can be also proved that C is O(n 2 ) when k = 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "Empirical study shows that O(n 3 ) is a loose upper bound for C. Let n avg denote the average length of s 1 , t 1 , s 2 and t 2 . Our experiment on all pairs of sentences on MSR Paraphrase (Fig. 
6) shows that C is of the same order as n^2 avg in the worst case and that C/n^2 avg decreases with increasing k in both the average and the worst case, which indicates that C is O(n^2) and that the overall time complexity of Alg. 2 is O(kn^2).", "cite_spans": [], "ref_spans": [ { "start": 189, "end": 197, "text": "(Fig. 6)", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Time Complexity", "sec_num": "5.3" }, { "text": "We evaluated the performances of the three types of string re-writing kernels on paraphrase identification and recognizing textual entailment: the pairwise k-spectrum kernel (ps-SRK), the pairwise k-wildcard kernel (pw-SRK), and the k-gram bijective string re-writing kernel (kb-SRK). We set \u03bb = 1 for all kernels. The performances were measured by accuracy (i.e., the percentage of correct classifications).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "In both experiments, we used LIBSVM with default parameters (Chang et al., 2011) as the classifier. All the sentences in the training and test sets were segmented into words by the tokenizer at OpenNLP (Baldrige et al., ) . We further conducted stemming on the words with the Iveonik English Stemmer (http://www.iveonik.com/ ). We normalized each kernel by K\u0303(x, y) = K(x, y) / \u221a(K(x, x) K(y, y)) and then tried the kernels under different window sizes k. We also tried to combine the kernels with the two lexical features \"unigram precision and recall\" proposed in (Wan et al., 2006) , referred to as PR. For each kernel K, we tested the window size settings K 1 + ... + K k max (k max \u2208 {1, 2, 3, 4}) together with the combination with PR, and we report the best accuracies among them in Table 1 and Table 2.", "cite_spans": [ { "start": 60, "end": 80, "text": "(Chang et al., 2011)", "ref_id": "BIBREF5" }, { "start": 202, "end": 221, "text": "(Baldrige et al., )", "ref_id": null }, { "start": 158, "end": 176, "text": "(Wan et al., 2006)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, 
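The kernel normalization and the K_1 + ... + K_{k_max} combination just described can be sketched as follows (illustrative Python; `kernel_for_k` is a hypothetical factory returning the kernel function for window size k):

```python
import math

def normalize(kernel):
    """Wrap K into K~(x, y) = K(x, y) / sqrt(K(x, x) * K(y, y))."""
    def k_norm(x, y):
        denom = math.sqrt(kernel(x, x) * kernel(y, y))
        return kernel(x, y) / denom if denom > 0 else 0.0
    return k_norm

def summed_window_kernel(kernel_for_k, k_max):
    """K_1 + ... + K_{k_max}: sum of normalized kernels over window sizes."""
    kernels = [normalize(kernel_for_k(k)) for k in range(1, k_max + 1)]
    return lambda x, y: sum(kf(x, y) for kf in kernels)
```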
{ "text": "The task of paraphrase identification is to examine whether two sentences have the same meaning. We trained and tested all the methods on the MSR Paraphrase Corpus (Dolan and Brockett, 2005; Quirk et al., 2004) , which consists of 4,076 sentence pairs for training and 1,725 sentence pairs for testing.", "cite_spans": [ { "start": 164, "end": 190, "text": "(Dolan and Brockett, 2005;", "ref_id": "BIBREF8" }, { "start": 191, "end": 210, "text": "Quirk et al., 2004)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Paraphrase Identification", "sec_num": "6.1" }, { "text": "The experimental results on the different SRKs are shown in Table 1 . It can be seen that kb-SRK outperforms ps-SRK and pw-SRK. The results of the state-of-the-art methods reported in previous work are also included in Table 1 . kb-SRK outperforms the existing lexical approach (Zhang and Patrick, 2005) and kernel approach (Lintean and Rus, 2011) . It also works better than the other approaches listed in the table, which use syntactic trees or dependency relations. Fig. 7 gives detailed results of the kernels under different maximum k-gram lengths k max with and without PR. The results of ps-SRK and pw-SRK without PR are all below 71% for every k and are therefore not shown, for clarity.", "cite_spans": [ { "start": 274, "end": 299, "text": "(Zhang and Patrick, 2005)", "ref_id": "BIBREF27" }, { "start": 320, "end": 343, "text": "(Lintean and Rus, 2011)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 1", "ref_id": null }, { "start": 215, "end": 222, "text": "Table 1", "ref_id": null }, { "start": 465, "end": 471, "text": "Fig. 7", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Paraphrase Identification", "sec_num": "6.1" }, { "text": "Table 1. Results on paraphrase identification (Method / Acc.): Zhang and Patrick (2005) 71.9; Lintean and Rus (2011) 73.6; Heilman and Smith (2010) 73.2; Qiu et al. (2006) 72.0; Wan et al. (2006) 75.6; Das and Smith (2009) 73.9; Das and Smith (2009) [second entry; this accuracy and the rows for our methods were lost in extraction].", "cite_spans": [ { "start": 5, "end": 29, "text": "Zhang and Patrick (2005)", "ref_id": "BIBREF27" }, { "start": 35, "end": 57, "text": "Lintean and Rus (2011)", "ref_id": "BIBREF15" }, { "start": 63, "end": 87, "text": "Heilman and Smith (2010)", "ref_id": "BIBREF11" }, { "start": 93, "end": 110, "text": "Qiu et al. (2006)", "ref_id": "BIBREF21" }, { "start": 116, "end": 133, "text": "Wan et al. (2006)", "ref_id": "BIBREF25" }, { "start": 139, "end": 159, "text": "Das and Smith (2009)", "ref_id": "BIBREF6" }, { "start": 165, "end": 185, "text": "Das and Smith (2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "By comparing the results of kb-SRK and pw-SRK, we can see that the bijective property of kb-SRK is genuinely helpful for improving the performance (note that both methods use wildcards). Furthermore, the performances of kb-SRK with and without PR increase dramatically with increasing k max and reach their peaks (better than the state of the art) when k max is four, which shows the power of the lexical and structural similarity captured by kb-SRK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Recognizing textual entailment is to determine whether a sentence (sometimes a short paragraph) entails another sentence (Giampiccolo et al., 2007) . RTE-3 is a widely used benchmark dataset. Following common practice, we combined the development set of RTE-3 and the whole of the RTE-1 and RTE-2 datasets as training data and took the test set of RTE-3 as test data. The training and test sets contain 3,767 and 800 sentence pairs, respectively. The results are shown in Table 2 . Again, kb-SRK outperforms ps-SRK and pw-SRK. As indicated in (Heilman and Smith, 2010) , the top-performing RTE systems are often built with significant engineering efforts.", "cite_spans": [ { "start": 126, "end": 152, "text": "(Giampiccolo et al., 2007)", "ref_id": "BIBREF9" }, { "start": 531, "end": 556, "text": "(Heilman and Smith, 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 460, "end": 467, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Recognizing Textual Entailment", "sec_num": "6.2" }, { "text": "Table 2. Results on recognizing textual entailment (Method / Acc.): Harmeling (2007) 59.5; de Marneffe et al. (2006) 60.5; M&M (2007) (NL) 59.4; M&M (2007) (Hybrid) 64.3; Zanzotto et al. (2007) 65.75; Heilman and Smith (2010) 62.8; Our baseline (PR) 62.0; Our method (ps-SRK) 64.6; Our method (pw-SRK) 63.8; Our method (kb-SRK) 65.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Therefore, we compare only with the six systems that involve less engineering. kb-SRK still outperforms most of these state-of-the-art methods even though it does not exploit any external lexical semantic sources or syntactic analysis tools. Fig. 8 shows the results of the kernels under different parameter settings. 
Again, the results of ps-SRK and pw-SRK without combining PR are too low to be shown (all below 55%). We can see that PR is an effective method for this dataset and the overall performances are substantially improved after combining it with the kernels. The performance of kb-SRK reaches the peak when window size becomes two.", "cite_spans": [ { "start": 5, "end": 21, "text": "Harmeling (2007)", "ref_id": "BIBREF10" }, { "start": 30, "end": 52, "text": "Marneffe et al. (2006)", "ref_id": "BIBREF7" }, { "start": 63, "end": 69, "text": "(2007)", "ref_id": null }, { "start": 70, "end": 74, "text": "(NL)", "ref_id": null }, { "start": 85, "end": 100, "text": "(2007) (Hybrid)", "ref_id": null }, { "start": 106, "end": 128, "text": "Zanzotto et al. (2007)", "ref_id": "BIBREF26" }, { "start": 135, "end": 159, "text": "Heilman and Smith (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 518, "end": 524, "text": "Fig. 8", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In this paper, we have proposed a novel class of kernel functions for sentence re-writing, called string re-writing kernel (SRK). SRK measures the lexical and structural similarity between two pairs of sentences without using syntactic trees. The approach is theoretically sound and is flexible to formulations of sentences. A specific instance of SRK, referred to as kb-SRK, has been developed which can balance the effectiveness and efficiency for sentence re-writing. Experimental results show that kb-SRK achieve better results than state-of-the-art methods on paraphrase identification and recognizing textual entailment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "This work is supported by the National Basic Research Program (973 Program) No. 2012CB316301.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Learning to paraphrase: An unsupervised approach using multiple-sequence alignment", "authors": [ { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barzilay, R. and Lee, L. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. Proceedings of the 2003 Conference of the North American Chapter of the Association for Com- putational Linguistics on Human Language Technol- ogy, pp. 16-23.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unifying collaborative and content-based filtering", "authors": [ { "first": "J", "middle": [], "last": "Basilico", "suffix": "" }, { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the twenty-first international conference on Machine learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Basilico, J. and Hofmann, T. 2004. Unifying collab- orative and content-based filtering. Proceedings of the twenty-first international conference on Machine learning, pp. 
9, 2004.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Kernel methods for predicting protein-protein interactions", "authors": [ { "first": "A", "middle": [], "last": "Ben-Hur", "suffix": "" }, { "first": "W", "middle": [ "S" ], "last": "Noble", "suffix": "" } ], "year": 2005, "venue": "Bioinformatics", "volume": "21", "issue": "", "pages": "38--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben-Hur, A. and Noble, W.S. 2005. Kernel methods for predicting protein-protein interactions. Bioinformat- ics, vol. 21, pp. i38-i46, Oxford Univ Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Large scale acquisition of paraphrases for learning surface patterns", "authors": [ { "first": "R", "middle": [], "last": "Bhagat", "suffix": "" }, { "first": "D", "middle": [], "last": "Ravichandran", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "674--682", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bhagat, R. and Ravichandran, D. 2008. Large scale ac- quisition of paraphrases for learning surface patterns. Proceedings of ACL-08: HLT, pp. 674-682.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "LIBSVM: A library for support vector machines", "authors": [ { "first": "C", "middle": [], "last": "Chang", "suffix": "" }, { "first": "C", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C. and Lin, C. 2011. LIBSVM: A library for sup- port vector machines. ACM Transactions on Intelli- gent Systems and Technology vol. 2, issue 3, pp. 27:1- 27:27. Software available at http://www.csie. ntu.edu.tw/\u02dccjlin/libsvm", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Paraphrase identification as probabilistic quasi-synchronous recognition", "authors": [ { "first": "D", "middle": [], "last": "Das", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "468--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, D. and Smith, N.A. 2009. Paraphrase identifi- cation as probabilistic quasi-synchronous recognition. Proceedings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pp. 468-476.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning to distinguish valid textual entailments", "authors": [ { "first": "M", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "B", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "T", "middle": [], "last": "Grenager", "suffix": "" }, { "first": "D", "middle": [], "last": "Cer", "suffix": "" }, { "first": "A", "middle": [], "last": "Rafferty", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proc. of the Second PASCAL Challenges Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Marneffe, M., MacCartney, B., Grenager, T., Cer, D., Rafferty A. and Manning C.D. 2006. Learning to dis- tinguish valid textual entailments. Proc. 
of the Second PASCAL Challenges Workshop.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automatically constructing a corpus of sentential paraphrases", "authors": [ { "first": "W", "middle": [ "B" ], "last": "Dolan", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2005, "venue": "Proc. of IWP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dolan, W.B. and Brockett, C. 2005. Automatically con- structing a corpus of sentential paraphrases. Proc. of IWP.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The third pascal recognizing textual entailment challenge", "authors": [ { "first": "D", "middle": [], "last": "Giampiccolo", "suffix": "" }, { "first": "B", "middle": [], "last": "Magnini", "suffix": "" }, { "first": "I", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "B", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giampiccolo, D., Magnini B., Dagan I., and Dolan B., editors 2007. The third pascal recognizing textual en- tailment challenge. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 1-9.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An extensible probabilistic transformation-based approach to the third recognizing textual entailment challenge", "authors": [ { "first": "S", "middle": [], "last": "Harmeling", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing", "volume": "", "issue": "", "pages": "137--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harmeling, S. 2007. An extensible probabilistic transformation-based approach to the third recogniz- ing textual entailment challenge. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 137-142, 2007.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tree edit models for recognizing textual entailments, paraphrases, and answers to questions", "authors": [ { "first": "M", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1011--1019", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heilman, M. and Smith, N.A. 2010. Tree edit models for recognizing textual entailments, paraphrases, and an- swers to questions. Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguis- tics, pp. 1011-1019.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "On pairwise kernels: An efficient alternative and generalization analysis", "authors": [ { "first": "H", "middle": [], "last": "Kashima", "suffix": "" }, { "first": "S", "middle": [], "last": "Oyama", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yamanishi", "suffix": "" }, { "first": "K", "middle": [], "last": "Tsuda", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "1030--1037", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kashima, H. , Oyama, S. , Yamanishi, Y. and Tsuda, K. 2009. 
On pairwise kernels: An efficient alternative and generalization analysis. Advances in Knowledge Discovery and Data Mining, pp. 1030-1037. Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Some results on Tchebycheffian spline functions", "authors": [ { "first": "G", "middle": [], "last": "Kimeldorf", "suffix": "" }, { "first": "G", "middle": [], "last": "Wahba", "suffix": "" } ], "year": 1971, "venue": "Journal of Mathematical Analysis and Applications", "volume": "33", "issue": "1", "pages": "82--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimeldorf, G. and Wahba, G. 1971. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, vol. 33, no. 1, pp. 82-95. Elsevier.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "DIRT-discovery of inference rules from text", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "P", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Proc. of ACM SIGKDD Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, D. and Pantel, P. 2001. DIRT-discovery of inference rules from text. Proc. of ACM SIGKDD Conference on Knowledge Discovery and Data Mining.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Dissimilarity Kernels for Paraphrase Identification. Twenty-Fourth International FLAIRS Conference", "authors": [ { "first": "M", "middle": [], "last": "Lintean", "suffix": "" }, { "first": "V", "middle": [], "last": "Rus", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lintean, M. and Rus, V. 2011. Dissimilarity Kernels for Paraphrase Identification. Twenty-Fourth International FLAIRS Conference.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The spectrum kernel: a string kernel for SVM protein classification", "authors": [ { "first": "C", "middle": [], "last": "Leslie", "suffix": "" }, { "first": "E", "middle": [], "last": "Eskin", "suffix": "" }, { "first": "W", "middle": [ "S" ], "last": "Noble", "suffix": "" } ], "year": 2002, "venue": "Pacific symposium on biocomputing", "volume": "575", "issue": "", "pages": "564--575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leslie, C., Eskin, E. and Noble, W.S. 2002. The spectrum kernel: a string kernel for SVM protein classification. Pacific Symposium on Biocomputing, vol. 575, pp. 564-575, Hawaii, USA.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Fast string kernels using inexact matching for protein sequences", "authors": [ { "first": "C", "middle": [], "last": "Leslie", "suffix": "" }, { "first": "R", "middle": [], "last": "Kuang", "suffix": "" } ], "year": 2004, "venue": "The Journal of Machine Learning Research", "volume": "5", "issue": "", "pages": "1435--1455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leslie, C. and Kuang, R. 2004. Fast string kernels using inexact matching for protein sequences. The Journal of Machine Learning Research, vol. 5, pp.
1435-1455.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Text classification using string kernels", "authors": [ { "first": "H", "middle": [], "last": "Lodhi", "suffix": "" }, { "first": "C", "middle": [], "last": "Saunders", "suffix": "" }, { "first": "J", "middle": [], "last": "Shawe-Taylor", "suffix": "" }, { "first": "N", "middle": [], "last": "Cristianini", "suffix": "" }, { "first": "C", "middle": [], "last": "Watkins", "suffix": "" } ], "year": 2002, "venue": "The Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "419--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lodhi, H. , Saunders, C. , Shawe-Taylor, J. , Cristianini, N. and Watkins, C. 2002. Text classification using string kernels. The Journal of Machine Learning Re- search vol. 2, pp. 419-444.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Modeling semantic containment and exclusion in natural language inference", "authors": [ { "first": "B", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "521--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "MacCartney, B. and Manning, C.D. 2008. Modeling se- mantic containment and exclusion in natural language inference. Proceedings of the 22nd International Con- ference on Computational Linguistics, vol. 1, pp. 521- 528, 2008.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Fast and Effective Kernels for Relational Learning from Texts", "authors": [ { "first": "A", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "F", "middle": [ "M" ], "last": "Zanzotto", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 24th Annual International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moschitti, A. and Zanzotto, F.M. 2007. Fast and Effec- tive Kernels for Relational Learning from Texts. Pro- ceedings of the 24th Annual International Conference on Machine Learning, Corvallis, OR, USA, 2007.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Paraphrase recognition via dissimilarity significance classification", "authors": [ { "first": "L", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "M", "middle": [ "Y" ], "last": "Kan", "suffix": "" }, { "first": "T", "middle": [ "S" ], "last": "Chua", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "18--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiu, L. and Kan, M.Y. and Chua, T.S. 2006. Para- phrase recognition via dissimilarity significance clas- sification. Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pp. 18-26.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Monolingual machine translation for paraphrase generation", "authors": [ { "first": "C", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "C", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "W", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "142--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quirk, C. , Brockett, C. and Dolan, W. 2004. 
Monolingual machine translation for paraphrase generation. Proceedings of EMNLP 2004, pp. 142-149, Barcelona, Spain.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning with kernels: Support vector machines, regularization, optimization, and beyond", "authors": [ { "first": "B", "middle": [], "last": "Sch\u00f6lkopf", "suffix": "" }, { "first": "A", "middle": [ "J" ], "last": "Smola", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00f6lkopf, B. and Smola, A.J. 2002. Learning with kernels: Support vector machines, regularization, optimization, and beyond. The MIT Press, Cambridge, MA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The nature of statistical learning theory", "authors": [ { "first": "V", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vapnik, V.N. 2000. The nature of statistical learning theory. Springer-Verlag.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Using dependency-based features to take the \"Para-farce\" out of paraphrase", "authors": [ { "first": "S", "middle": [], "last": "Wan", "suffix": "" }, { "first": "M", "middle": [], "last": "Dras", "suffix": "" }, { "first": "R", "middle": [], "last": "Dale", "suffix": "" }, { "first": "C", "middle": [], "last": "Paris", "suffix": "" } ], "year": 2006, "venue": "Proc. of the Australasian Language Technology Workshop", "volume": "", "issue": "", "pages": "131--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wan, S., Dras, M., Dale, R. and Paris, C. 2006. Using dependency-based features to take the \"Para-farce\" out of paraphrase. Proc. of the Australasian Language Technology Workshop, pp. 131-138.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Shallow semantics in fast textual entailment rule learners", "authors": [ { "first": "F", "middle": [ "M" ], "last": "Zanzotto", "suffix": "" }, { "first": "M", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "A", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing", "volume": "", "issue": "", "pages": "72--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zanzotto, F.M., Pennacchiotti, M. and Moschitti, A. 2007. Shallow semantics in fast textual entailment rule learners. Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 72-77.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Paraphrase identification by text canonicalization", "authors": [ { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Patrick", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Australasian Language Technology Workshop", "volume": "", "issue": "", "pages": "160--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Y. and Patrick, J. 2005. Paraphrase identification by text canonicalization. Proceedings of the Australasian Language Technology Workshop, pp. 160-166.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Example of re-writing.
(A) is a re-writing rule and (B) is a re-writing of a sentence.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "The pairwise k-wildcard kernel (pw-SRK) K^pw_k is defined as the re-writing rule kernel under", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Example of two k-gram pairs. Example of the pair of double lists combined from the two k-gram pairs in Fig. 2: \u03b1^D_s = (a, a), (b, b), ( , ), (c, c), (c, c), ( , ), ( , ); \u03b1^D_t = (c, c), (b, b), (c, c), ( , ), ( , ), (c, c), ( , ). Non-identical doubles are in bold.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "is a bijective alignment} where \u03b2^D_s and \u03b2^D_t are lists of identical doubles including wildcards and with length k. We say rule r^D matches a pair of double lists", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "For re-writing rule (A) matching both k-gram pairs shown in Fig. 2, there is a corresponding re-writing rule for double lists (B) matching the pair of double lists shown in Fig. 3. Example of #_{\u03a3\u00d7\u03a3}(\u2022) for the two double lists shown in Fig. 3: #_{\u03a3\u00d7\u03a3}(\u03b1^D_s) = {(a, a): 1, (b, b): 1, ( , ): 1, ( , ): 2, (c, c): 2}; #_{\u03a3\u00d7\u03a3}(\u03b1^D_t) = {(a, a): 0, (b, b): 1, ( , ): 1, ( , ): 2, (c, c): 3}. Doubles not appearing in both \u03b1^D_s and \u03b1^D_t are not shown.", "uris": null, "type_str": "figure", "num": null }, "FIGREF5": { "text": "any i \u2260 j. (3) If e \u2208 I, then a(e)", "uris": null, "type_str": "figure", "num": null }, "FIGREF7": { "text": "Relation between the ratio C/n^2_avg and the window size k when running Alg. 2 on the MSR Paraphrase Corpus.", "uris": null, "type_str": "figure", "num": null }, "FIGREF8": { "text": "Performance of different kernels under different maximum window sizes k_max on MSRP.", "uris": null, "type_str": "figure", "num": null }, "FIGREF9": { "text": "Performance of different kernels under different maximum window sizes k_max on RTE-3.", "uris": null, "type_str": "figure", "num": null }, "TABREF2": { "html": null, "num": null, "content": "
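The figure fragments above (FIGREF1 through FIGREF4) sketch the machinery behind kb-SRK: two k-gram pairs are combined position-wise into a pair of double lists, and the kernel is then computed from counts of the resulting doubles rather than from explicit feature vectors. As a rough illustration only, here is a minimal Python sketch of that construction; the example k-grams, the '*' wildcard symbol, and all function names below are hypothetical stand-ins and are not taken from the paper.

```python
# Minimal illustrative sketch (not the authors' code) of the double-list
# construction described in the figure captions: two k-gram pairs are
# combined position-wise into double lists, and the multiset of doubles
# plays the role of the #_{Sigma x Sigma}(.) counts shown in Fig. 5.
from collections import Counter

def double_list(kgram_1, kgram_2):
    """Combine two equal-length k-grams into a list of doubles."""
    assert len(kgram_1) == len(kgram_2)
    return list(zip(kgram_1, kgram_2))

def double_counts(doubles):
    """Count occurrences of each double, mirroring #_{Sigma x Sigma}(.)."""
    return Counter(doubles)

# Hypothetical k-gram pairs over {a, b, c} with '*' as a wildcard
# (s = source side of the re-writing, t = target side):
alpha_s, alpha_t = "ab*cc**", "cbc**c*"
beta_s,  beta_t  = "ab*cc**", "cbc*cc*"

alpha_D_s = double_list(alpha_s, beta_s)  # all doubles identical here
alpha_D_t = double_list(alpha_t, beta_t)  # contains one non-identical double

print(double_counts(alpha_D_s))
print([d for d in alpha_D_t if d[0] != d[1]])  # the non-identical doubles
```

On this reading of the fragments, identical doubles are the ones a re-writing rule for double lists can match directly, while, as FIGREF3 suggests, wildcards must additionally form a bijective alignment; the Counter here only mirrors the counting notation and omits that matching constraint.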