|
{ |
|
"paper_id": "Y03-1029", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:34:42.124037Z" |
|
}, |
|
"title": "Efficient Methods for Multigram Compound Discovery", |
|
"authors": [ |
|
{ |
|
"first": "Wu", |
|
"middle": [ |
|
"Horng" |
|
], |
|
"last": "Jyh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Mustard Technology Pte Ltd Republic of Singapore", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Ng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Mustard Technology Pte Ltd Republic of Singapore", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ruibin", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National University", |
|
"location": { |
|
"country": "Singapore Republic of Singapore" |
|
} |
|
}, |
|
"email": "gongrb@pmail.ntu.edu.sg" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Multigram language model has become important in Speech Recognition, Natural Language Processing and Information Retrieval. An essential task in multigram language model is to establish a set of significant multigram compounds. In Yamamotto and Church (2001), an 0(NlogN) time complexity method based on Generalised Suffix Array (GSA) has been found, which computes the (term frequency) and df (document frequency) over 0(N) classes of substrings. The ff'and df form the essential statistics on which the metrics, such as MI (Mutual Information) and RIDF (Residual Inverse Document Frequency)', are based for multigram compound discovery. In this paper, it is shown that two related data structures to GSA, Generalised Suffix Tree (GST) and Generalised Directed Acyclic Word Graph (GDAWG) can afford even more efficient methods of multigram compound discovery than GSA. Namely, 0(N) algorithms for computing ff-and df have been found in GST and GDAWG. These data structures also exhibit a series of related, and desirable properties, including an 0(N) time complexity algorithm to classify 0(N2) substrings into 0(N) classes. An experiment based on 6 million bytes of text demonstrates that our theoretical analysis is consistent with the empirical results that can be observed.", |
|
"pdf_parse": { |
|
"paper_id": "Y03-1029", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Multigram language model has become important in Speech Recognition, Natural Language Processing and Information Retrieval. An essential task in multigram language model is to establish a set of significant multigram compounds. In Yamamotto and Church (2001), an 0(NlogN) time complexity method based on Generalised Suffix Array (GSA) has been found, which computes the (term frequency) and df (document frequency) over 0(N) classes of substrings. The ff'and df form the essential statistics on which the metrics, such as MI (Mutual Information) and RIDF (Residual Inverse Document Frequency)', are based for multigram compound discovery. In this paper, it is shown that two related data structures to GSA, Generalised Suffix Tree (GST) and Generalised Directed Acyclic Word Graph (GDAWG) can afford even more efficient methods of multigram compound discovery than GSA. Namely, 0(N) algorithms for computing ff-and df have been found in GST and GDAWG. These data structures also exhibit a series of related, and desirable properties, including an 0(N) time complexity algorithm to classify 0(N2) substrings into 0(N) classes. An experiment based on 6 million bytes of text demonstrates that our theoretical analysis is consistent with the empirical results that can be observed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Multigram language model has become important in Speech Recognition (SR), Natural Language Processing (NLP) and Information Retrieval (IR) as demonstrated in Siu and Ostemdorf (2000) , Peng and Schuurmans (2002) , and Chien (1999) . It has also been used in evaluating NLP applications such as automatic Machine Translation and Text Summarization (Panineni, etc., 2002; Lin and Hovy, 2003) . For a corpus of length N, the computing cost of a na\u00efve algorithm for the frequencies over all substrings is at least 0(N2). In Yamamoto and Church (2001) , an efficient method is given for computing the term frequency (/) and document frequency (di), as well as the Mutual Information (MI) and Residual Inverse Document Frequency (RIDF), for all substrings based on Generalized Suffix Array (GSA). The method groups all N(N+ 1)/2 substrings into up to 2N-I equivalence classes, and in this way, the computation is reduced to a manageable computation over these classes, that is, 0(NlogN) time and 0(N) space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 182, |
|
"text": "Siu and Ostemdorf (2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 211, |
|
"text": "Peng and Schuurmans (2002)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 230, |
|
"text": "Chien (1999)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 369, |
|
"text": "(Panineni, etc., 2002;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 389, |
|
"text": "Lin and Hovy, 2003)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 546, |
|
"text": "Yamamoto and Church (2001)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
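{

"text": "To make the baseline concrete, the following is a minimal Python sketch (an illustration under our own naming, not code from any of the cited systems) of the naive approach whose cost the class-based methods avoid: it enumerates every substring directly, which is at least O(N^2) in time and space for a corpus of total length N.\n\nfrom collections import Counter\n\ndef naive_tf(texts):\n    # Count the term frequency of every substring directly.\n    # A corpus of total length N has N(N+1)/2 substrings, so this\n    # baseline costs at least O(N^2); the equivalence-class methods\n    # discussed in this paper avoid ever materialising them.\n    tf = Counter()\n    for t in texts:\n        for i in range(len(t)):\n            for j in range(i + 1, len(t) + 1):\n                tf[t[i:j]] += 1\n    return tf",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},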
|
{ |
|
"text": "It is natural to compare Generalised Suffix Tree (GST) and Generalised DAWG (GDAWG) with GSA since they all can be viewed as compact representations of suffix tries. Moreover, the construction complexities of GST and GDAWG are 0(N), while that of GSA is 0(NlogN). This raises the question:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "MI and RIDF by Yamamoto and Church (2001) ((a) 2 (a) r (xF) (Yz ) df RIDF (x) -log -+ log(1e ) N r (Y) Where x and z are tokens, Y and xYz are ngrams (sequences of tokens).", |
|
"cite_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 41, |
|
"text": "Yamamoto and Church (2001)", |
|
"ref_id": "BIBREF8" |
|
}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Are GST and GDAWG the same or more efficient data structures than GSA for multigram compound Discovery?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Crochemore and Rytter (1994) , a set of properties has been identified such that a data structure D is said to be good if:", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 31, |
|
"text": "Crochemore and Rytter (1994)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(Property A) D has linear size. (Property B) D can be constructed in linear time. (Property C) D allows computing FACTORIN(x, text) in 0(1x1) time. Although the above properties are desired for multigram compound discovery, additional properties are required to provide a more precise assessment. Two important basic statistics: #-and df are important. The frequency of a substring in a collection of strings is called the term frequency (or /) , and that of a substring occurred among different strings in the collection is called the document frequency (or dfi. In this paper, the following properties are identified, in addition to Properties A -C, to assess D: let Nbe the size of a set of strings TEXT:", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 131, |
|
"text": "FACTORIN(x, text)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 444, |
|
"text": "(or /)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 555, |
|
"end": 563, |
|
"text": "(or dfi.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(Property D) D allows #-(tenn frequency) and df(document frequency) to be computed in 0(N) time. (Property E) D allows classifying O(N2) multigrams into 0(N) classes with the same tf in 0(N) time. (Property F) D allows, Residual Inverse Document Frequency (RIDF) and Mutual Information (MI) to be computed in 0(N) time. It is self-evident that Property D is a desirable property. Property E reduces the lower bound of Mutual Information computation from 0(N2) to 0(N). Property F represents the ultimate potential for an efficient multigram term discovery algorithm. It is also noted that Properties D, E and F, represent an increasingly tighter criteria; that is, if the earlier, less stringent property is not satisfied, it is impossible for the latter, more stringent property to be satisfied.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper proposes two new multigram term discovery algorithms based on GST and GDAWG and proves that they fulfil Property A-E, while the GSA-based method does not satisfy any of the above desirable properties except Property A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Multigram Compound Discovery Methods Based on Generalized Suffix Tree The fact that Suffix Tree has linear size and can be constructed in linear time is well documented in the literature (Ulckonen, 1995) . It is also known that FACTORIN(x, text) can be computed in 0(1x1) time. The algorithm is simply to traverse the Suffix Tree from the root by consuming the string x character by character. If the traversal can be completed for the entire string, then the answer to FACTORIN(x, text) is yes; otherwise, the answer is no. The time taken to decide FACTORIN(x, text) is thus, 0(14). A A GST is an extension to Suffix Tree over the a set of strings, texts, i = 1, n. For the convenience of the discussion, it is assumed that these texts are sorted in alphabetic order. In the following algorithm, we adopt the notion of (Ukonnen, 1995) and describe the algorithm to construct the Generalised Stuffix Tree (GST).", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 203, |
|
"text": "(Ulckonen, 1995)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
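{

"text": "As a minimal sketch of the membership test just described (assuming a suffix trie represented as nested dictionaries, a simplification of the edge-labelled GST), FACTORIN(x, text) is a plain root-downward walk that consumes one character of x per step, hence O(|x|) time:\n\ndef factor_in(x, root):\n    # root is a trie node: a dict mapping a character to a child node.\n    # Walk from the root, consuming x character by character; if the\n    # walk completes, x is a substring of the indexed text.\n    node = root\n    for ch in x:\n        if ch not in node:\n            return False\n        node = node[ch]\n    return True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "2",

"sec_num": null

},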
|
{ |
|
"text": "GST Construction procedure(texti, ..., text.) { Construct the Generalised Suffix Tree of text,/ , GST (text'); For i 2 ... n do insert (texts, GST (text', texti .1));} insert 4--function (text, GST (text', ..., text~') (texts, GST (text', ..., Where findPrefix will traverse the longest possible prefix of text; contained in GST (texts, text,w) and return the canonical reference pair (ss, ki) for that prefix; procedures update and canonize are the same as those defined in (Ukonnen, 1995) .", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 475, |
|
"end": 490, |
|
"text": "(Ukonnen, 1995)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
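{

"text": "For illustration only, here is a naive quadratic-time stand-in for the linear-time Ukkonen-style construction above (the names and the nested-dictionary trie representation are ours): it inserts every suffix of every string character by character and records the occurrence pairs x.y used in the sequel.\n\ndef build_gst_trie(texts):\n    # Naive generalised suffix trie over a set of strings. This shows\n    # the shape of the structure, not the O(N) construction: every\n    # suffix of every string is inserted explicitly.\n    root = {}\n    for sid, t in enumerate(texts, start=1):\n        for start in range(len(t)):\n            node = root\n            for ch in t[start:] + '#':\n                node = node.setdefault(ch, {})\n            # record the occurrence pair sid.start at the terminating leaf\n            node.setdefault('$occ', []).append((sid, start + 1))\n    return root",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "2",

"sec_num": null

},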
|
|
{ |
|
"text": "The Generalised Suffix Tree as constructed above retains all of the above properties, namely properties A, B, and C. This can be observed quite clearly by the fact that GST is but a union of all the automatas that individually satisfy properties A, B and C.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The GST for two alphabetically sorted strings (1) \"cacacao\" and (2) \"cacao\" is demonstrated in the following Figure 2 .1. Figure 2 .1: A GST for strings: (1) \"cacacao\" (2) \"cacao,\" GST(\"cacacao\", \"cacao\"). Each suffix is associated with a occurrence pair x.y. Fore example, the occurrence pair of \"cacacao\" is 1.1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 117, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 130, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Assuming background understanding of a tree data structure, a few relevant concepts of Generalized Suffix Tree (GST) are recalled in Figure 2 .1: the \"root node,\" denoted as root, is colored in gray. The white nodes are the \"internal,\" or \"branching,\" nodes; s is one such node in Figure 2 .1. The leaf nodes are demonstrated as the black nodes, of which t is an instance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 141, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 289, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A GST tree can also be viewed as an automata where the nodes are the states and the \"labelled\" edges the acceptable input strings. In the following, when the properties are discussed in the , the duality is assumed between a node/state, n, of a tree and a string/prefix 1 that satisfy n = (root, 1). For example, instead of saying t is reachable from s, one may say \"acao\" is reachable from s since t = (root, \"acao\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As demonstrated in Figure 2 .1, each suffix suffix, is associated with an occurrence pair x.y, where x, called the x dimension of the occurrence pair, which is the alphabetic order of the string text1 of which suffix is a suffix; and y, called they dimension of the occurrence pair, is the starting position of suffix; in text/. In Figure 2 .1, the suffix \"acao\" that terminates at a leaf node t has two occurrence pairs 1.4 and 2.2, which specify that \"acao\" starts at the 4 th and Vid positions of the and rd strings of the GST(\"cacacao, \"cacao\"), respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 27, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 340, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In some cases, a branching node can also be a leaf node. For example, in a GST that contains one string \"caca\", the node v = (root, \"ca\") is both an internal node as well as a leaf node, indexed at 1.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We further define suffix index of a suffix as its order in a alphabetically sorted sequence of the suffixes of all the strings in the GST. As demonstrated in Figure 2 .2, the suffix \"acacao\" has the order index of 1 as it is the 1' suffix among all suffixes in the GST(\"cacacao\", \"cacao\"). Lemma la: The f and dfof a suffix, which terminates at a leaf node, of a GST are equal to the numbers of the suffix's occurrence pairs and the distinct x-dimension integers of its occurrence pairs, respectively. The proof of lemma is self-evident. For example, the suffix \"acao\" has a fof 2 and dfof2, since the suffix has 2 occurrence pairs -1.4 and 2.2, and 2 distinct x-dimension integers of the set of occurrence pairs, namely 1 and 2, respectively. 5 Figure 2 .2: The seven distinct suffixes of GST(\"cacacao\", \"cacao\") form a suffix index. A left-open edge <w, v] of the GST has a upper node w and lower node v. The substring \"aca\" that terminates in <w, v] has a counts of 3;2, whose occurrences are underlined in each of the strings in the GST. The domination range of each of the node is demonstrated in the pair [x,y] . Particularly, v has a domination range of [1,2] and w, [1, 3] Lemma lb: Given the set of occurrence pairs associated with the suffixes that are reachable from an (internal) node v, the fa.nd df of the substring that terminates at v are equal to the rank of the set of occurrence pairs and the distinct x-dimension integers of the set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1111, |
|
"end": 1116, |
|
"text": "[x,y]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1174, |
|
"end": 1177, |
|
"text": "[1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1178, |
|
"end": 1180, |
|
"text": "3]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 166, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 754, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Proof: The substring 1 that terminates at an internal node is a prefix of all the suffixes mi that are reachable from the node; that is, each mi = 1.n, Recall that #'ofmi is equal to the frequency of mi in the GST. Since each occurrence of m i will imply an occurrence of 1, ff(1), is equal to summation of (itti) for all i, that is equal to the rank of the set of occurrence pairs of all m i. Similarly, one can arrive that for dfil).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
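{

"text": "Lemmas 1a and 1b reduce to a two-line computation once the occurrence pairs reachable from a node are known; a sketch in Python (our own naming):\n\ndef tf_df_from_occurrences(occ_pairs):\n    # tf is the number of occurrence pairs reachable from the node;\n    # df is the number of distinct x dimensions (distinct source strings).\n    tf = len(occ_pairs)\n    df = len({x for (x, y) in occ_pairs})\n    return tf, df\n\n# The suffix 'acao' in GST('cacacao', 'cacao') has the occurrence\n# pairs 1.4 and 2.2, giving tf = 2 and df = 2, as in the text:\nprint(tf_df_from_occurrences([(1, 4), (2, 2)]))  # (2, 2)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Term frequency (tf) and document frequency (df) of a domination range class",

"sec_num": "2.1"

},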
|
{ |
|
"text": "We define an left-open edge, <w,v] of a GST as the edge that contains nodes between the upper node Lemma lc: Each substring that terminates in a node between a left-open edge <w, v] has the same tf and df.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Proof: Recall the property of a suffix tree: all substrings that terminate in an left-open edge of a GST, except the lower node, terminate at an implicit node, which does not branch. Thus, the suffixes reachable from these nodes are the same as the lower node. By Lemma lb, it can be concluded that the tfand df of the substrings, which terminate in the implicit nodes, are the same as those of the substring that terminates at the lower node.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "As demonstrated in Figure 2 .2, the substring \"ac\" terminates at an internal node (coloured grey).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 27, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Given that the ,df of \"aca,\" the lower node of the edge <w, v>, is 3;2, the',dfof\"ac,\" is 3;2 as well. In fact, since \"ac\" is a prefix of \"aca,\" it can be shown in each of the underlined occurrences of \"aca,\" there is an occurrence of \"ac,\" which in consistent with Lemma lc. The domination range of a left-open edge in a GST is defined as a pair of suffix indices, [x, y] , where x is the minimum suffix index of those suffixes that the lower node of the left-open edge dominates, while y is the maximum. For example, in Figure 2 .3, it is demonstrated the domination range of the node v is [1, 2] , this is because the subtree dominated by v has two leaf nodes whose suffix indices are 1 and 2, respectively. It is noted that left-open edges associated with all leaf nodes has a trivial domination range where the two suffix indices in the domination range are the same, such as [1,1] , [2,2], ..., and [7, 7] . Each domination range also has a representative, which is the longest substring that terminates at the lower node of the edge. These are demonstrated in Figure 2 .3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 372, |
|
"text": "[x, y]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 595, |
|
"text": "[1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 598, |
|
"text": "2]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 908, |
|
"text": "[7,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 909, |
|
"end": 911, |
|
"text": "7]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 530, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1067, |
|
"end": 1075, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Theorem 1: The classes of distinct domination ranges of a GST form a partition of all substrings of the GST, where each substring in a domination range class has the same tfand df.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Proof: Proof of the latter part of Theorem 1 follows from Lemma lc; the former part follows from the fact that all substrings terminate in one and only one left-open edge that defines one domination range.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Corollary 1: There are 0(N) of distinct domination ranges of a GST and it takes 0(N) time to classify all of the substrings according to its domination range.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Proof: The fact that there are 0(N) number of left-open edges in a GST proves the first part of Corollary 1. The second part follows by the fact that there exists 0(N) algorithms to construct the GST and once the construction of a GST is finished the edge and the partition based on domination range is completed at the same with the edges constructed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The discussion in section 2.1 explain Property E can be achieved by defining domination range which classify the substrings in a GST into 0(N) classes in 0(N) time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term frequency (tf) and document frequency (df) of a domination range class", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The counting of tf and df is performed at the end of each insertion step of a string textk in constructing GST(text1, text2 ..., text.) . It performs a bottom-up traversal of the boundary path, the path followed by the suffix links starting at the longest suffix of textk. It also keeps a stack, storing the parents of leaf nodes in the boundary path and for checking which category the nodes of concern belong to: among pure leaf nodes, pure internal nodes or leaf-cum-internal nodes. The above algorithm can be completed in 0(N) + 0(sizeof(stack)) time. Since the sizeof(stack) is proportional to the number of internal nodes of a Suffix Tree, it is known to be 0(N). Thus the above algorithm to update g' and df will take 0(N) time altogether.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 117, |
|
"text": "GST(text1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 128, |
|
"text": "text2 ...,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 129, |
|
"end": 135, |
|
"text": "text.)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm for counting tf and df", |
|
"sec_num": "2.2" |
|
}, |
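{

"text": "The following is a simplified post-order sketch of the counting (assuming the nested-dictionary trie with '$occ' occurrence lists from the earlier sketch; the paper's own algorithm instead updates counts incrementally along the boundary path during construction to stay within O(N)):\n\ndef aggregate_tf_df(node):\n    # Post-order aggregation: a node's tf is the number of occurrence\n    # pairs in its subtree, and its df is the number of distinct source\n    # strings among them. Propagating explicit id sets, as done here\n    # for clarity, is not O(N) overall; the stack-based boundary-path\n    # version described above avoids that overhead.\n    occ = node.get('$occ', [])\n    tf = len(occ)\n    ids = {x for (x, y) in occ}\n    for key, child in node.items():\n        if key == '$occ':\n            continue\n        child_tf, child_ids = aggregate_tf_df(child)\n        tf += child_tf\n        ids |= child_ids\n    node['$tf'], node['$df'] = tf, len(ids)\n    return tf, ids",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm for counting tf and df",

"sec_num": "2.2"

},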
|
{ |
|
"text": "The proof of Property F for GST is achieved by considering the following formulae: given a substring w = xyz, where x, xy, xyz are the longest substrings in their respective classes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computation of MI and RIDF for multigram compound discovery", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": ") ( I (w) X (Y) RID F (w) -log df (w) + log(1 e Mw = log2 (x31)x*(i'z)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computation of MI and RIDF for multigram compound discovery", |
|
"sec_num": "2.3" |
|
}, |
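{

"text": "As a sketch (assuming the formulae as reconstructed above, with D the number of documents), both metrics are constant-time arithmetic once the class representatives' counts are known:\n\nimport math\n\ndef ridf(tf, df, D):\n    # Residual IDF: observed IDF minus the IDF predicted for a term\n    # whose occurrences follow a Poisson model (Yamamoto and Church, 2001).\n    return -math.log(df / D) + math.log(1 - math.exp(-tf / D))\n\ndef mi(tf_w, tf_y, tf_xy, tf_yz):\n    # MI of w = xyz from the tf counts of w, y, xy and yz.\n    return math.log2((tf_w * tf_y) / (tf_xy * tf_yz))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Computation of MI and RIDF for multigram compound discovery",

"sec_num": "2.3"

},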
|
{ |
|
"text": "The above formulae can be computed in constant time, since each of the involved in the formula can be accessed from root by traversing one of the substrings: w, y, xy and yz. Since there are 0(N) classes of a substring like w, the computation of MI and RIDF is achieved in 0(N) time. This concludes the description of a proof to Property E. Figure 3 .1: GDAWG for strings \"cacao\" and \"cacacao\". Figure 3 .1 shows the GDAWG constructed using the strings \"cacao\" and \"cacacao\" . In this Section 3.1, we provide a detailed analysis of the algorithm that constructs a GDAWG using a set of input strings S. In Section 3.2, we describe how we calculate term (0 and document frequencies (df) in linear time. Term frequency is the number of times where a substring occurs in a corpus. Document frequency is the number of unique strings where a substring occurs. They are required in multigram compound discovery. However, in order to obtain a frequency counting algorithm that runs in linear time, we update the ff'and dfin each state as the GDAWG is being constructed. This update is based on recurring prefixes of substrings in S. In addition, we store the last string identity (SID) in each state to denote the last string with which the state's dfis updated. This is to aid the computation of df s.W e show the steps for these updating in Section 3.1, together with the algorithm for constructing GDAWG.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 349, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 403, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Computation of MI and RIDF for multigram compound discovery", |
|
"sec_num": "2.3" |
|
}, |
|
|
{ |
|
"text": "We describe the differences between our algorithm and Algorithm A of Blumer et al. (1985) in the following sub-sections.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 89, |
|
"text": "Blumer et al. (1985)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm for Constructing GDAWG", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The current sink is reset to be the source of the GDAWG when a new string in S is about to be processed. The new builddawg algorithm that takes S= {so, sh s2,..., sn_i} as input is presented below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Resetting Current Sink to Source", |
|
"sec_num": "3.1.1" |
|
}, |
|
{ |
|
"text": "builddawg 4-functions(S){ Create a state named source and let currentsink be source. for sj <-so, si, s2,..., s\"..4 do Let currentsink be source; For each symbol a of si do currentsink f-update(currentsink, a); Return source.} Figure 3 .2 (a) to (b) gives a snapshot of resetting the currentsink to source.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 115, |
|
"text": "<-so, si, s2,..., s\"..4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 235, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Resetting Current Sink to Source", |
|
"sec_num": "3.1.1" |
|
}, |
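{

"text": "A skeleton of this loop in Python (illustrative; the state fields and the signature of update are assumptions based on Section 3.1.2, and update itself, the elaborate part from Algorithm A of Blumer et al. (1985) with the extensions described below, is passed in rather than reimplemented):\n\ndef builddawg(S, update):\n    # The only change relative to Algorithm A is that currentsink is\n    # reset to source before each new string of S is scanned.\n    source = {'edges': {}, 'suffix': None, 'tf': 0, 'df': 0, 'sid': -1}\n    for j, s in enumerate(S):\n        currentsink = source  # reset for string s_j\n        for a in s:\n            # update is assumed to maintain tf, df and SID as in 3.1.2\n            currentsink = update(currentsink, a, j)\n    return source",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Resetting Current Sink to Source",

"sec_num": "3.1.1"

},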
|
{ |
|
"text": "Assume the symbol currently being scanned is a. We need to check the current sink whether there is already an outgoing edge labelled a before we create a state named new-sink. If an outgoing edge labelled a has been created previously in the current sink, further processing would depend on whether the edge is a secondary edge. In this case, the next state where this edge leads to must be split using the same splitting function presented in Algorithm A of Blumer et al. (1985) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 459, |
|
"end": 479, |
|
"text": "Blumer et al. (1985)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Check for Existing Outgoing Edge and Update Frequencies", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "If an outgoing edge labelled a has been created previously in the current sink and it is a primary edge, assume s' is the next state of the primary edge, we increase tf ,z)f s' by 1. This implies that the current symbol has contributed one more ffscount to the strings represented by s'. For the df count in s', we increase it by 1 and update the SID of s' to j. However, we do this only if SID of s' is less than j. This is because that the current string contributes to one more df count to those strings represented by s'. Recall that in the bui lddawg function, j is the index of the current string being processed. After this, assume w is the prefix of sj processed thus far, we loop through the states containing the successively shorter suffixes of w to increase the df s by 1 and update the SID's to j. This loop terminated when we fmd a state with SID equals to j. The above processes can be seen in Figure 3 .2 (b) to (c). Figure 3 .2 (b) shows the GDAWG after \"cacao\" has been scanned. Figure 3 .2 (c) shows the GDAWG after \"caca\" has been added. Notice that f s of states 1 to 4 has been increased by 1 in Figure 3 .2 (c). Their df s are also increased by 1 since their SID's are all less than j = 1, which represents the second string. After that, their SID's are updated to be the current value/ If an outgoing edge labelled a has not been created previously, we follow the steps in the update function of Algorithm A (Blumer et al., 1985) . After that, we initialize the f and SID of the newly created newsink to 0 and j respectively. In addition, we increase the ff'of the suffix state by 1 if the suffix state is not the source, and the edge that is followed to reach the suffix state is primary, and there are currently two suffix pointers pointing to it, one of which is added recently. As mentioned in the beginning of this paragraph, the initial ff-count is implicitly represented in each state during the state creation. This implicit count can be stored in the ff'of the state either now or during the final update of f s. We choose to make it explicit now so that for states with more than one child states, we simply sum up the fs of the children plus any additional ff'counts contributed by recurring strings during the final update of f s. This process is shown in Figure 3 .3. Note that #. of state 1 has been increased by one. Following that, we set the SID of the suffix state to j if the STD of the suffix state is less than j. This implies that the dfcount represented in newsink will contribute to one dfcount to the suffix state through the suffix pointer (Figure 3 .3 -state 5 will contribute one df count to state 1 during our algorithm presented in Section 3.2). If the SID of the suffix state is j, and there are currently more than one suffix pointers pointing to it, we decrease df of the suffix state by one. This implies that the df contributed by sj has already been taken cared of by the other suffix pointer. This is shown in Error! Reference source not found. (a) to (b). Note that SID of source in (a) is j = 0. In (b), df of source is decreased by one because there are two suffix pointers that contribute to the dfcount of sj. One from state 1 and the other one from state 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1432, |
|
"end": 1453, |
|
"text": "(Blumer et al., 1985)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 909, |
|
"end": 917, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 933, |
|
"end": 941, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 997, |
|
"end": 1005, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1118, |
|
"end": 1126, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2292, |
|
"end": 2300, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2590, |
|
"end": 2599, |
|
"text": "(Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Check for Existing Outgoing Edge and Update Frequencies", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "The update algorithm that takes currentsink and a as inputs is presented below. Figure 3 .3: Process of scanning 'a' after {aab, b}. In Figure 3 .4: Process of adding s i=\"b\" to GDAWGab state 1, Os increased by 1 and SID is set to Blumer et al. (1985) except the following.) #newsink 4--0; SID newsink (from sj); When the currentstate has a primary outgoing edge labelled a while traversing the successively shorter suffixes, set edgetype to true; if suffixstate is not source and edgetype is true, and there are currently 2 suffix pointers pointing to it, do ffsuifirsk,\" #s sur tate + 1; if SID <I, do SID,,dixstate -j; else if there are more than 1 suffix pointers pointing at the sulfixstate, do Reduce dfseixskne by 1; Return newsink;}", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 251, |
|
"text": "Blumer et al. (1985)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 88, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 144, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(a)GDAWG, (b)GDAWG,", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As in update function, we need to update the frequencies during a split operation. After the split operation as presented in Algorithm A of Blumer et al. (1985) is performed, we increase dfof suffix state of new child state by one and set its SID to j if the SID is less than j, i.e.,", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 160, |
|
"text": "Blumer et al. (1985)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Update Document Frequencies during a Split", |
|
"sec_num": "3.13" |
|
}, |
|
{ |
|
"text": "Let sujfixstate be the suffix state of the newchildstate. if SID suirixstate < j, do df = dfs\"r\",s,\"fe + 1;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Update Document Frequencies during a Split", |
|
"sec_num": "3.13" |
|
}, |
|
{ |
|
"text": "SID suffixstate f;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Update Document Frequencies during a Split", |
|
"sec_num": "3.13" |
|
}, |
|
{ |
|
"text": "During a split operation, the suffix pointer pointing from child state to the suffix state is changed so that it points from new child state to the same suffix state. Thus, the above update signifies that the strings represented by the new child state contribute to one more df count to the suffix state. This is shown in Figure 3 .4. Note that SIDsource is 0 in (a). So, dfsolure is increased by 1 and SIDsource is set to j = 1. In addition, the new child state will be the suffix state of the child state. Thus, we increase the df count at the new child state by one if the SID of the child state is less than j. This implies that, in addition to the df count contributed by the child state, the current string si contributes to one df count at the new child state too, i.e.,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 330, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Update Document Frequencies during a Split", |
|
"sec_num": "3.13" |
|
}, |
|
{ |
|
"text": "if This is shown in Figure 3 .4 (b). Note that state 4 is the new suffix state of state 2 and dfstate4 is increased by one from the initial value of zero.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 28, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Update Document Frequencies during a Split", |
|
"sec_num": "3.13" |
|
}, |
|
{ |
|
"text": "The extra for loop in our builddawg function is simply used to loop through all the strings in S. Thus, it does not create more complexity to the original DAWG construction algorithm. The only extra loop is in our update function. It is used to update the SID's of the successive suffix states of newsink and increase their df s by following the suffix link that begins from the newsink.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinatorial Analysis of the Algorithm for Constructing GDAWG", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "Our corpus for multigram compound discovery contains 146,844 strings and 5,863,591 symbols. The minimum, average and maximum string lengths are 3, 39.93 and 138 symbols respectively. Due to this, we think the extra loop will not increase the complexity of the algorithm. This is supported by our experiment results where the shortest, average and longest suffix link following during the GDAWG construction are 0, 5.39 and 24 respectively. In addition, the time grows linearly with our corpus size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinatorial Analysis of the Algorithm for Constructing GDAWG", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "Thus, our algorithm to construct a GDAWG based on S is online in linear space and time, and the resulting GDAWG allows the computation of FACTORIN(x, TEX7) in 0(1x1) time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinatorial Analysis of the Algorithm for Constructing GDAWG", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "3.2 Final Update of Term and Document Frequencies After the processing described in Section 3.1, y's in the GDAWG represent the counts contributed by the recurring prefixes in the corpus and non-unique first symbols; and df s represent the offsets that should be added to the number suffix pointers pointing to it in order to compute the correct final df s. To compute the final ff'and df s, we do a depth-first traversal on the GDAWG. During the traversal, tfand df at the leave nodes are increased by one in order to count the initial occurrence of the strings implicitly represented by these leave nodes. For non-leave nodes, the final g' and df are simply the addition of the original counts and the Ys and df contributed by the child states. In addition, for states with only one child, we need to increase its tfcount by one in order to take care of the initial string occurrence that causes the creation of the state.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinatorial Analysis of the Algorithm for Constructing GDAWG", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "The updateFreq function that takes in the source of the GDAWG is presented below. updateFreq 4-function(source of GDAWG) if the state is a leave node, do Increase tf of the state by 1; Increase df of the state by 1;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinatorial Analysis of the Algorithm for Constructing GDAWG", |
|
"sec_num": "3.1.4" |
|
}, |
|
{ |
|
"text": "Return , else (the state is not a leave node) for each child of state, do updateFreq(child); df df + 1; if the state has only one child state, do 1; Return #',} As shown above, the algorithm to perform the final update of tfand df s is linear.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combinatorial Analysis of the Algorithm for Constructing GDAWG", |
|
"sec_num": "3.1.4" |
|
}, |
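{

"text": "A sketch of this final traversal (our own naming; it assumes states are dictionaries carrying the 'tf' and 'df' counts accumulated during construction, and children(state) yields the child states):\n\ndef update_freq(state, children):\n    # Depth-first final update. Leaves get +1 on tf and df for the\n    # initial occurrence they implicitly represent; internal states add\n    # their children's counts, and a single-child state gets one extra\n    # tf for the occurrence that created it. In a GDAWG, states are\n    # shared, so a real implementation would memoise visited states;\n    # that bookkeeping is omitted here for brevity.\n    kids = list(children(state))\n    if not kids:\n        state['tf'] += 1\n        state['df'] += 1\n        return state['tf'], state['df']\n    for child in kids:\n        child_tf, child_df = update_freq(child, children)\n        state['tf'] += child_tf\n        state['df'] += child_df\n    if len(kids) == 1:\n        state['tf'] += 1\n    return state['tf'], state['df']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combinatorial Analysis of the Algorithm for Constructing GDAWG",

"sec_num": "3.1.4"

},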
|
{ |
|
"text": "As in DAWG, each state in the resulted GDAWG represents a class of substrings that are end-equivalent. Since the number of resulted states is linear, i.e., N = ITEX11 (Blumer et al., 1985) , there are at most 2N-1 classes of substrings in GDAWG. Since the GDAWG construction algorithm is linear, the number of classes represented by GDAWG and the time to find the classes is linear. (In the following, we use class and state interchangeably.) Thus, the two parameters used in our multigram compound discovery, i.e., MI and RIDF (as shown in Section 2.3) for the longest substring in each class, can be computed in linear time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 188, |
|
"text": "(Blumer et al., 1985)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multigram Compound Discovery Based on GDAWG", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Here, yz represents the longest suffix of w. It's either in the same class as w or in the class pointed to by the suffix pointer of the class containing w. \"xy\" can be accessed by keeping a parent pointer during the traversal of GDAWG. Thus, all required parameters for the above formula can be accessed in constant time. We just need to traverse the entire GDAWG to compute the MI and RIDF of the longest substring in each class.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multigram Compound Discovery Based on GDAWG", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Multigram Compound Discovery Methods based on Generalized Suffix Array Suffix Array (SA) is an array of all N suffixes of a given corpus, sorted alphabetically. It was introduced as a new and conceptually simple data structure for online string searching by Manber and Myers (1990) . SA allows computing the membership function, FACTORIN(x, text) in Odxl+logitext1) time. SA can be constructed in O(NlogN) time2. These results hold for GSA. The major advantage of GSA over GST is space. The space requirements for GST grow with the alphabet size IA: 0(N L1) space. The dependency on alphabet size could be a serious issue for many cases, e.g., some Asia languages, such as Chinese, have a relatively large alphabet of more than 6,000 characters. So the advantages of Suffix Arrays over Suffix Trees becomes much significant for larger alphabets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 281, |
|
"text": "Manber and Myers (1990)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
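{

"text": "For comparison, a minimal sketch of the suffix-array membership test (our own code; the simple binary search below costs O(|x| log N) because each comparison may inspect up to |x| characters, and the O(|x| + log|text|) bound cited above needs the LCP refinement of Manber and Myers (1990)):\n\ndef build_suffix_array(text):\n    # Teaching construction only; Manber and Myers (1990) give O(N log N).\n    return sorted(range(len(text)), key=lambda i: text[i:])\n\ndef factor_in_sa(x, text, sa):\n    # Binary-search the sorted suffixes for the first one >= x, then\n    # check whether that suffix starts with x.\n    lo, hi = 0, len(sa)\n    while lo < hi:\n        mid = (lo + hi) // 2\n        if text[sa[mid]:sa[mid] + len(x)] < x:\n            lo = mid + 1\n        else:\n            hi = mid\n    return lo < len(sa) and text[sa[lo]:sa[lo] + len(x)] == x",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "4",

"sec_num": null

},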
|
{ |
|
"text": "Detailed techniques of using GSA to compute tfand df for all substrings in a corpus were given in Yamamoto and Church (2001) . The main idea is to group all N(N+1)/2 substrings into a manageable number, i.e. up to 2N-1, of equivalence classes, and the substrings in a class all share the same tf and df. In this way, the computation over substrings is reduced to a manageable computation over classes, that is, 0(NlogN) time and 0(N) space. This implies Property D and E do not hold for GSA.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 124, |
|
"text": "Yamamoto and Church (2001)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Yamamoto and Church (2001) , MI and RIDF were computed for the longest substring in each non-trivial class (up to N-1). The time required is 0(NlogA) as each of the terms in the formula will require 0(logIV) time to access. This means GSA does not have Property F either.", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 29, |
|
"text": "Yamamoto and Church (2001)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Experiment Result Figure 5 .1: GST/GDAWG/GST 2 Manber and Myers (1990) also gave a augmented algorithm that, regardless of the alphabet size, constructs Suffix Array in 0(N) expected time, albeit with lesser space efficiency. It also reported that Suffix Arrays use three to five times less space than Suffix Trees even in the case of relatively small alphabet size (1.4=96) in practice.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 70, |
|
"text": "Manber and Myers (1990)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To evaluate the performance in a real world application, we run our algorithms together with the one based on GSA over our Philippine address dataset, which consists of 6 million bytes data and 146,844 address records and has a small alphabet set (< 128) and record size (< 1K bytes). Figure 5 .1 shows the experiment result measured by Relative Time, which takes the processing time over unit test data (500K Bytes) as the time unit. Obviously, the time cost of our algorithms grow in linear with the data size, that is, in 0(N), which coincides with the theoretical analysis in Section 2 and 3.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 293, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This paper discusses the multigram compound discovery methods based on GST, GDAWG and GSA. A set of properties (A to F) is defined to access the efficiency of the algorithms. This paper proposes two new algorithms based on GST and GDAWG, and proves that they are able to fulfil Property A to F (detailed comparisons with GSA are shown in Table 6 .1). Thus, they are efficient algorithms for multigram compound discovery. An experiment based 6 million bytes of text demonstrate that our theoretical analysis is consistent with the em irical results. Table 6 .1: GST/GDAWG/GSA(Given the size of the set of strings I TEX71 is /V)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 345, |
|
"text": "Table 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 556, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The smallest automaton recognizing the subwords of a text", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Blumer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Blumer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Haussler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ehrenfeucht", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-T", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Seiferas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "Theoretical Computer Science", |
|
"volume": "40", |
|
"issue": "", |
|
"pages": "31--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blumer, A,J. Blumer, D. Haussler, A. Ehrenfeucht, M.-T. Chen and J. Seiferas. 1985. The smallest automaton recognizing the subwords of a text. Theoretical Computer Science, 40:31-55.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "PAT-Tree-Based Adaptive Keyphrase Extraction for Intelligent Chinese Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"L F" |
|
], |
|
"last": "Chien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Information Processing and Management", |
|
"volume": "35", |
|
"issue": "4", |
|
"pages": "501--521", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chien. L. F. 1999. PAT-Tree-Based Adaptive Keyphrase Extraction for Intelligent Chinese Information Retrieval, Information Processing and Management, 35(4):501-521.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Text Algorithm", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Crochemore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Rytter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Crochemore, M and Rytter, W. 1994. Text Algorithm. Oxford University Press, New York & Oxford.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Automatic Evaluation of Summaries Using N-gram Co-Occurrence Statistics", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Manber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Myers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "the first Annual ACM-SIAM Symposium on Discrete Algorithms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "319--327", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, C. Y., Hovy, E. 2003. Automatic Evaluation of Summaries Using N-gram Co-Occurrence Statistics, Proceedings of the Human Technology Conference 2003 (HLT-NAACL-2003), Edmonton, Manber, U. and Myers, G. 1990. Suffix arrays: A new method for on-line string searches. In the first Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 319-327.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Panineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thu", |
|
"middle": [ |
|
"W J" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Panineni, K. Roukos, S. Ward, T., and Thu W.J. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, pp. 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A Hierarchical EM Approach to Word Segmentation, Proceedings of the Sixth Natural Language Processing Pacific Rim Symposium (NLPRS", |
|
"authors": [ |
|
{ |
|
"first": ".", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Schuurmans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "475--480", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng. F and Schuurmans, D. 2001. A Hierarchical EM Approach to Word Segmentation, Proceedings of the Sixth Natural Language Processing Pacific Rim Symposium (NLPRS 2001). Pp. 475-480", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Variable N-grams and Extension for Conversational Speech Language Modeling", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Siu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "IEEE Transactions on Speech and Audio Processing", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "63--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siu, M. and Ostendorf, M. 2000. Variable N-grams and Extension for Conversational Speech Language Modeling. IEEE Transactions on Speech and Audio Processing, 8(1):63-75.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "On-line Construction of Suffix Trees", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Ukkonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Algoritmica", |
|
"volume": "14", |
|
"issue": "3", |
|
"pages": "249--260", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ukkonen, E. 1995. On-line Construction of Suffix Trees. Algoritmica 14(3):249-260", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Using Suffix Arrays to compute Term Frequency and Document Frequency for All Substrings in a Corpus", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Yamamoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Computational Linguistics", |
|
"volume": "27", |
|
"issue": "1", |
|
"pages": "1--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yamamoto, M. and Church, K. 2001. Using Suffix Arrays to compute Term Frequency and Document Frequency for All Substrings in a Corpus, Computational Linguistics, vol 27:1, pp. 1-30, MIT Press.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "(w) and lower node (v) of an edge, it is left-open because it does not include the upper node w, while it does contain the lower node v." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "3. The 11 domination ranges, of left-open edges of GST(\"cacacao\", \"cacao\"). The highlighted <1,2> domination range has the y',df counts of 3;2." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Anode') &ode') + 1; d.Anodei) \u00f7-df(nodei) + 1; delta Aparent(node')) 4+; dAparent(node0) ++; push_s tack (parent(node,), depth(parent(node i))); } internal_or_leaf function(nodei, node) { if nodei -nodei, do return pure_internal; else return leaf_cum internal; }" |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Process of constructing the GDAWG that represent all substrings inS = {cacao, caca}." |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>MI (At ) bg</td><td>P (xit )</td><td>N</td><td>f ( xlt ) x rj (Y )</td></tr><tr><td/><td>P(xY ) x P(.111)</td><td/><td/></tr></table>", |
|
"html": null, |
|
"text": "are given below:" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Primartransition edge</td></tr><tr><td>Secondary transition edge</td></tr><tr><td>----* Suffi -x pointer</td></tr></table>", |
|
"html": null, |
|
"text": "................................ 6 {cacao, acac} ......'. .. 3 Multigram Compound Discovery Methods Based on Generalized Directed Acyclic Word Graph ....___\u2022,," |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>1.</td></tr><tr><td>update function(currentsink, a){</td></tr><tr><td>Let newsink be the state pointed to by an existing a-labelled outgoing edge of currentsink. if newsink is defined, do</td></tr><tr><td>if the existing a-labelled outgoing edge of currentsink is a secondary edge, do</td></tr><tr><td>newsink split(currentsink, newsink);</td></tr><tr><td>else (the existing a-labelled outgoing edge of currentsink is a primary edge)</td></tr><tr><td>#% newsink E #newsink + 1;</td></tr><tr><td>if</td></tr></table>", |
|
"html": null, |
|
"text": "SID newsink <j, do dfnewsink = dfnewsink + 1; SID newsink f; Let currentstate be the state pointed to by the suffix pointer of newsink. while SID currentstate <j, do dfcurrentstate = dfcurrentstate + 1; SID currentskrte =j;Let currentstate be the state pointed to by the suffix pointer of currentstate." |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>SID newhimstate 4-f;</td></tr></table>", |
|
"html": null, |
|
"text": "SIDchildstate <1, do d fnewchildstate = d fnewchildstate + 1;" |
|
} |
|
} |
|
} |
|
} |