{ "paper_id": "J97-4004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:43:37.000642Z" }, "title": "Critical Tokenization and its Properties", "authors": [ { "first": "Jin", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "guojin@iss.nns.sg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Tokenization is the process of mapping sentences from character strings into strings of words. This paper sets out to study critical tokenization, a distinctive type of tokenization following the principle of maximum tokenization. The objective in this paper is to develop its mathematical description and understanding. The main results are as follows: (1) Critical points are all and only unambiguous toke~ boundaries for any character string on a complete dictionary; (2) Any critically tokenized word string is a minimal element in the partially ordered set of all tokenized word strings with respect to the word string cover relation; (3) Any tokenized string can be reproduced from a critically tokenized word string but not vice versa; (4) Critical tokenization forms the sound mathematical foundation for categorizing tokenization ambiguity into critical and hidden types, a precise mathematical understanding of conventional concepts like combinational and overlapping ambiguities; (5) Many important maximum tokenization variations, such as forward and backward maximum matching and shortest tokenization, are all true subclasses of critical tokenization. It is believed that critical tokenization provides a precise mathematical description of the principle of maximum tokenization. Important implications and practical applications of critical tokenization in effective ambiguity resolution and in efficient tokenization implementation are also carefully examined.", "pdf_parse": { "paper_id": "J97-4004", "_pdf_hash": "", "abstract": [ { "text": "Tokenization is the process of mapping sentences from character strings into strings of words. This paper sets out to study critical tokenization, a distinctive type of tokenization following the principle of maximum tokenization. The objective in this paper is to develop its mathematical description and understanding. The main results are as follows: (1) Critical points are all and only unambiguous toke~ boundaries for any character string on a complete dictionary; (2) Any critically tokenized word string is a minimal element in the partially ordered set of all tokenized word strings with respect to the word string cover relation; (3) Any tokenized string can be reproduced from a critically tokenized word string but not vice versa; (4) Critical tokenization forms the sound mathematical foundation for categorizing tokenization ambiguity into critical and hidden types, a precise mathematical understanding of conventional concepts like combinational and overlapping ambiguities; (5) Many important maximum tokenization variations, such as forward and backward maximum matching and shortest tokenization, are all true subclasses of critical tokenization. It is believed that critical tokenization provides a precise mathematical description of the principle of maximum tokenization. 
Important implications and practical applications of critical tokenization in effective ambiguity resolution and in efficient tokenization implementation are also carefully examined.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Words, and tokens in general, are the primary building blocks in almost all linguistic theories (e.g., Gazdar, Klein, Pullum, and Sag 1985; Hudson 1984) and language processing systems (e.g., Allen 1995; Grosz, Jones, and Webber 1986) . Sentence, or string, tokenization, the process of mapping sentences from character strings to strings of words, is the initial step in natural language processing (Webster and Kit 1992) .", "cite_spans": [ { "start": 103, "end": 139, "text": "Gazdar, Klein, Pullum, and Sag 1985;", "ref_id": "BIBREF11" }, { "start": 140, "end": 152, "text": "Hudson 1984)", "ref_id": "BIBREF18" }, { "start": 192, "end": 203, "text": "Allen 1995;", "ref_id": "BIBREF2" }, { "start": 204, "end": 234, "text": "Grosz, Jones, and Webber 1986)", "ref_id": "BIBREF12" }, { "start": 400, "end": 422, "text": "(Webster and Kit 1992)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Since in written Chinese there is no explicit word delimiter (equivalent to the blank space in written English), the problem of Chinese sentence tokenization has been the focus of considerable research efforts, and significant advancements have been made (e.g., Bai 1995; Zhang et al. 1994; Chen and Liu 1992; Chiang et al. 1992 ; Fan and Tsai 1988; Gan 1995; Gan, Palmer, and Lua 1996; Guo 1993; He, Xu, and Sun 1991; Huang 1989; Huang and Xia 1996; Jie 1989; Liang 1991a, 1991b; Jin and Chen 1995; Lai et al. 1992; Li et al. 1995; Liang 1986 Liang , 1987 Liang , 1990 Liu 1986a Liu , 1986b Liu, Tan, and Shen 1994; Lua 1990 Lua , 1994 Lua , and 1995 Ma 1996; Nie, Jin, and Hannan 1994; Sproat and Shih 1990; Sproat et al. 1996; Sun and T'sou 1995; Sun and Huang 1996; Tung and Lee 1994; Wang, Su, and Mo 1990; Wang 1989; Wang, Wang, and Bai 1991; Wong et al. 1994; Wu et al. 1994; Wu and Su 1993; Yao, Zhang, and Wu 1990; Yeh and Lee 1991; Zhang, Chen, and Chen 1991) .", "cite_spans": [ { "start": 262, "end": 271, "text": "Bai 1995;", "ref_id": "BIBREF3" }, { "start": 272, "end": 290, "text": "Zhang et al. 1994;", "ref_id": "BIBREF61" }, { "start": 291, "end": 309, "text": "Chen and Liu 1992;", "ref_id": "BIBREF4" }, { "start": 310, "end": 328, "text": "Chiang et al. 1992", "ref_id": "BIBREF6" }, { "start": 331, "end": 349, "text": "Fan and Tsai 1988;", "ref_id": "BIBREF7" }, { "start": 350, "end": 359, "text": "Gan 1995;", "ref_id": "BIBREF8" }, { "start": 360, "end": 386, "text": "Gan, Palmer, and Lua 1996;", "ref_id": "BIBREF9" }, { "start": 387, "end": 396, "text": "Guo 1993;", "ref_id": "BIBREF13" }, { "start": 397, "end": 418, "text": "He, Xu, and Sun 1991;", "ref_id": "BIBREF15" }, { "start": 419, "end": 430, "text": "Huang 1989;", "ref_id": "BIBREF16" }, { "start": 431, "end": 450, "text": "Huang and Xia 1996;", "ref_id": "BIBREF17" }, { "start": 451, "end": 460, "text": "Jie 1989;", "ref_id": "BIBREF19" }, { "start": 461, "end": 480, "text": "Liang 1991a, 1991b;", "ref_id": null }, { "start": 481, "end": 499, "text": "Jin and Chen 1995;", "ref_id": "BIBREF22" }, { "start": 500, "end": 516, "text": "Lai et al. 1992;", "ref_id": "BIBREF24" }, { "start": 517, "end": 532, "text": "Li et al. 
1995;", "ref_id": "BIBREF25" }, { "start": 533, "end": 543, "text": "Liang 1986", "ref_id": "BIBREF26" }, { "start": 544, "end": 556, "text": "Liang , 1987", "ref_id": "BIBREF28" }, { "start": 557, "end": 569, "text": "Liang , 1990", "ref_id": "BIBREF29" }, { "start": 570, "end": 579, "text": "Liu 1986a", "ref_id": "BIBREF30" }, { "start": 580, "end": 591, "text": "Liu , 1986b", "ref_id": "BIBREF31" }, { "start": 592, "end": 616, "text": "Liu, Tan, and Shen 1994;", "ref_id": "BIBREF33" }, { "start": 617, "end": 625, "text": "Lua 1990", "ref_id": "BIBREF34" }, { "start": 626, "end": 636, "text": "Lua , 1994", "ref_id": "BIBREF35" }, { "start": 637, "end": 651, "text": "Lua , and 1995", "ref_id": "BIBREF36" }, { "start": 652, "end": 660, "text": "Ma 1996;", "ref_id": "BIBREF37" }, { "start": 661, "end": 687, "text": "Nie, Jin, and Hannan 1994;", "ref_id": "BIBREF38" }, { "start": 688, "end": 709, "text": "Sproat and Shih 1990;", "ref_id": "BIBREF42" }, { "start": 710, "end": 729, "text": "Sproat et al. 1996;", "ref_id": "BIBREF43" }, { "start": 730, "end": 749, "text": "Sun and T'sou 1995;", "ref_id": "BIBREF45" }, { "start": 750, "end": 769, "text": "Sun and Huang 1996;", "ref_id": "BIBREF44" }, { "start": 770, "end": 788, "text": "Tung and Lee 1994;", "ref_id": "BIBREF46" }, { "start": 789, "end": 811, "text": "Wang, Su, and Mo 1990;", "ref_id": "BIBREF47" }, { "start": 812, "end": 822, "text": "Wang 1989;", "ref_id": "BIBREF48" }, { "start": 823, "end": 848, "text": "Wang, Wang, and Bai 1991;", "ref_id": "BIBREF49" }, { "start": 849, "end": 866, "text": "Wong et al. 1994;", "ref_id": "BIBREF52" }, { "start": 867, "end": 882, "text": "Wu et al. 1994;", "ref_id": "BIBREF53" }, { "start": 883, "end": 898, "text": "Wu and Su 1993;", "ref_id": "BIBREF54" }, { "start": 899, "end": 923, "text": "Yao, Zhang, and Wu 1990;", "ref_id": "BIBREF55" }, { "start": 924, "end": 941, "text": "Yeh and Lee 1991;", "ref_id": "BIBREF56" }, { "start": 942, "end": 969, "text": "Zhang, Chen, and Chen 1991)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The tokenization problem exists in almost all natural languages, including Japanese (Yosiyuki, Takenobu, and Hozumi 1992) , Korean (Yun, Lee, and Rim 1995) , German (Pachunke et al. 1992) , and English (Garside, Leech, and Sampson 1987) , in various media, such as continuous speech and cursive handwriting, and in numerous applications, such as translation, recognition, indexing, and proofreading.", "cite_spans": [ { "start": 84, "end": 121, "text": "(Yosiyuki, Takenobu, and Hozumi 1992)", "ref_id": "BIBREF58" }, { "start": 131, "end": 155, "text": "(Yun, Lee, and Rim 1995)", "ref_id": "BIBREF59" }, { "start": 165, "end": 187, "text": "(Pachunke et al. 1992)", "ref_id": "BIBREF40" }, { "start": 202, "end": 236, "text": "(Garside, Leech, and Sampson 1987)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For Chinese, sentence tokenization is still an unsolved problem, which is in part due to its overall complexity but also due to the lack of a good mathematical description and understanding of the problem. The theme in this paper is therefore to develop such a mathematical description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In particular, this paper focuses on critical tokenization 1, a distinctive type of tokenization following the maximum principle. 
What is to be established in this paper is the notion of critical tokenization itself, together with its precise descriptions and well-proved properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We will prove that critical points are all and only unambiguous token boundaries for any character string on a complete dictionary. We will show that any critically tokenized word string is a minimal element in the partially ordered set of all tokenized word strings on the word string cover relation. We will also show that any tokenized string can be reproduced from a critically tokenized word string but not vice versa. In other words, critical tokenization is the most compact representation of tokenization. In addition, we will show that critical tokenization forms a sound mathematical foundation for categorizing critical ambiguity and hidden ambiguity in tokenizations, which provides a precise mathematical understanding of conventional concepts like combinational and overlapping ambiguities. Moreover, we will confirm that some important maximum tokenization variations, such as forward and backward maximum matching and shortest tokenization, are all subclasses of critical tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Based on a mathematical understanding of tokenization, we reported, in Guo (1997) , a series of interesting findings. For instance, there exists an optimal algorithm that can identify all and only critical points, and thus all unambiguous token boundaries, in time proportional to the input character string length but independent of the size of the tokenization dictionary. Tested on a representative corpus, about 98% of the critical fragments generated are by themselves desired tokens. In other words, about 98% close-dictionary tokenization accuracy can be achieved efficiently without disambiguation.", "cite_spans": [ { "start": 71, "end": 81, "text": "Guo (1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Another interesting finding is that, for those critical fragments with critical ambiguities, by replacing the conventionally adopted meaning preservation criterion with the critical tokenization criterion, disagreements among (human) judges on the acceptability of a tokenization basically become non-existent. Consequently, an objective (human) analysis and annotation of all (critical) tokenizations in a corpus becomes achievable, which in turn leads to some important observations. For instance, we observed from a Chinese corpus of four million morphemes a very strong tendency to have one tokenization per source. Naturally, this observation suggests tokenization disambiguation strategies notably different from the mainstream best-path-finding strategy. For instance, the simple strategy of tokenization by memorization alone could easily exhibit critical ambiguity resolution accuracy of no less than 90%, which is notably higher than what has been achieved in the literature. Moreover, it has been observed that critical tokenization can also provide helpful guidance in identifying hidden ambiguities and in determining unregistered (unknown) tokens (Guo 1997) . 
While these are just some of the very primitive findings, they are nevertheless promising and motivate 1 All terms mentioned here will be precisely defined later in this paper.", "cite_spans": [ { "start": 1161, "end": 1171, "text": "(Guo 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\u2022 570 us to rigorously formalize the tokenization problem and to carefully explore logical consequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is organized as follows: In Section 2, we formally define the string generation and tokenization operations that form the basis of our framework. In Section 3, we will study tokenization ambiguities and explore the concepts of critical points and critical fragments. In Section 4, we define the word string cover relation and prove it to be a partial order, define critical tokenization as the set of minimal elements of the tokenization partially ordered set, and illustrate the relationship between critical tokeniz~ition and string tokenization. Section 5 discusses the relationship between critical tokenization and various types of tokenization ambiguities, while Section 6 addresses the relationship between critical tokenization and various types of maximum tokenizations. Finally, in Sections 7 and 8, after discussing some helpful implications of critical tokenization in effective tokenization disambiguation and in efficient tokenization implementation, we suggest areas for future research and draw some conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In order to address the topic clearly and accurately, a precise and well-defined formal notation is required. What is used in this paper is primarily from elementary Boolean algebra and Formal Language Theory, which can be found in most graduate-level textbooks on discrete mathematics. This section aims at refreshing several simple terms and conventions that will be applied throughout this paper and at introducing the two new concepts of character string generation and tokenization. For the remaining basic concepts and conventions, we mainly follow Aho and Ullman (1972, Chapter 0, Mathematical Preliminaries) , and Kolman and Busby (1987) .", "cite_spans": [ { "start": 555, "end": 615, "text": "Aho and Ullman (1972, Chapter 0, Mathematical Preliminaries)", "ref_id": null }, { "start": 622, "end": 645, "text": "Kolman and Busby (1987)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Generation and Tokenization", "sec_num": "2." }, { "text": "An alphabet G = {a, b, c .... } is a finite set of symbols. Each symbol in the alphabet is a character. The alphabet size is the number of characters in the alphabet and is denoted IGI. Character strings over an alphabet G are defined 2 in the following manner: .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character, Alphabet, and Character String Definition 1", "sec_num": "2.1" }, { "text": ". e is a character string over G. e is called the empty character string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "If S is a character string over G and a is a character in G, then Sa is a character string over G. 
S' is a character string over ~ if and only if its being so follows from (1) and (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The length of a character string S is the number of characters in the string and is denoted ISI. A position in a character string is the position after a character in the string. If characters in a character string are indexed from 1 to n, then positions in the string are indexed from 0 to n, with 0 for the position before the first character and n for that after the last character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The set of 26 upper case and 26 lower case English characters forms the English alphabet ~,, = {a,b,...,z,A,B,...,Z}. S = thisishisbook is a character string over the alphabet. Its string length is 13.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "In this paper, characters are represented with small characters a, b, c, or their subscript form ak, bk, and Ck. The capital letter S or its expanded form S = cl... cn is used to represent a character string. We let G* denote the set containing all character strings over G including e, and G+ denote the set of all character strings over G but excluding e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1", "sec_num": null }, { "text": "Let alphabet ~, = {a, b, c,...} be a finite set of characters. A dictionary D is a set of character strings over the alphabet G. That is, D = {x,y,z .... } C_ G*. Any element in the dictionary is a word. The dictionary size is the number of words in the dictionary and is denoted IDI. Word strings over a dictionary D are defined in the following manner: .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word, Dictionary, and Word String Definition 2", "sec_num": "2.2" }, { "text": ". v is a word string over D. v is called the empty word string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "If W is a word string over D and w is a word in D, then Ww is a word string over D. W' is a word string over D if and only if its being so follows from (1) and (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The length of a word string W is the number of words in the string and is denoted IW I. We let D* denote the set containing all word strings over D, including v and let D + denote the set of all word strings over D but excluding v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The set D = {this, is, his, book} is a tiny English dictionary from the English alphabet. Both his and book are words over the English alphabet. The dictionary size is 4, i.e., IDI = 4. \"this is his book\" is a word string. Its string length is 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 (cont.)", "sec_num": null }, { "text": "To differentiate between character string and word string, blank spaces are added between words in word strings. For example, \"this is his book\" represents a word string of length 4 (four words concatenated) while thisishisbook consists of a character string of length 13 (13 characters in sequence). Slash / is sometimes used as a (hidden) word delimiter. 
For instance, this/is/his/book is an equivalent representation to \"this is his book\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 (cont.)", "sec_num": null }, { "text": "Generally, capital letters X, Y, Z, and W, or their expanded forms such as W = w1 ... wm, represent word strings. Small letters x, y, z, and w, or their expanded forms such as w = c1 ... cn, represent both words as elements in a dictionary and character strings over an alphabet. In other words, they are both w ∈ D and w ∈ Σ*. The word string made up of the single word w alone is represented by w^1. In cases where context makes it clear, the superscript can be omitted and w is also used for representing the single word string w^1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 1 (cont.)", "sec_num": null }, { "text": "Let Σ = {a, b, c, ...} be an alphabet and D = {x, y, z, ...} be a dictionary over the alphabet. The character string generation operation G is a mapping G: D* → Σ* defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character String Generation Definition 3", "sec_num": "2.3" }, { "text": "Empty word string v is mapped to empty character string e. That is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "G(v) = e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "Single word string w^1 is mapped to the character string of the single word. That is, G(w^1) = w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "3. If W is a word string over dictionary D and w is a word in D, then the word string Ww is mapped to the concatenation of character strings G(W) and G(w). That is, G(Ww) = G(W)G(w).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "is said to be the generated character string of the word string W from dictionary D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G(W)", "sec_num": null }, { "text": "Note that the character string generation operation G is a homomorphism (Aho and Ullman 1972, 17) with property G(w^1) = w.", "cite_spans": [ { "start": 72, "end": 97, "text": "(Aho and Ullman 1972, 17)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "G(W)", "sec_num": null }, { "text": "Example 1 (cont.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G(W)", "sec_num": null }, { "text": "The character string thisishisbook is the generated character string of the word string \"this is his book\". That is, G(\"this is his book\") = thisishisbook.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "G(W)", "sec_num": null }, { "text": "The character string tokenization operation TD is a mapping TD: Σ* → 2^D* defined as: if S is a character string in Σ*, then TD(S) is the set of dictionary word strings mapped by the character string generation operation G to the character string S. That is, TD(S) = {W | G(W) = S, W ∈ D*}. Any word string W in TD(S) is a tokenized word string, or simply a tokenization, of the character string S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character String Tokenization Definition 4", "sec_num": "2.4" }, { "text": "Sometimes the character string tokenization operation is emphasized as the exhaustive tokenization operation or ET operation for short. 
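As a concrete illustration of Definition 4, the exhaustive tokenization operation can be realized by straightforward enumeration. The short Python sketch below is an illustration of ours, not part of the paper's formalism (the function name is arbitrary); it lists every word string over a dictionary D whose generated character string is S:

def exhaustive_tokenizations(s, dictionary):
    """Return T_D(s): all word strings W with G(W) = s, as lists of words."""
    results = []

    def extend(pos, words):
        if pos == len(s):              # the whole character string is consumed
            results.append(list(words))
            return
        for end in range(pos + 1, len(s) + 1):
            piece = s[pos:end]
            if piece in dictionary:    # piece is a word in D
                words.append(piece)
                extend(end, words)
                words.pop()

    extend(0, [])
    return results

# Example 2: D = {fund, funds, and, sand}, S = fundsand
print(exhaustive_tokenizations("fundsand", {"fund", "funds", "and", "sand"}))
# -> [['fund', 'sand'], ['funds', 'and']]
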
In addition, the tokenized word string or tokenization is emphasized as the exhaustively tokenized word string or exhaustive tokenization or ET tokenization for short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character String Tokenization Definition 4", "sec_num": "2.4" }, { "text": "Note that the character string tokenization operation TD is the inverse homomorphism (Aho and Ullman 1972, 18) of the character string generation operation G.", "cite_spans": [ { "start": 85, "end": 110, "text": "(Aho and Ullman 1972, 18)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Character String Tokenization Definition 4", "sec_num": "2.4" }, { "text": "Example 1 (cont.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character String Tokenization Definition 4", "sec_num": "2.4" }, { "text": "Given character string thisishisbook, for the tiny English dictionary D = {this, is, his, book}, there is TD(thisishisbook) = {\"this is his book\"}. In other words, the word string \"this is his book\" is the only tokenization over the dictionary D. Given dictionary D' = {th, this, is, his, book}, in which th is also a word, there is TD, (thisishisbook) = {\"th is is his book\", \"this is his book\"}. In other words, the character string has two tokenizations over the dictionary D r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Character String Tokenization Definition 4", "sec_num": "2.4" }, { "text": "For character string fundsand and the tiny English dictionary D = {fund,funds, and, sand}, there is TD(fundsand) -= {\"funds and\", \"fund sand\"}. In other words, both \"funds and\" and \"fund sand\" are tokenizations of character stringfundsand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 2", "sec_num": null }, { "text": "Our intention, in formally defining characters and words, is to establish our mathematical system clearly and accurately. To keep discussion concise, the definitions of elementary concepts such as strings and substrings, although widely used in this paper, will be taken for granted. We limit our basic notion to what has already been defined in Aho and Ullman (1972) and Kolman and Busby (1987) .", "cite_spans": [ { "start": 346, "end": 367, "text": "Aho and Ullman (1972)", "ref_id": "BIBREF1" }, { "start": 372, "end": 395, "text": "Kolman and Busby (1987)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.5" }, { "text": "Mathematically, word strings are nothing but symbol strings, with each symbol representing a word in the dictionary. In that sense, the word string definition is redundant as it is already covered by the definition of character string. However, since the relationships between character strings and word strings are very important in this paper, we believe it to be appropriate to list both definitions explicitly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.5" }, { "text": "What is new in this section is mathematical definitions for character string generation and tokenization. We consider them fundamental to our mathematical description of the string tokenization problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.5" }, { "text": "There are two points worth highlighting here. The first relates to the introduction of the character string generation operation. 
In the literature, the tokenization problem is normally modeled independently with no connection whatsoever with the character string generation problem. By contrast, we model tokenization and generation as inverse problems to each other. In this way, we establish a well-defined mathematical system consisting of an alphabet, a dictionary, and the (generation) homomorphism (operation) and its inverse defined on the alphabet and dictionary. As will be seen throughout this paper, the establishment of the generation operation renders various types of tokenization problems easy to describe. The generation problem is relatively simple and easy to manage, so any modeling of the tokenization problem as its inverse (that is, as the generation problem) should make it more tractable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.5" }, { "text": "The second point is in regard to the tokenization definition. In the literature, the string tokenization operation is normally required to generate a unique tokenized word string. Following such a definition of tokenization, introducing tokenization disambiguation at the very beginning is inevitable. We believe this to be a pitfall that has trapped many researchers. In contrast, we define the character string tokenization operation as the inverse operation (inverse homomorphism) of the character string generation operation (homomorphism). Naturally, the result of the tokenization operation is a set of tokenizations rather than a single word string. Such treatment suggests that we could use the divide-and-conquer problem-solving strategy--to decompose the complex string tokenization problem into several smaller and, hopefully, simpler subproblems. That is the basis of our two-stage, five-step iterative problem-solving strategy for sentence tokenization (Guo 1997) .", "cite_spans": [ { "start": 966, "end": 976, "text": "(Guo 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "2.5" }, { "text": "After clarifying both sentence generation and tokenization operations, we undertake next to further clarify sentence tokenization ambiguities. Among all the concepts to be introduced, critical points and critical fragments are probably two of the most important. We will prove that, for any character string on a complete tokenization dictionary, its critical points are all and only unambiguous token boundaries, and its critical fragments are the longest substrings with all inner positions ambiguous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Point and Fragment", "sec_num": "3." }, { "text": "Let G be an alphabet, D a dictionary, and S a character string over the alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ambiguity", "sec_num": "3.1" }, { "text": "The character string S from the alphabet G has tokenization ambiguity on dictionary D, if ]TD(S)] > 1. S has no tokenization ambiguity, if ]TD(S)] = 1. S is ill-formed on dictionary D, if ITD(S)] = 0. A tokenization W C To(S) has tokenization ambiguity, if there exists another tokenization W' E To(S), W' ~ W.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5", "sec_num": null }, { "text": "Example 2 (cont.) Since TD(fundsand) = {\"funds and\", \"fund sand\"}, i.e., ]TD(fundsand)l = 2 > 1, the character string fundsand has tokenization ambiguity. In other words, it is ambiguous in tokenization. 
Moreover, the tokenization \"funds and\" has tokenization ambiguity since there exists another possible tokenization \"fund sand\" for the same character string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5", "sec_num": null }, { "text": "This definition is quite intuitive. If a character string could be tokenized in multiple ways, it would be ambiguous in tokenization. If a character string could only be tokenized in a unique way, it would have no tokenization ambiguity. If a character string could not be tokenized at all, it would be ill-formed. In this latter case, the dictionary is incomplete.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5", "sec_num": null }, { "text": "Intuitively, a position in a character string is ambiguous in tokenization or is an ambiguous token boundary if it is a token boundary in one tokenization but not in another. Formally, let S = c1 ... cn be a character string over an alphabet Σ and let D be a dictionary over the alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 5", "sec_num": null }, { "text": "Position p has tokenization ambiguity or is an ambiguous token boundary, if there exist tokenizations W and W' in TD(S) such that position p is a token boundary in W but not in W'. Otherwise, position p has no tokenization ambiguity, or is an unambiguous token boundary. Example 1 (cont.) Given a typical English dictionary and the character string S = thisishisbook, all three positions after character s are unambiguous in tokenization or are unambiguous token boundaries, since all possible tokenizations must take these positions as token boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 6", "sec_num": null }, { "text": "Example 2 (cont.) Given a typical English dictionary and the character string S = fundsand, the position after the middle character s is ambiguous in tokenization or is an ambiguous token boundary since it is a token boundary in tokenization \"funds and\" but not in another tokenization \"fund sand\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 6", "sec_num": null }, { "text": "To avoid ill-formedness in sentence tokenization, we now introduce the concept of a complete tokenization dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complete Dictionary", "sec_num": "3.2" }, { "text": "A dictionary D over an alphabet Σ is complete if for any character string S from the alphabet, S ∈ Σ*, there is |TD(S)| ≥ 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 7", "sec_num": null }, { "text": "That is, for any character string S = c1 ... cn from the alphabet, there exists at least one word string W = w1 ... wm with S as its generated character string, G(W) = S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 7", "sec_num": null }, { "text": "A dictionary D over an alphabet Σ is complete if and only if all the characters in the alphabet are single-character words in the dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1", "sec_num": null }, { "text": "On the one hand, every single character is also a character string (of length 1). To ensure that such a single-character string is being tokenized, the single character must be a word in the dictionary. 
On the other hand, if all the characters are words in the dictionary, any character string can at least be tokenized as a string of single-character words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[] Theorem I spells out a simple way of making any dictionary complete, which calls for adding all the characters of an alphabet into a dictionary as single-character words. This is referred to as the dictionary completion process. If not specified otherwise, in this paper, when referring to a complete dictionary or tokenization dictionary, we mean the dictionary after the completion process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Let S = cl...cn be a character string over the alphabet ~ and let D be a dictionary over the alphabet. In addition, let To(S) be the tokenization set of S on D. Example 1 (cont.) Given a typical English dictionary, there are five critical points in the character string S = thisishisbook. They are 0, 4, 6, 9, and 13. The corresponding four critical fragments are this, is, his, and book.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Point and Fragment", "sec_num": "3.3" }, { "text": "Example 2 (cont.) Given a typical English dictionary, there is no extraordinary critical point in the character string S = fundsand. It is by itself the only critical fragment of this character string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Point and Fragment", "sec_num": "3.3" }, { "text": "Given a complete tokenization dictionary, it is obvious that all single-character critical fragments or, more generally, single-character strings, possess unique tokenization. That is, they possess neither ambiguity nor ill-formedness in tokenization. However, the truth of the statement below (Lemma 1) is less obvious.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Point and Fragment", "sec_num": "3.3" }, { "text": "For a complete tokenization dictionary, all multicharacter critical fragments and all of their inner positions are ambiguous in tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1", "sec_num": null }, { "text": "Let S = cl ... Cn, n > 1, be a multicharacter critical fragment. Because the tokenization dictionary is complete, the critical fragment can at least be tokenized as a string of single-character words. On the other hand, because it is a critical fragment, for any position p, 1 _< p < n -1, there must exist a tokenization W = Wt...Wm in TD(S) such that for any index k, 0 G k G m, there is neither G(wl...Wk) = cl...cp nor G(wk+l... win) = cp+l...Cn. As this tokenization differs from the above-mentioned tokenization of the string of single-character words, the critical fragment has at least two different tokenizations and thus has tokenization ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "For any character string on a complete tokenization dictionary, its critical points are all and only unambiguous token boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "By Lemma 1, all positions within critical fragments are ambiguous in tokenization. 
By Definition 8, critical points are unambiguous in tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "For any character string on a complete tokenization dictionary, its critical fragments are the longest substrings with all inner positions ambiguous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corollary", "sec_num": null }, { "text": "By Theorem 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "In this section, we have described sentence tokenization ambiguity from three different angles: character strings, tokenizations, and individual string positions. The basic idea is conceptually simple: ambiguity exists when there are different means to the same end. For instance, as long as a character string has multiple tokenizations, it is ambiguous. This description of ambiguity is complete. Given a character string and a dictionary, it is always possible to answer deterministically whether or not a string is ambiguous in tokenization. Conceptually, for any character string, by checking every one of its possible substrings in a dictionary, and then by enumerating all valid word concatenations, all word strings with the character string as their generated character string can be produced. Just counting the number of such word strings will provide the answer to whether or not the character string is ambiguous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "Some researchers question the validity of the complete dictionary assumption. Here we argue that, even in the strictest linguistic sense, there exists no single character that cannot be used as a single-character word in sentences. In any case, any natural language must allow us to directly refer to single characters. For instance, you could say \"character x has many written forms\" or \"the character x in this word can be omitted\" for any character x. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "The validity of the complete dictionary assumption can also be justified from an engineering perspective. To ensure a so-called soft landing, any practical application system must be designed so that every input character string can always be tokenized. In other words, a complete dictionary is an operational must. Moreover, without such a complete dictionary, it would not be possible to avoid ill-formedness in sentence tokenization nor to make the generation-tokenization system for character and words closed and complete. Without such definitions of well-formedness, any rigorous formal study would be impossible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "The concepts of critical point and critical fragment are fundamental to our sentence tokenization theory. 
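To make the two concepts concrete, the characterization just established (Theorem 2) can be checked by brute force: under a complete dictionary, a position is a critical point exactly when every tokenization of the character string places a token boundary there, and the critical fragments are the substrings between adjacent critical points. The following Python sketch is a brute-force illustration of ours, not the linear-time critical point identification algorithm of Guo (1997); the function names are illustrative:

def critical_points(s, dictionary):
    """Positions that are token boundaries in every tokenization of s."""
    boundaries = None

    def walk(pos, cuts):
        nonlocal boundaries
        if pos == len(s):
            bset = set(cuts)
            boundaries = bset if boundaries is None else boundaries & bset
            return
        for end in range(pos + 1, len(s) + 1):
            if s[pos:end] in dictionary:
                walk(end, cuts + [end])

    walk(0, [])
    return sorted({0} | (boundaries or set()))

def critical_fragments(s, dictionary):
    pts = critical_points(s, dictionary)
    return [s[a:b] for a, b in zip(pts, pts[1:])]

# Complete dictionary: every single character is also a word (Theorem 1).
D = {"this", "is", "his", "book"} | set("thisishisbook")
print(critical_points("thisishisbook", D))     # -> [0, 4, 6, 9, 13]
print(critical_fragments("thisishisbook", D))  # -> ['this', 'is', 'his', 'book']

Enumerating all tokenizations in this way is exponential in the worst case, which is exactly why the linear-time identification result matters in practice.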
By adopting the complete dictionary assumption, it has been proven that critical points are all and only unambiguous token boundaries while critical fragments are the longest substrings with all inner positions ambiguous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "This is a very strong and significant statement. It provides us with a precise understanding of what and where tokenization ambiguities are. Although the proof itself is easy to follow, the result has nonetheless been a surprise. As demonstrated in Guo (1997) , many researchers have tried but failed to answer the question in such a precise and complete way. Consequently, while they proposed many sophisticated algorithms for the discovery of ambiguity (and certainty), they never were able to arrive at such a concise and complete solution.", "cite_spans": [ { "start": 249, "end": 259, "text": "Guo (1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "As critical points are all and only unambiguous token boundaries, an identification of all of them would allow for a long character string to be broken down into several short but fully ambiguous critical fragments. As shown in Guo (1997) , critical points can be completely identified in linear time. Moreover, in practice, most critical fragments are dictionary tokens by themselves, and the remaining nondictionary fragments are generally very short. In short, the understanding of critical points and fragments will significantly assist us in both efficient tokenization implementation and tokenization ambiguity resolution.", "cite_spans": [ { "start": 228, "end": 238, "text": "Guo (1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "The concepts of critical point and critical fragment are similar to those of segment point and character segment in Wang (1989, 37) , which were defined on a sentence word graph for the purpose of analyzing the computational complexity of his new tokenization algorithm. However, Wang (1989) neither noticed their connection with tokenization ambiguities nor realized the importance of the complete dictionary assumption, and hence failed to demonstrate their crucial role in sentence tokenization.", "cite_spans": [ { "start": 116, "end": 131, "text": "Wang (1989, 37)", "ref_id": null }, { "start": 280, "end": 291, "text": "Wang (1989)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.4" }, { "text": "This section seeks to disclose an important structure of the set of different tokenizations. We will see that different tokenizations can be linked by the cover relationship to form a partially ordered set. Based on that, we will establish the notion of critical tokenization and prove that every tokenization is a subtokenization of a critical tokenization, but no critical tokenization has true supertokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization", "sec_num": "4." }, { "text": "Let X and Y be word strings. X covers Y, or X has a cover relation to Y, denoted X < Y, if for any substring Xs of X, there exists substring Ys of Y, such that IXsl ( IYsl and G(Xs) = G(Ys). 
If X ≤ Y, then X is called a covering word string of Y, and Y a covered word string of X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cover Relationship Definition 9", "sec_num": "4.1" }, { "text": "Intuitively, X ≤ Y implies |X| ≤ |Y|. In other words, shorter word strings cover longer word strings. However, an absence of X ≤ Y does not imply the existence of Y ≤ X. Some word strings do not cover each other. In other words, shorter word strings do not always cover longer word strings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cover Relationship Definition 9", "sec_num": "4.1" }, { "text": "Example 1 (cont.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cover Relationship Definition 9", "sec_num": "4.1" }, { "text": "The word string \"this is his book\" covers the word string \"th is is his book\", but not vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cover Relationship Definition 9", "sec_num": "4.1" }, { "text": "Example 2 (cont.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cover Relationship Definition 9", "sec_num": "4.1" }, { "text": "The word strings \"funds and\" and \"fund sand\" do not cover each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cover Relationship Definition 9", "sec_num": "4.1" }, { "text": "Let A and B be sets of word strings. A covers B, or A has a cover relation to B, denoted A ≤ B, if for any Y ∈ B, there is X ∈ A, such that X ≤ Y. If A ≤ B, A is called a covering word string set of B, and B a covered word string set of A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 9 I", "sec_num": null }, { "text": "Given the alphabet Σ = {a, b, c, d}, dictionary D = {a, b, c, d, ab, bc, cd, abc, bcd}, and character string S = abcd from the alphabet, there is TD(abcd) = {\"a b c d\", \"ab c d\", \"a bc d\", \"a b cd\", \"ab cd\", \"abc d\", \"a bcd\"}. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 3", "sec_num": null }, { "text": "The cover relation is transitive, reflexive, and antisymmetric. That is, the cover relation is a (reflexive) partial order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Partially Ordered Set Lemma 2", "sec_num": "4.2" }, { "text": "Lemma 2, proved in Guo (1997) , reveals that the cover relation is a partial order, a well-defined mathematical structure with good mathematical properties. Consequently, from any textbook on discrete mathematics (Kolman and Busby [1987] , for example), it is known that the tokenization set TD(S), together with the word string cover relation ≤, forms a partially ordered set, or simply a poset. We shall denote this poset by (TD(S), ≤). In case there is no confusion, we may refer to the poset simply as TD(S).", "cite_spans": [ { "start": 19, "end": 29, "text": "Guo (1997)", "ref_id": "BIBREF14" }, { "start": 212, "end": 236, "text": "(Kolman and Busby [1987]", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Partially Ordered Set Lemma 2", "sec_num": "4.2" }, { "text": "In the literature, usually a poset is graphically presented in a Hasse diagram, which is a digraph with vertices representing poset elements and arcs representing direct partial order relations between poset elements. In a Hasse diagram, all connections implied by the partial order's transitive property are eliminated. That is, if X ≤ Y and Y ≤ Z, there should be no arc from X to Z. Certain elements in a poset are of special importance for many of the properties and applications of posets. 
In this paper, we are particularly interested in the minimal elements and least elements. In standard textbooks, they are defined in the following manner: Let (A, ≤) be a poset. An element a ∈ A is called a minimal element of A if there is no element c ∈ A, c ≠ a, such that c ≤ a. An element a ∈ A is called a least element of A if a ≤ x for all x ∈ A. (Kolman and Busby 1987, 195-196) Example 1 (cont.) The word string \"this is his book\" is both the minimal element and the least element of both TD(thisishisbook) = {\"this is his book\"} and TD'(thisishisbook) = {\"th is is his book\", \"this is his book\"}.", "cite_spans": [ { "start": 849, "end": 881, "text": "(Kolman and Busby 1987, 195-196)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Partially Ordered Set Lemma 2", "sec_num": "4.2" }, { "text": "Example 2 (cont.) The poset TD(fundsand) = {\"funds and\", \"fund sand\"} has both \"funds and\" and \"fund sand\" as its minimal elements, but has no least element. Note that any finite nonempty poset has at least one minimal element. Any poset has at most one least element (Kolman and Busby 1987, 195-198) .", "cite_spans": [ { "start": 268, "end": 300, "text": "(Kolman and Busby 1987, 195-198)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Partially Ordered Set Lemma 2", "sec_num": "4.2" }, { "text": "This section deals with the most important concept: critical tokenization. Let Σ be an alphabet, D a dictionary over the alphabet, and S a character string over the alphabet. In this case, (TD(S), ≤) is the poset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization", "sec_num": "4.3" }, { "text": "The character string critical tokenization operation CD is a mapping CD: Σ* → 2^D* defined as: for any S in Σ*, CD(S) = {W | W is a minimal element of the poset (TD(S), ≤)}. Any word string W in CD(S) is a critically tokenized word string, or simply a critical tokenization, or CT tokenization for short, of the character string S. And CD(S) is the set of critical tokenizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 10", "sec_num": null }, { "text": "In other words, the critical tokenization operation maps any character string to its set of critical tokenizations. A word string is critical if no other word string covers it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 10", "sec_num": null }, { "text": "Example 1 (cont.) Given the English alphabet, the tiny Dictionary D = {th, this, is, his, book}, and the character string S = thisishisbook, there is CD(S) = {\"this is his book\"}. This critical tokenization set contains the unique critical tokenization \"this is his book\". Note that the only difference between \"this is his book\" and \"th is is his book\" is that the word this in the former is split into two words th and is in the latter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 10", "sec_num": null }, { "text": "Example 2 (cont.) 
Given the English alphabet, the tiny Dictionary D = {fund, funds, and, sand}, and the character string S = fundsand, there is CD( S) = {\"funds and\", \"fund sand\"}. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 10", "sec_num": null }, { "text": "Given the English alphabet, the tiny Dictionary D = {the, blue, print, blueprint}, and the character string S = theblueprint, there are To(S) = {\"the blueprint\", \"the blue print\"} and Co(S) = {\"the blueprint\"}. Note that the tokenization \"the blue print\" is not critical (not a critical tokenization).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 4", "sec_num": null }, { "text": "Intuitively, a tokenization is a subtokenization of another tokenization if further tokenizing words in the latter can produce the former. Formally, let S be a character string over an alphabet E and let D be a dictionary over the alphabet. In addition, let X = xl ...x, and Y = yl ...Ym be tokenizations of S on D, X, Y c TD(S). That gives us, the following definition:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Super-and SubTokenization", "sec_num": "4.4" }, { "text": "Definition 11 Y is a subtokenization of X and X is a supertokenization of Y if, for any word x in X, there exists a substring Ys of Y such that x = G(Ys). Y is a true subtokenization of X and X is a true supertokenization of Y, if Y is a subtokenization of X and X ~ Y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Super-and SubTokenization", "sec_num": "4.4" }, { "text": "Example 1 (cont.) The tokenization \"th is is his book\" is a subtokenization of the critical tokenization \"this is his book\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Super-and SubTokenization", "sec_num": "4.4" }, { "text": "Example 4 (cont.) The tokenization \"the blue print\" is a subtokenization of the critical tokenization \"the blueprint\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Super-and SubTokenization", "sec_num": "4.4" }, { "text": "Y is a subtokenization of X if and only if X < Y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem Lemma 3", "sec_num": "4.5" }, { "text": "If X < Y, by definition, for any substring Xs of X, there exists substring Ys of Y, such that [Xs[ < [Ys[ and G(X~) = G(Y~). Also by definition, there is x = G(x) for every single word x. As any single word in a word string is also its single-word substring, it can be concluded that for any word x in X, there exists a substring Ys of Y, such that x = G(Ys).", "cite_spans": [ { "start": 94, "end": 115, "text": "[Xs[ < [Ys[ and G(X~)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "On the other hand, if Y is a subtokenization of X, by definition, for any word x in X, there exists a substring Ys of Y such that x = G(Ys). Thus, given any substring Xs of X, Xs = Xl... x,, for any k, 1 < k < n, there exists a substring Yk of Y such that Xk = G(Yk). Denote Ys ---Y1. . . Ym, there is IXsl < IYsl and G(Xs) = G(Ys). By definition, there is X < Y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[] Lemma 3 reveals that a word string is covered by another word string if and only if every word in the latter is realized in the former as a word string. 
In other words, a covering word string is in a more compact form than its covered word string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Every tokenization has a critical tokenization as its supertokenization, but critical tokenization has no true supertokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "That is, for any tokenization Y, Y E To(S), there exists critical tokenization X, X E Co(S), such that X is a supertokenization of Y. Moreover, if Y is a critical tokenization and X is its supertokenization, there is X = Y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "By definition, for any tokenization Y, Y E To(S), there is a critical tokenization X, X E Co(S), such that X _ Y. By Lemma 3, it would be the same as saying that X is a supertokenization of Y. The second part of the theorem is from the definition of critical tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[] Theorem 3 states that no critical tokenization can be produced by further tokenizing words in other tokenizations. However, all other tokenizations can be produced from at least one critical tokenization by further tokenizing words in it. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Since the theory of partially ordered sets is well established, we can use it to enhance our understanding of the mathematical structure of string tokenization. One of the obvious and immediate results is the concept of critical tokenization, which is simply another name for the minimal element set of a poset. The least element is another important concept. Although it may seem trivial to the string tokenization problem, the critical tokenization is, in fact, absolutely crucial. For instance, Theorem 3 states that, from critical tokenization, any tokenization can be produced (enumerated). As the number of critical tokenizations is normally considerably less than the total amount of all possible tokenizations, this theorem leads us to focus on the study of a few critical ones. In the next few sections, we shall further investigate certain important aspects of critical tokenizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.6" }, { "text": "This section clarifies the relationship between critical tokenization and various types \u2022 of tokenization ambiguities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical and Hidden Ambiguities", "sec_num": "5." }, { "text": "Let Y, be an alphabet, D a dictionary, and S a character string over the alphabet. The character string S from the alphabet G has critical ambiguity in tokenization on dictionary D if ICD(S)I > 1. S has no critical ambiguity in tokenization if ICo(S)I = 1. 
A tokenization W E Tp(S) has critical ambiguity in tokenization if there exists another tokenization W' E To(S), W' ~ W, such that neither W _< W' nor W' _< W holds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Ambiguity in Tokenization Definition 12", "sec_num": "5.1" }, { "text": "Example 2 (cont.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Ambiguity in Tokenization Definition 12", "sec_num": "5.1" }, { "text": "Since CD(fundsand) --(\"funds and\", \"fund sand\"}, i.e., ICD(fundsand)l = 2 > 1, the character string fundsand has critical ambiguity in tokenization. Moreover, the tokenization \"funds and\" has critical ambiguity in tokenization since there exists another possible tokenization \"fund sand\" such that both \"funds and\" <_ \"fund sand\" and \"fund sand\" <_ \"funds and\" do not hold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Ambiguity in Tokenization Definition 12", "sec_num": "5.1" }, { "text": "Since CD(theblueprint) = {\"the blueprint\"}, the character string theblueprint does not have critical ambiguity in tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 4 (cont.)", "sec_num": null }, { "text": "It helps to clarify that the only difference between the definition of tokenization ambiguity and that of critical ambiguity in tokenization lies in the tokenization set: While tokenization ambiguity is defined on the entire tokenization set TD(S), critical ambiguity in tokenization is defined only on the critical tokenization set CD(S), which is a subset of To(S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 4 (cont.)", "sec_num": null }, { "text": "As all critical tokenizations are minimal elements on the word string cover relationship, the existence of critical ambiguity in tokenization implies that the \"most powerful and commonly used\" (Chen and Liu 1992, 104) principle of maximum tokenization would not be effective in resolving critical ambiguity in tokenization and implies that other means such as statistical inferencing or grammatical reasoning have to be introduced. In other words, critical ambiguity in tokenization is unquestionably critical.", "cite_spans": [ { "start": 193, "end": 217, "text": "(Chen and Liu 1992, 104)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Example 4 (cont.)", "sec_num": null }, { "text": "Critical ambiguity in tokenization is the precise mathematical description of conventional concepts such as disjunctive ambiguity (Webster and Kit [1992, 1108] , for example) and overlapping ambiguity (Sun and T'sou [1995, 121] , for example). We will return to this topic in Section 5.4.", "cite_spans": [ { "start": 130, "end": 142, "text": "(Webster and", "ref_id": "BIBREF50" }, { "start": 143, "end": 159, "text": "Kit [1992, 1108]", "ref_id": null }, { "start": 201, "end": 227, "text": "(Sun and T'sou [1995, 121]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Example 4 (cont.)", "sec_num": null }, { "text": "Let ~ be an alphabet, D a dictionary, and S a character string over the alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hidden Ambiguity in Tokenization Definition 13", "sec_num": "5.2" }, { "text": "Example 4 (cont.) Let S = theblueprint, TD(S) = {\"the blueprint\", \"the blue print\"}, and Co(S) = {\"the blueprint\"}. Since To(S) ~ Co(S), the character sting theblueprint has hidden ambiguity in tokenization. 
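Both ambiguity types can be detected mechanically once the tokenization sets are available. The sketch below is a brute-force illustration only, not the procedure advocated in the paper: it enumerates T_D(S), extracts C_D(S) as the minimal elements under the cover relation (compared here through boundary sets), and reports critical ambiguity when |C_D(S)| > 1 and hidden ambiguity when T_D(S) differs from C_D(S).

```python
def tokenizations(s, D):
    """Brute-force enumeration of T_D(s): every way to cut s into dictionary words."""
    if not s:
        return [[]]
    return [[s[:i]] + rest
            for i in range(1, len(s) + 1) if s[:i] in D
            for rest in tokenizations(s[i:], D)]

def boundaries(W):
    """Cumulative character positions of the token boundaries of word string W."""
    out, n = set(), 0
    for w in W:
        n += len(w)
        out.add(n)
    return out

def classify(s, D):
    """Report the ambiguity types of Definitions 12 and 13 for s on D."""
    T = tokenizations(s, D)
    # Critical tokenizations: minimal elements under the cover relation, i.e.
    # no other tokenization has a strictly smaller (more compact) boundary set.
    C = [X for X in T if not any(boundaries(Y) < boundaries(X) for Y in T)]
    return {"critical": len(C) > 1,        # |C_D(s)| > 1
            "hidden": len(T) != len(C)}    # T_D(s) != C_D(s)  (C is a subset of T)

print(classify("fundsand", {"fund", "funds", "and", "sand"}))
# {'critical': True, 'hidden': False}
print(classify("theblueprint", {"the", "blue", "print", "blueprint"}))
# {'critical': False, 'hidden': True}
```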
Since \"the blueprint\" <_ \"the blue print\", the character string \"the blueprint\" has hidden ambiguity in tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The character string S from the alphabet ~. has hidden ambiguity in tokenization on dictionary D if TD(S) ~ CD(S). A tokenization W c TD(S) has hidden ambiguity in tokenization if there exists another tokenization W' E TD(S), W' ~ W, such that W <__ W'.", "sec_num": null }, { "text": "Intuitively, a tokenization has hidden ambiquity in tokenization, if some words in it can be further decomposed into word strings, such as \"blueprint\" to \"blue print\". They are called hidden or invisible because others cover them. The resolution of hidden ambiguity in tokenization is the aim of the principle of maximum tokenization (Jie 1989; Jie and Liang 1991) . Under this principle, only covering tokenizations win and all covered tokenizations are discarded.", "cite_spans": [ { "start": 334, "end": 344, "text": "(Jie 1989;", "ref_id": "BIBREF19" }, { "start": 345, "end": 364, "text": "Jie and Liang 1991)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The character string S from the alphabet ~. has hidden ambiguity in tokenization on dictionary D if TD(S) ~ CD(S). A tokenization W c TD(S) has hidden ambiguity in tokenization if there exists another tokenization W' E TD(S), W' ~ W, such that W <__ W'.", "sec_num": null }, { "text": "Hidden ambiguity in tokenization is the precise mathematical description of conventional concepts such as conjunctive ambiguity (Webster and Kit [1992, 1108] , for example), combinational ambiguity (Liang [1987] , for example) and categorical ambiguity (Sun and T'sou [1995, 121] , for example). We will return to this topic in Section 5.4.", "cite_spans": [ { "start": 128, "end": 140, "text": "(Webster and", "ref_id": "BIBREF50" }, { "start": 141, "end": 157, "text": "Kit [1992, 1108]", "ref_id": null }, { "start": 198, "end": 211, "text": "(Liang [1987]", "ref_id": "BIBREF28" }, { "start": 253, "end": 279, "text": "(Sun and T'sou [1995, 121]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The character string S from the alphabet ~. has hidden ambiguity in tokenization on dictionary D if TD(S) ~ CD(S). A tokenization W c TD(S) has hidden ambiguity in tokenization if there exists another tokenization W' E TD(S), W' ~ W, such that W <__ W'.", "sec_num": null }, { "text": "Let E be an alphabet, D a dictionary, and S a character string over the alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ambiguity = Critical + Hidden", "sec_num": "5.3" }, { "text": "A character string S over an alphabet ~ has tokenization ambiguity on a tokenization dictionary D if and only if S has either critical ambiguity in tokenization or hidden ambiguity in tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 4", "sec_num": null }, { "text": "If S has critical ambiguity in tokenization, by definition, there is ICD(S)I > 1. If S has hidden ambiguity in tokenization, by definition, there is TD(S) ~ CD(S). In both cases, since CD(S) c_C_ TD(S), there must be ITD(S)[ > 1. By definition, S has tokenization ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "If S has tokenization ambiguity, by definition, there is ITo(S)I > 1. Since any finite nonempty poset has at least one minimal element, there is [Co(S)I > 0. 
Since Co(S) c To(S), there is To(S) # Co(S) if ICo(S)I = 1. In this case, by definition, S has hidden ambiguity in tokenization. If ICo(S)[ > 1, by definition, S has critical ambiguity in tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[] Theorem 4 explicitly and precisely states that tokenization ambiguity is the union of critical ambiguity in tokenization and hidden ambiguity in tokenization. This result helps us in the understanding of character string tokenization ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "By freezing the problem of token identity determination, tokenization ambiguity identification and resolution are all that is required in sentence tokenization. Consequently, it must be crucial and beneficial to pursue an explicit and accurate understanding of various types of character string tokenization ambiguities and their relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.4" }, { "text": "In the literature, however, the general practice is not to formally define and classify ambiguities but to apply various terms to them, such as overlapping ambiguity and combinational ambiguity in their intuitive and normally fuzzy senses. Nevertheless, efforts do exist to rigorously assign them precise, formal meanings. As a representa-tive example, in Webster and Kit (1992, 1108) , both conjunctive (combinational) and disjunctive (overlapping) ambiguities are defined in the manner given below.", "cite_spans": [ { "start": 356, "end": 367, "text": "Webster and", "ref_id": "BIBREF50" }, { "start": 368, "end": 384, "text": "Kit (1992, 1108)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.4" }, { "text": "TYPE h In a sequence of Chinese 4 characters S = al... aibl ... by, if al... ai, bl... bj , and S are each a word, then there is conjunctive ambiguity in S. The segment S, which is itself a word, contains other words. This is also known as multi-combinational ambiguity.", "cite_spans": [ { "start": 24, "end": 67, "text": "Chinese 4 characters S = al... aibl ... by,", "ref_id": null }, { "start": 68, "end": 80, "text": "if al... ai,", "ref_id": null }, { "start": 81, "end": 89, "text": "bl... bj", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "2. TYPE II: In a sequence of Chinese characters S = al ... aibl ... bjCl ... Ck, if al... aibl.., bj and bl... bjCl.. . Ck are each a word, then S is an overlapping ambiguous segment, or in other words, the segment S displays disjunctive ambiguity. The segment bl... bj is known as an overlap, which is usually one character long.", "cite_spans": [ { "start": 29, "end": 117, "text": "Chinese characters S = al ... aibl ... bjCl ... Ck, if al... aibl.., bj and bl... bjCl..", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "The definitions above contain nothing improper. In fact, conjunctive (combinational) ambiguity as defined above is a special case of hidden ambiguity in tokenization, since \"al ... aibl.., bj\" <_ \"al... ai/bl.., bj\". Moreover, disjunctive (overlapping) ambiguity is a special case of critical ambiguity in tokenization, since for the character string S = al . . . aibl . . . bjc, . . . Ck, both \"al... aibl . . . bj /Cl . . . Ck\" and \"a, ... ai/bl . . . bye1 \u2022 . . 
Ck\" are critical tokenizations.", "cite_spans": [ { "start": 153, "end": 252, "text": "tokenization, since \"al ... aibl.., bj\" <_ \"al... ai/bl.., bj\". Moreover, disjunctive (overlapping)", "ref_id": null }, { "start": 351, "end": 379, "text": "S = al . . . aibl . . . bjc,", "ref_id": null }, { "start": 380, "end": 389, "text": ". . . Ck,", "ref_id": null }, { "start": 390, "end": 437, "text": "both \"al... aibl . . . bj /Cl . . . Ck\" and \"a,", "ref_id": null }, { "start": 438, "end": 462, "text": "... ai/bl . . . bye1 \u2022 .", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "The definitions above, however, are neither complete nor critical. In our opinion, a definition is complete only if any phenomenon in the problem domain can be properly described (defined). With regard to the character string tokenization problem proper, this completeness requirement can be translated as: given an alphabet, a dictionary, and a character string, the definition should be sufficient to answer the following two questions: (1) does this character string have tokenization ambiguity? (2) if yes, what type of ambiguity does it have?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "The definitions above cannot fulfill this completeness requirement. For instance, if al ... ai, bl \u2022 :. by, Cl ... Ck, and al ... aibl ... bjCl .. . Ck are all words in a dictionary, the character string S = al ... aibl ... bjCl ... Ck, while intuitively in Type I (conjunctive ambiguity), is, in fact, captured neither by Type I nor by Type II.", "cite_spans": [ { "start": 85, "end": 146, "text": "al ... ai, bl \u2022 :. by, Cl ... Ck, and al ... aibl ... bjCl ..", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "We agree that, although to do so would not be trivial, it is nevertheless possible to make the definitions above complete by carefully listing and including all possible cases. However, criticality, which is what is being explored in this paper, would most probably still not be captured in such a carefully generalized ambiguity definition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "What we believe to be crucial is the association between tokenization ambiguity and the maximization or minimization property of the partially ordered set on the cover relation. As will be illustrated later in this paper, such an association is exceptionally important in attempting to understand ambiguities and in developing disambiguation strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "In short, both the cover relation and critical tokenization have given us a clear picture of character string tokenization ambiguity as expressed in Theorem 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "This section clarifies the relationship between critical tokenization (CT) and three other representative implementations of the principle of maximum tokenization, i.e., forward maximum tokenization (FT), backward maximum tokenization (BT) and shortest tokenization (ST). It will be proven that ST, FT and BT are all true subclasses of CT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Tokenization", "sec_num": "6." 
}, { "text": "Let G be an alphabet, D a dictionary on the alphabet, and S a character string over the alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forward Maximum Tokenization", "sec_num": "6.1" }, { "text": "A tokenization W = wl ... Wm E TD(S) is a forward maximum tokenization of S over and D, or FT tokenization for short, if, for any k, 1 < k < m, there exist i and j, 1 < i < j < n, such that 5 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 14", "sec_num": null }, { "text": "3. This definition is in fact a descriptive interpretation of the widely recommended conventional constructive forward maximum tokenization procedure (Liu 1986a (Liu , 1986b Liang 1986 Liang , 1987 Chen and Liu 1992; Webster and Kit 1992) .", "cite_spans": [ { "start": 150, "end": 160, "text": "(Liu 1986a", "ref_id": "BIBREF30" }, { "start": 161, "end": 173, "text": "(Liu , 1986b", "ref_id": "BIBREF31" }, { "start": 174, "end": 184, "text": "Liang 1986", "ref_id": "BIBREF26" }, { "start": 185, "end": 197, "text": "Liang , 1987", "ref_id": "BIBREF28" }, { "start": 198, "end": 216, "text": "Chen and Liu 1992;", "ref_id": "BIBREF4" }, { "start": 217, "end": 238, "text": "Webster and Kit 1992)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Example 3 (cont.) The character string S --abcd has the word string abc/d as its sole FT tokenization in TD(S) = {a/b/c /d, a/b/cd, a/bc/d, a/bcd, ab/c/d, ab/cd, abc/d}, i.e ., FD(S) = {abc/d}. Example 2 (cont.) Fo(fundsand) = {\"funds and\"}, i.e., the character string fundsand has its sole FT tokenization \"funds and\".", "cite_spans": [ { "start": 120, "end": 173, "text": "/d, a/b/cd, a/bc/d, a/bcd, ab/c/d, ab/cd, abc/d}, i.e", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Example 4 (cont.) FD (S) = {\"the blueprint\"}, i.e., the word string \"the blueprint\" is the only FT tokenization for the character string S = theblueprint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "That is to say, any character string has, at most, a single FT tokenization. Moreover, if the FT tokenization exists, it is a CT tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 4 For all S E ~*, there are IFD(S)I < 1 and FD(S) c_ CD(S).", "sec_num": null }, { "text": "Certain character strings do not have FT tokenization on some dictionaries, even if they have many possible tokenizations. For example, given the alphabet G = {a, b, c, d} and the dictionary D = {a, abc, bcd}, there is TD(abcd) = {a/bcd}. But the single tokenization does not fulfill condition (3) in the definition above for k = 1, because the longer word abc exists in the dictionary. Assume both X = Xl... Xm and Y = yl... ym' are FT tokenizations, X ~ Y. Then, there must exist k, 1 < k < rain(m, m'), such that Xk, = Yk', for all k', 1 < k' < k, but Xk # yk. Since G(X) = G(Y), there must be IXkl # lYkl. Consequently, either X or Y is unable to fulfill condition (3) of definition 14. By contradiction, there must be X = Y. In other words, any character string at most has single FT tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Assume the FT tokenization X = xl ... Xm is not a CT tokenization. By Theorem 3, there must exist a CT tokenization Y = yl... ym' such that X # Y and Y < X. 
Thus, by the cover relation definition, for any substring Ys of Y, there exists substring Xs of X, such that IYsl < IXsl and G(Xs) = G(Ys). Since X # Y, there must exist k, 1 < k < min (m,m') , such that Xk, = yk', for all k', 1 <_ k' < k, but IXkl <_ lYkl. This leads to a conflict with condition (3) in the definition. In other words, X cannot be an FT tokenization if it is not a CT tokenization.", "cite_spans": [ { "start": 342, "end": 348, "text": "(m,m')", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Let G be an alphabet, D a dictionary on the alphabet, and S a character strings over the alphabet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Backward Maximum Tokenization", "sec_num": "6.2" }, { "text": "A tokenization W = Wl-..Wm C To(S) is a backward maximum tokenization of S over G and D, or BT tokenization for short, if for any k, 1 < k < m, there exist i and j, 1 < i G j < n, such that This definition is in fact a descriptive interpretation of the widely recommended conventional constructive backward maximum tokenization procedure (Liu 1986a (Liu , 1986b Liang 1986 Liang , 1987 Chen and Liu 1992; Webster and Kit 1992) . Example 2 (cont.) For the character string S = fundsand, there is Bo(fundsand) = {\"fund sand\"}. That is, the word string \"fund sand\" is the only BT tokenization.", "cite_spans": [ { "start": 338, "end": 348, "text": "(Liu 1986a", "ref_id": "BIBREF30" }, { "start": 349, "end": 361, "text": "(Liu , 1986b", "ref_id": "BIBREF31" }, { "start": 362, "end": 372, "text": "Liang 1986", "ref_id": "BIBREF26" }, { "start": 373, "end": 385, "text": "Liang , 1987", "ref_id": "BIBREF28" }, { "start": 386, "end": 404, "text": "Chen and Liu 1992;", "ref_id": "BIBREF4" }, { "start": 405, "end": 426, "text": "Webster and Kit 1992)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Definition 15", "sec_num": null }, { "text": "Example 4 (cont.) For the character string S = theblueprint, there is BD(S) = {\"the blueprint\"}. That is, the word string \"the blueprint\" is the only BT tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Definition 15", "sec_num": null }, { "text": "For all S E ~*, there are IBD(S)I ~ 1 and BD(S) C_ CD(S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 5", "sec_num": null }, { "text": "That is, any character string has at most one BT tokenization. 
Moreover, if the BT tokenization exists, it is a CT tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 5", "sec_num": null }, { "text": "Parallel to the proof for Lemma 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "The shortest tokenization operation SD is a mapping SD: ~* --* 2 D\u2022 defined as: for any S in ~*, SD(S) = {W I IW[ = minW'ETD(s)IW'I}\" Every tokenization W in SD(S) is a shortest tokenization, or ST tokenization for short, of the character string S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shortest Tokenization Definition 16", "sec_num": "6.3" }, { "text": "In other words, a tokenization W of a character string S is a shortest tokenization if and only if the word string has the minimum word string length among all possible tokenizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shortest Tokenization Definition 16", "sec_num": "6.3" }, { "text": "This definition is in fact a descriptive interpretation of the constructive shortest path finding tokenization procedure proposed by Wang (1989) and Wang, Wang, and Bai (1991) . For the character string S = fundsand, there is SD(fundsand) = {\"funds and\", \"fund sand\"}. That is, both \"funds and\" and \"fund sand\" are ST tokenizations.", "cite_spans": [ { "start": 133, "end": 144, "text": "Wang (1989)", "ref_id": "BIBREF48" }, { "start": 149, "end": 175, "text": "Wang, Wang, and Bai (1991)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Shortest Tokenization Definition 16", "sec_num": "6.3" }, { "text": "For the character string S = theblueprint, there is SD(S) = {\"the blueprint\"}. That is, the word string \"the blueprint\" is the only ST tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example 4 (cont.)", "sec_num": null }, { "text": "SD(S) c_ Co(S) for all S E E*. That is, every ST tokenization is a CT tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 6", "sec_num": null }, { "text": "Let X be an ST tokenization, X E SD(S). Assume X is not a CT tokenization, X ~ CD(S). Then, by Theorem 3, there exists a CT tokenization Y ~ CD(S), Y ~ X, such that Y < X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "By the definition of the cover relation, there is IYI < IXI. In fact, as X ~ Y, there must be IY[ < IXI. This is in conflict with the fact that X is an ST tokenization. Hence, the lemma is proven by contradiction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[] ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "The first part is the combination of Lemma 4, 5, and 6. 
The second part is exemplified by Example 3 above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "[]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "The three tokenization definitions in this section are essentially descriptive restatements of the corresponding constructive tokenization procedures, which in turn are realizations of the widely followed principle of maximum tokenization (e.g., Liu 1986; Liang 1986a Liang , 1986b Wang 1989; Jie 1989; Wang, Su, and Mo 1990; Jie, Liu, and Liang 1991a, b; Yeh and Lee 1991; Webster and Kit 1992; Chen and Liu 1992; Guo 1993; Wu and Su 1993; Nie, Jin, and Hannah 1994; Sproat et al. 1996; Wu et al. 1994; Li et al. 1995; Sun and T'sou 1995; Wong et al. 1995; Bai 1995; Sun and Huang 1996) . The first work closest to this principle, according to Liu (1986 Liu ( , 1988 , was the 5-4-3-2-1 tokenization algorithm proposed by a Russian MT practitioner in 1956. This algorithm is a special version of the greedy-type implementation of the forward maximum tokenization and is still in active use. For instance, Yun, Lee, and Rim (1995) recently applied it to Korean compound tokenization. It is understood that forward maximum tokenization, backward maximum tokenization and shortest tokenization are the three most representative and widely quoted works following the general principle of maximum tokenization. However, the principle itself is not crystal-clear in the literature. Rather, it only serves as a general guideline, as different researchers make different interpretations. As Chen and Liu (1992, 104) noted, \"there are a few variations of the sense of maximal matching.\" Hence, many variations have been derived after decades of fine-tuning and modification. As Webster and Kit (1992, 1108) acknowledged, different realizations of the principle \"were invented one after another and seemed inexhaustible.\"", "cite_spans": [ { "start": 246, "end": 255, "text": "Liu 1986;", "ref_id": null }, { "start": 256, "end": 267, "text": "Liang 1986a", "ref_id": null }, { "start": 268, "end": 281, "text": "Liang , 1986b", "ref_id": null }, { "start": 282, "end": 292, "text": "Wang 1989;", "ref_id": "BIBREF48" }, { "start": 293, "end": 302, "text": "Jie 1989;", "ref_id": "BIBREF19" }, { "start": 303, "end": 325, "text": "Wang, Su, and Mo 1990;", "ref_id": "BIBREF47" }, { "start": 326, "end": 355, "text": "Jie, Liu, and Liang 1991a, b;", "ref_id": null }, { "start": 356, "end": 373, "text": "Yeh and Lee 1991;", "ref_id": "BIBREF56" }, { "start": 374, "end": 395, "text": "Webster and Kit 1992;", "ref_id": "BIBREF50" }, { "start": 396, "end": 414, "text": "Chen and Liu 1992;", "ref_id": "BIBREF4" }, { "start": 415, "end": 424, "text": "Guo 1993;", "ref_id": "BIBREF13" }, { "start": 425, "end": 440, "text": "Wu and Su 1993;", "ref_id": "BIBREF54" }, { "start": 441, "end": 467, "text": "Nie, Jin, and Hannah 1994;", "ref_id": null }, { "start": 468, "end": 487, "text": "Sproat et al. 1996;", "ref_id": "BIBREF43" }, { "start": 488, "end": 503, "text": "Wu et al. 1994;", "ref_id": "BIBREF53" }, { "start": 504, "end": 519, "text": "Li et al. 1995;", "ref_id": "BIBREF25" }, { "start": 520, "end": 539, "text": "Sun and T'sou 1995;", "ref_id": "BIBREF45" }, { "start": 540, "end": 557, "text": "Wong et al. 
1995;", "ref_id": "BIBREF51" }, { "start": 558, "end": 567, "text": "Bai 1995;", "ref_id": "BIBREF3" }, { "start": 568, "end": 587, "text": "Sun and Huang 1996)", "ref_id": "BIBREF44" }, { "start": 645, "end": 654, "text": "Liu (1986", "ref_id": null }, { "start": 655, "end": 667, "text": "Liu ( , 1988", "ref_id": "BIBREF32" }, { "start": 906, "end": 930, "text": "Yun, Lee, and Rim (1995)", "ref_id": "BIBREF59" }, { "start": 1384, "end": 1408, "text": "Chen and Liu (1992, 104)", "ref_id": null }, { "start": 1570, "end": 1581, "text": "Webster and", "ref_id": "BIBREF50" }, { "start": 1582, "end": 1598, "text": "Kit (1992, 1108)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "While researchers generally agree that a dictionary word should be tokenized as itself, they usually have different opinions on how a non-dictionary word (critical) fragment should be tokenized. While they all agree that a certain form of extremes must be attained, they nevertheless have their own understanding of what the form should be.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "Consequently, it should come as no surprise to see various kinds of theoretical generalization or summarization work in the literature. A good representative work is by Kit and his colleagues (Jie 1989; Jie, Liu, and Liang 1991a, b; Webster and Kit 1992) , who proposed a three-dimensional structural tokenization model. This model, called ASM for Automatic Segmentation Model, is capable of characterizing up to eight classes of different maximum or minimum tokenization procedures. Among the eight procedures, based on both analytical inferences and experimental studies, both forward maximum tokenization and backward maximum tokenization are recommended as good solutions. Unfortunately, in Webster and Kit (1992, 1108) , they unnecessarily made the following overly strong claim:", "cite_spans": [ { "start": 192, "end": 202, "text": "(Jie 1989;", "ref_id": "BIBREF19" }, { "start": 203, "end": 232, "text": "Jie, Liu, and Liang 1991a, b;", "ref_id": null }, { "start": 233, "end": 254, "text": "Webster and Kit 1992)", "ref_id": "BIBREF50" }, { "start": 695, "end": 706, "text": "Webster and", "ref_id": "BIBREF50" }, { "start": 707, "end": 723, "text": "Kit (1992, 1108)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "It is believed that all elemental methods are included in this model. Furthermore, it can be viewed as the ultimate model for methods of string matching of any elements, including methods for finding English idioms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "The shortest tokenization proposed by Wang (1989) provides an obvious counterexample. As Wang (1989) exemplified 6, for the alphabet G = {a, b, c, d, e} and the 6 The original example is \"~ \"~ J~ ~ -~ \", a widely quoted Chinese phrase difficult to tokenize. Its dictionary D = {a, b, c, d, e, ab, bc, cd, de}, the character string S = abcde has FT set FD(S) = {ab/cd/e}, BT set BD(S) = {a/bc/de} and ST set SD(S) = {ab/cd/e, a/bc/de, ab/c/de}. Clearly, the ST tokenization ab/c/de, which fulfills the principle of maximum tokenization and is the desired tokenization in some cases, is neither FT nor BT tokenization. 
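The counterexample is easy to reproduce mechanically. The sketch below is an illustration only: greedy longest-match is used as the usual constructive reading of FT and BT, and ST is obtained by minimizing word count over a brute-force enumeration. It confirms that ab/c/de is a shortest tokenization yet is produced by neither the forward nor the backward procedure.

```python
def tokenizations(s, D):
    """Brute-force enumeration of T_D(s)."""
    if not s:
        return [[]]
    return [[s[:i]] + rest
            for i in range(1, len(s) + 1) if s[:i] in D
            for rest in tokenizations(s[i:], D)]

def forward_max(s, D):
    """Greedy forward maximum matching; returns None if the scan gets stuck
    (i.e. no FT tokenization exists)."""
    out = []
    while s:
        for i in range(len(s), 0, -1):       # longest prefix word first
            if s[:i] in D:
                out.append(s[:i])
                s = s[i:]
                break
        else:
            return None
    return out

def backward_max(s, D):
    """Greedy backward maximum matching; returns None if the scan gets stuck."""
    out = []
    while s:
        for i in range(len(s), 0, -1):       # longest suffix word first
            if s[-i:] in D:
                out.insert(0, s[-i:])
                s = s[:-i]
                break
        else:
            return None
    return out

D = {"a", "b", "c", "d", "e", "ab", "bc", "cd", "de"}
S = "abcde"
T = tokenizations(S, D)
shortest = min(len(t) for t in T)
ST = [t for t in T if len(t) == shortest]
print("FT:", forward_max(S, D))    # ['ab', 'cd', 'e']
print("BT:", backward_max(S, D))   # ['a', 'bc', 'de']
print("ST:", ST)                   # three tokenizations, including ['ab', 'c', 'de']
```

Incidentally, on the dictionary {a, abc, bcd} discussed under Lemma 4, forward_max("abcd", ...) returns None, matching the observation that an FT tokenization need not exist even when T_D(S) is nonempty.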
Moreover, careful checking showed that the missed ST tokenization is not in any of the eight tokenization solutions covered by the ASM model. In short, the ASM model is not a complete interpretation of the principle of maximum tokenization.", "cite_spans": [ { "start": 38, "end": 49, "text": "Wang (1989)", "ref_id": "BIBREF48" }, { "start": 89, "end": 100, "text": "Wang (1989)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "Furthermore, the shortest tokenization still does not capture all the essences of the principle. \"For instance, given the alphabet G = {a, b, c, d} and the dictionary D = {a, b, c, d, ab, bc, cd}, the character string S = abcd has the same tokenization set FD(S) = BD(S) = SD(S) = {ab/cd} for FT, BT and ST, but a different CT tokenization set CD(S) = {ab/cd, a/bc/d}. In other words, the CT tokenization a/bc/d is left out in all the other three sets. As the tokenization a/bc/d is not a subtokenization of any other possible tokenizations, it fulfills the principle of maximum tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "It is now clear that, while the principle of maximum tokenization is very useful in sentence tokenization, it lacks precise understanding in the literature. Consequently, no solution proposed in the literature is complete with regards to realizing the principle. Recall that, in the previous sections, the character string tokenization operation was modeled as the inverse of the generation operation. Under the tokenization operation, every character string can be tokenized into a set of different tokenizations. The cover relationship between tokenizations was recognized and the set of tokenizations was proven to be a poset (partially ordered set) on the cover relationship. The set of critical tokenizations was defined as the set of minimum elements in the poset. In addition, it was proven that every tokenization has at least one critical tokenization as its supertokenization and only critical tokenization has no true supertokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "Consequently, a noncritical tokenization would conflict with the principle of maximum tokenization, since it is a true subtokenization of others. As compared with its true supertokenization, it requires the extra effort of subtokenization. On the other hand, a critical tokenization would fully realize the principle of maximum tokenization, since it has already attained an extreme form and cannot be simplified or compressed further. As compared with all other tokenizations, no effort can be saved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "Based on this understanding, it is now apparent why forward maximum tokenization, backward maximum tokenization, and shortest tokenization are all special cases of critical tokenization, but not vice versa. 
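Treating critical tokenizations as the boundary-minimal elements of the tokenization poset, the gap can again be checked directly. The following illustrative sketch (not the paper's implementation) enumerates C_D(abcd) for this dictionary and shows that a/bc/d is critical even though it is missed by FT, BT, and ST.

```python
from itertools import accumulate

def tokenizations(s, D):
    """All dictionary tokenizations T_D(s) of the character string s."""
    if not s:
        return [[]]
    return [[s[:i]] + rest
            for i in range(1, len(s) + 1) if s[:i] in D
            for rest in tokenizations(s[i:], D)]

def boundary_set(W):
    """Cumulative token-boundary positions of a word string."""
    return set(accumulate(len(w) for w in W))

def critical_tokenizations(s, D):
    """Minimal elements of T_D(s) under the cover relation: a tokenization is
    critical when no other tokenization has a strictly smaller boundary set."""
    T = tokenizations(s, D)
    return [X for X in T
            if not any(boundary_set(Y) < boundary_set(X) for Y in T)]

D = {"a", "b", "c", "d", "ab", "bc", "cd"}
print(critical_tokenizations("abcd", D))
# [['a', 'bc', 'd'], ['ab', 'cd']] -- a/bc/d is critical yet missed by FT, BT and ST
```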
In addition, it has been proven, in Guo (1997) , that critical tokenization also covers other types of maximum tokenization implementations such as profile tokenization and shortest tokenization.", "cite_spans": [ { "start": 243, "end": 253, "text": "Guo (1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "We believe that critical tokenization is the only type of tokenization completely fulfilling the principle of maximum tokenization. In other words, critical tokenization is the precise mathematical description of the commonly adopted principle of maximum tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Principle of Maximum Tokenization", "sec_num": "6.5" }, { "text": "This section explores some helpful implications of critical tokenization in effective tokenization disambiguation and in efficient tokenization implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further Discussion", "sec_num": "7." }, { "text": "desired tokenization, in many contexts, is \" ~ ~ / J~ / ~r~ -~ -", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further Discussion", "sec_num": "7." }, { "text": "The relationship between the operations of sentence derivation and sentence parsing in the theory of parsing, translation, and compiling (Aho and Ullman 1972) is an obvious analogue with the relationship between the operations of character string generation and character string tokenization that are defined in this paper. As the former pair of operations is well established, and has great influence in the literature of sentence tokenization, many researchers have, either consciously or unconsciously, been trying to transplant it to the latter. We believe this worthy of reexamination.", "cite_spans": [ { "start": 137, "end": 158, "text": "(Aho and Ullman 1972)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "String Generation and Tokenization versus Language Derivation and Parsing", "sec_num": "7.1" }, { "text": "Normally, sentence derivation and parsing are governed by complex grammars. Consequently, the bulk of the work has been in developing, representing, and processing grammar. Although it is a well known fact that some sentences may have several derivations or parses, the focus has always been either on (1) grammar enhancement, such as introducing semantic categories and consistency checking rules (selectional restrictions), not to mention those great works on grammar formalisms, or on (2) ambiguity resolution, such as introducing various heuristics and tricks including leftmost parsing and operator preferences (Aho and Ullman 1972; Aho, Sethi, and Ullman 1986; Alien 1995; Grosz, Jones, and Webber 1986) .", "cite_spans": [ { "start": 616, "end": 637, "text": "(Aho and Ullman 1972;", "ref_id": "BIBREF1" }, { "start": 638, "end": 666, "text": "Aho, Sethi, and Ullman 1986;", "ref_id": "BIBREF0" }, { "start": 667, "end": 678, "text": "Alien 1995;", "ref_id": null }, { "start": 679, "end": 709, "text": "Grosz, Jones, and Webber 1986)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "String Generation and Tokenization versus Language Derivation and Parsing", "sec_num": "7.1" }, { "text": "Following this line, we observed two tendencies in tokenization research. One is the tendency to bring every possible knowledge source into the character string generation operation. For example, Gan (1995) titled his Ph.D. 
dissertation Integrating Word Boundary Disambiguation with Sentence Understanding. Here, in addition to traditional devices such as syntax and semantics, he even employed principles of psychology and chemistry, such as crystallization. Another is the tendency of enumerating almost blindly every heuristic and trick possible in ambiguity resolution. As Webster and Kit (1992, 1108) noted, \"segmentation methods were invented one after another and seemed inexhaustible.\" For example, Chen and Liu (1992) acknowledged that the heuristic of maximum matching alone has \"many variations\" and tested six different implementations.", "cite_spans": [ { "start": 196, "end": 206, "text": "Gan (1995)", "ref_id": "BIBREF8" }, { "start": 577, "end": 588, "text": "Webster and", "ref_id": "BIBREF50" }, { "start": 589, "end": 605, "text": "Kit (1992, 1108)", "ref_id": null }, { "start": 707, "end": 726, "text": "Chen and Liu (1992)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "String Generation and Tokenization versus Language Derivation and Parsing", "sec_num": "7.1" }, { "text": "We are not convinced of the effectiveness and necessity of both of the schools of tokenization research. The principle argument is, while research is by nature trial-anderror and different knowledge sources contribute to different facets of the solution, it is nonetheless more crucial and productive to understand where the core of the problem really lies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Generation and Tokenization versus Language Derivation and Parsing", "sec_num": "7.1" }, { "text": "As depicted in this paper, unlike general sentence derivation for complex natural languages, the character string generation process can be very simple and straightforward. Many seemingly important factors such as natural language syntax and semantics do not assume fundamental roles in the process. They are definitely helpful, but only at a later stage. Moreover, as emphasized in this paper, the tokenization set has some very good mathematical properties. By taking advantage of these properties, the tokenization problem can be greatly simplified. For example, among the huge number of possible tokenizations, we can first concentrate on the much smaller. critical tokenization set, since the former can be completely reproduced from the latter. Furthermore, by contrasting critical tokenizations, we can easily identify a few critically ambiguous positions, which allows us to avoid wasting energy at useless positions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "String Generation and Tokenization versus Language Derivation and Parsing", "sec_num": "7.1" }, { "text": "It is worth noting that similar ideas do exist in natural language derivation and parsing. For example, Seo and Simmons (1989) introduced the concept of the syntactic graph, which is, in essence, a union of all possible parse trees. With this graph representation, \"it is fairly easy to focus on the syntactically ambiguous points\" (p. 19, italics added).", "cite_spans": [ { "start": 104, "end": 126, "text": "Seo and Simmons (1989)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and the Syntactic Graph", "sec_num": "7.2" }, { "text": "These syntactically ambiguous points are critical in at least two senses. First, they are the only problems requiring knowledge and heuristics beyond the existing syntax. 
In other words, any syntactic or semantics development should be guided by ambiguity resolution at these points. If a semantic enhancement does not interact with any of these points, the enhancement is considered ineffective. If a grammar revision in turn leads to additional syntactically ambiguous points, such a revision would be in the wrong direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and the Syntactic Graph", "sec_num": "7.2" }, { "text": "Second, these syntactically ambiguous points are critical in efficiently resolving ambiguity. After all, these points are the only places where disambiguation decisions must be made. Ideally, we should invest no energy in investigating anything that is irrelevant to these points. However, unless all parse trees are merged together to form the syntactic graph, the only thing feasible is to check every possible position in every parse tree by applying all available knowledge and every possible heuristic, since we are unaware of the effectiveness of any checking that occurs beforehand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and the Syntactic Graph", "sec_num": "7.2" }, { "text": "The critical tokenization introduced in this paper has a similar role in string tokenization to that of the syntactic graph in sentence parsing. By Theorem 3, critical tokenization is, in essence, the union of the whole tokenization set and thus the compact representation of it. As long as the principle of maximum tokenization is accepted, the resolution of critical ambiguity in tokenization is the only problem requiring knowledge and heuristics beyond the existing dictionary. In other words, any introduction of \"high-level\" knowledge must at least be effective in resolving some critical ambiguities in tokenization. This should be a fundamental guideline in tokenization research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and the Syntactic Graph", "sec_num": "7.2" }, { "text": "Even if the principle of maximum tokenization is not accepted, critical ambiguity in tokenization must nevertheless be resolved. Therefore, any investment, as mentioned above, will not be a waste in any sense. What needs to be undertaken now is to substitute something more precise for the principle of maximum tokenization. It is only at this stage that we touch on the problem of identifying and resolving hidden ambiguity in tokenization. That is one of the reasons why this type of ambiguity is called hidden.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and the Syntactic Graph", "sec_num": "7.2" }, { "text": "The theme in this paper is to study the problem of sentence tokenization in the framework of formal languages, a direction that has recently attracted some attention. For instance, in Ma (1996) , words in a tokenization dictionary are represented as production rules and character strings are modeled as derivatives of these rules under a string concatenation operation. Although not stated explicitly in his thesis, this is obviously a finite-state model, as evidenced from his employment of (finite-) state diagrams for representing both the tokenization dictionary and character strings. The weighted finite-state transducer model developed by Sproat et al. 
(1996) is another excellent representative example.", "cite_spans": [ { "start": 184, "end": 193, "text": "Ma (1996)", "ref_id": "BIBREF37" }, { "start": 647, "end": 667, "text": "Sproat et al. (1996)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "They both stop at merely representing possible tokenizations as a single large finite-state diagram (word graph). The focus is then shifted to the problem of defining scores for evaluating each possible tokenization and to the associated problem of searching for the best-path in the word graph. To emphasize this point, Ma (1996) explicitly called his approach \"evaluation-based.\"", "cite_spans": [ { "start": 321, "end": 330, "text": "Ma (1996)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "In comparison, we have continued within the framework and established the critical tokenization together with its interesting properties. We believe the additional step is worthwhile. While tokenization evaluation is important, it would be more effective if employed at a later stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "On the one hand, critical tokenization can help greatly in developing tokenization knowledge and heuristics, especially those tokenization specific understandings, such as the observation of \"one tokenization per source\" and the trick of highlighting hidden ambiguities by contrasting competing critical tokenizations (Guo 1997) .", "cite_spans": [ { "start": 318, "end": 328, "text": "(Guo 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "While it may not be totally impossible to fully incorporate such knowledge and heuristics into the general framework of path evaluation and searching, they are apparently employed neither in Sproat et al. (1996) nor in Ma (1996) . Further, what has been implemented in the two systems is basically a token unigram function, which has been shown to be practically irrelevant to hidden ambiguity resolution and not to be much better than some simple maximum tokenization approaches such as shortest tokenization (Guo 1997) .", "cite_spans": [ { "start": 191, "end": 211, "text": "Sproat et al. (1996)", "ref_id": "BIBREF43" }, { "start": 219, "end": 228, "text": "Ma (1996)", "ref_id": "BIBREF37" }, { "start": 510, "end": 520, "text": "(Guo 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "On the other hand, critical tokenization can help significantly in boosting tokenization efficiency. As has been observed, the tokenization of about 98% of the text can be completed in the first parse of critical point identification, which can be done in linear time. 
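As an illustration of that first parse, the sketch below identifies critical points with a forward/backward reachability computation over the dictionary. It assumes, consistently with the earlier definitions, that a critical point is a position that is a token boundary in every tokenization; the demo string fundsandbook and its dictionary are hypothetical, and this is only one way to realize the linear-time claim, not the implementation reported in Guo (1997). The running time is roughly linear in the string length for a bounded maximum word length.

```python
def critical_points(s, D):
    """Interior positions of s that are token boundaries in every tokenization
    of s over D (assuming at least one tokenization exists).  A position is
    critical iff no word usable in some tokenization straddles it."""
    n = len(s)
    L = max((len(w) for w in D), default=0)          # longest dictionary word

    # forward[i]: s[:i] is tokenizable; backward[j]: s[j:] is tokenizable
    forward = [False] * (n + 1)
    forward[0] = True
    for i in range(1, n + 1):
        forward[i] = any(forward[i - k] and s[i - k:i] in D
                         for k in range(1, min(L, i) + 1))
    backward = [False] * (n + 1)
    backward[n] = True
    for j in range(n - 1, -1, -1):
        backward[j] = any(j + k <= n and s[j:j + k] in D and backward[j + k]
                          for k in range(1, L + 1))

    # Mark every position straddled by a word that occurs in some tokenization.
    straddled = [False] * (n + 1)
    for i in range(n):
        if not forward[i]:
            continue
        for k in range(1, min(L, n - i) + 1):
            if s[i:i + k] in D and backward[i + k]:
                for m in range(i + 1, i + k):
                    straddled[m] = True
    return [p for p in range(1, n) if not straddled[p]]

# Hypothetical demo: two fragments joined at an unambiguous boundary.
D = {"fund", "funds", "and", "sand", "book"}
print(critical_points("fundsandbook", D))   # [8] -- the boundary before "book"
```

The returned positions cut the string into critical fragments; only the fragments that admit more than one critical tokenization need any later disambiguation stage.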
Moreover, as practically all acceptable tokenizations are critical tokenizations and ambiguous critical fragments are generally very short, the remaining 2% of the text with tokenization ambiguities can also be settled efficiently through critical tokenization generation and disambiguation (Guo 1997) .", "cite_spans": [ { "start": 560, "end": 570, "text": "(Guo 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "In comparison, if the best path is to be searched on the token graph of a complete sentence, while a simple evaluation function such as token unigram cannot be very effective in ambiguity resolution, a sophisticated evaluation function incorporating multiple knowledge sources, such as language experiences, statistics, syntax, semantics, and discourse as suggested in Ma (1996) , can only be computationally prohibitive, as Ma himself acknowledged.", "cite_spans": [ { "start": 369, "end": 378, "text": "Ma (1996)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "In summary, the critical tokenization is crucial both in knowledge development for effective tokenization disambiguation and in system implementation for complete and efficient tokenization. Further discussions and examples can be found in Guo (1997) .", "cite_spans": [ { "start": 240, "end": 250, "text": "Guo (1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Critical Tokenization and Best-Path Finding", "sec_num": "7.3" }, { "text": "The objective in this paper has been to lay down a mathematical foundation for sentence tokenization. As the basis of the overall mathematical model, we have introduced both sentence generation and sentence tokenization operations. What is unique here is our attempt to model sentence tokenization as the inverse problem of sentence generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8." }, { "text": "Upon that basis, both critical point and critical fragment constitute our first group of findings. We have proven that, under a complete dictionary assumption, critical points in sentences are all and only unambiguous token boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8." }, { "text": "Critical tokenization is the most important concept among the second group of findings. We have proven that every tokenization has a critical tokenization as its supertokenization. That is, any tokenization can be reproduced from a critical tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8." }, { "text": "Critical ambiguity and hidden ambiguity in tokenization constitute our third group of findings. We have proven that tokenization ambiguity can be categorized as either critical type or hidden type. Moreover, it has been shown that critical tokenization provides a sound basis for precisely describing various types of tokenization ambiguities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8." }, { "text": "In short, we have presented a complete and precise understanding of ambiguity in sentence tokenizations. 
While the existence of tokenization ambiguities is jointly described by critical points and critical fragments, the characteristics of tokenization ambiguities will be jointly specified by critical ambiguities and hidden ambiguities. Moreover, we have proven that the three widely employed tokenization algorithms, namely forward maximum matching, backward maximum matching, and shortest length matching, are all subclasses of critical tokenization and that critical tokenization is the precise mathematical description of the principle of maximum tokenization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8." }, { "text": "In this paper, we have also discussed some important implications of the notion of critical tokenization in the area of character string tokenization research and development. In this area, our primary claim is that critical tokenization is an excellent intermediate representation that offers much assistance both in the development of effective tokenization knowledge and heuristics and in the improvement and implementation of efficient tokenization algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8." }, { "text": "Besides providing a framework to better understand previous wor k, as has been attempted here, a good formalization should also lead to new questions and insights. While some of the findings and observations achieved so far (Guo 1997) have been mentioned here, much more work remains to be done.", "cite_spans": [ { "start": 224, "end": 234, "text": "(Guo 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Summary", "sec_num": "8." }, { "text": "This definition is adapted fromAho and Ullman (1972, 15).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Even so, some researchers might still insist that the character x here is just for temporary use and cannot be regarded as a regular word with the many linguistic properties generally associated with words. Understanding the importance of such a distinction, we will use the more generic term token, rather than the loaded term word, when we need to highlight the distinction. It must be added, however, that the two are largely used interchangeably in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although Webster and Kit include the modifier Chinese, the definition has nothing to do with specific characteristics of Chinese but is general (multilingual).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note, as a widely adopted convention, in case k ~ 1, Wl \u2022 Wk_ 1 represents the empty word string v and Cl... Ck-1 represents the empty character string e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The author would like to thank Ho-Chung Lui for his supervision, and Kok-Wee Gan, Zhibiao Wu, Zhendong Dong, Paul Horng Jyh Wu, Kim-Teng Lua, Chunyu Kit, and Teow-Hin Ngair for many helpful discussions. The author is also very grateful to four anonymous reviewers for their insightful comments on earlier versions of the paper. 
Alexandra Vaz Hugh and Ng Chay Hwee helped in correcting grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Compilers, Principles, Techniques, and Tools", "authors": [ { "first": "Alfred", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "R", "middle": [], "last": "Sethi", "suffix": "" }, { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aho, Alfred V., R. Sethi, and Jeffrey D. Ullman. 1986. Compilers, Principles, Techniques, and Tools. Addison-Wesley Publishing Co.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The Theory of Parsing", "authors": [ { "first": "Alfred", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1972, "venue": "Translation, and Compiling", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aho, Alfred V. and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling, Volume 1: Parsing. Prentice-Hall, Inc.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural Language Understanding", "authors": [ { "first": "James", "middle": [], "last": "Allen", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen, James. 1995. Natural Language Understanding, 2nd edition. The Benjamin/Cummings Publishing Co.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An integrated model of Chinese word segmentation and part of speech tagging", "authors": [ { "first": "Shuanhu", "middle": [], "last": "Bai", "suffix": "" } ], "year": 1995, "venue": "Advances and Applications on Computational Linguistics", "volume": "", "issue": "", "pages": "56--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bai, Shuanhu. 1995. An integrated model of Chinese word segmentation and part of speech tagging. In Liwei Chen and Qi Yuan, editors, Advances and Applications on Computational Linguistics. Tsinghua University Press, pages 56-61.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word identification for Mandarin Chinese sentences", "authors": [ { "first": "Keh-Jiann", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shing-Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING'92)", "volume": "", "issue": "", "pages": "101--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Keh-Jiann and Shing-Huan Liu. 1992. Word identification for Mandarin Chinese sentences. In Proceedings of the 14th International Conference on Computational Linguistics (COLING'92), pages 101-107.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical models for word segmentation and unknown word resolution", "authors": [ { "first": "Tung-Hui", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Jing-Shin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Ming-Yu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 5th R.O.C. 
Computational Linguistics Conference (ROCLING V)", "volume": "", "issue": "", "pages": "121--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chiang, Tung-Hui, Jing-Shin Chang, Ming-Yu Lin, and Keh-Yih Su. 1992. Statistical models for word segmentation and unknown word resolution. In Proceedings of the 5th R.O.C. Computational Linguistics Conference (ROCLING V), pages 121-146, Taiwan.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic word identification in Chinese sentences by the relaxation technique", "authors": [ { "first": "C-K", "middle": [], "last": "Fan", "suffix": "" }, { "first": "W-H", "middle": [], "last": "Tsai", "suffix": "" } ], "year": 1988, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "1", "pages": "33--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fan, C-K. and W-H. Tsai. 1988. Automatic word identification in Chinese sentences by the relaxation technique. Computer Processing of Chinese and Oriental Languages 4(1):33-56.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Integrating Word Boundary Disambiguation with Sentence Understanding", "authors": [ { "first": "Kok-Wee", "middle": [], "last": "Gan", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gan, Kok-Wee. 1995. Integrating Word Boundary Disambiguation with Sentence Understanding. Ph.D. dissertation, Department of Computer Science and Information Systems, National University of Singapore.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A statistically emergent approach for language processing: Application to modeling context effects in ambiguous Chinese word boundary perception", "authors": [ { "first": "", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Kok-Wee", "suffix": "" }, { "first": "Kim-Teng", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "", "middle": [], "last": "Lua", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "4", "pages": "531--553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gan, Kok-Wee, Martha Palmer, and Kim-Teng Lua. 1996. A statistically emergent approach for language processing: Application to modeling context effects in ambiguous Chinese word boundary perception. Computational Linguistics 22(4):531-553.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Computational Analysis of English: A Corpus-based Approach", "authors": [ { "first": "Roger", "middle": [], "last": "Garside", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garside, Roger, Geoffrey Leech, and Geoffrey Sampson, editors. 1987. The Computational Analysis of English: A Corpus-based Approach. 
Longman, London.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generalized Phrase Structure Grammar", "authors": [ { "first": "G", "middle": [], "last": "Gazdar", "suffix": "" }, { "first": "E", "middle": [], "last": "Klein", "suffix": "" }, { "first": "G", "middle": [], "last": "Pullum", "suffix": "" }, { "first": "I", "middle": [], "last": "Sag", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gazdar, G., E. Klein, G. Pullum, and I. Sag. 1985. Generalized Phrase Structure Grammar. Harvard University Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Readings in Natural Language Processing", "authors": [ { "first": "B", "middle": [ "J" ], "last": "Grosz", "suffix": "" }, { "first": "K", "middle": [ "S" ], "last": "Jones", "suffix": "" }, { "first": "B", "middle": [ "L" ], "last": "Webber", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, B. J., K. S. Jones, and B. L. Webber, editors. 1986. Readings in Natural Language Processing. M. Kaufmann Publishers.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Statistical language modeling and some experimental results on Chinese syllables to words transcription", "authors": [ { "first": "Jin", "middle": [], "last": "Guo", "suffix": "" } ], "year": 1993, "venue": "Journal of Chinese Information Processing", "volume": "7", "issue": "1", "pages": "18--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo, Jin. 1993. Statistical language modeling and some experimental results on Chinese syllables to words transcription. Journal of Chinese Information Processing 7(1):18-27.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Chinese Language Modeling for Speech Recognition", "authors": [ { "first": "Jin", "middle": [], "last": "Guo", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo, Jin. 1997. Chinese Language Modeling for Speech Recognition. Ph.D. dissertation, Institute of Systems Science, National University of Singapore.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Design principles of an expert system for automatic word segmentation of written Chinese texts", "authors": [ { "first": "Kekang", "middle": [], "last": "He", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Sun", "suffix": "" } ], "year": 1991, "venue": "Journal of Chinese Information Processing", "volume": "5", "issue": "2", "pages": "1--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "He, Kekang, Hui Xu, and Bo Sun. 1991. Design principles of an expert system for automatic word segmentation of written Chinese texts. Journal of Chinese Information Processing 5(2):1-14.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A produce-test approach to automatic segmentation of written Chinese", "authors": [ { "first": "Xiangxi", "middle": [], "last": "Huang", "suffix": "" } ], "year": 1989, "venue": "Journal of Chinese Information Processing", "volume": "3", "issue": "4", "pages": "42--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Xiangxi. 1989. A produce-test approach to automatic segmentation of written Chinese. 
Journal of Chinese Information Processing 3(4):42-49.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Essays on Language Information Processing", "authors": [ { "first": "Changning", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Xia", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, Changning and Ying Xia, editors. 1996. Essays on Language Information Processing. Tsinghua University Press, Beijing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Word Grammar", "authors": [ { "first": "R", "middle": [ "A" ], "last": "Hudson", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hudson, R. A. 1984. Word Grammar. Basil Blackwell.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A systematic approach model for methods of Chinese automatic word segmentation and their evaluation", "authors": [ { "first": "Chunyu", "middle": [], "last": "Jie", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the Chinese Computing Conference", "volume": "", "issue": "", "pages": "71--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie, Chunyu. 1989. A systematic approach model for methods of Chinese automatic word segmentation and their evaluation. In Proceedings of the Chinese Computing Conference, pages 71-78, Beijing.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "On the methods of Chinese automatic segmentation", "authors": [ { "first": "Chunyu", "middle": [], "last": "Jie", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nanyuan", "middle": [], "last": "Liang", "suffix": "" } ], "year": 1991, "venue": "Journal of Chinese Information Processing", "volume": "3", "issue": "1", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie, Chunyu, Yuan Liu, and Nanyuan Liang. 1991a. On the methods of Chinese automatic segmentation. Journal of Chinese Information Processing 3(1):1-9.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The design and implementation of the CASS practical automatic Chinese word segmentation system", "authors": [ { "first": "Chunyu", "middle": [], "last": "Jie", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nanyuan", "middle": [], "last": "Liang", "suffix": "" } ], "year": 1991, "venue": "Journal of Chinese Information Processing", "volume": "5", "issue": "4", "pages": "27--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jie, Chunyu, Yuan Liu, and Nanyuan Liang. 1991b. The design and implementation of the CASS practical automatic Chinese word segmentation system. Journal of Chinese Information Processing 5(4):27-34.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Identifying unknown words in Chinese corpora", "authors": [ { "first": "Wanying", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 1995 Natural Language Processing Paci~c Rim Symposium (NLPRS'95)", "volume": "", "issue": "", "pages": "234--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin, Wanying and Lu Chen. 1995. Identifying unknown words in Chinese corpora. 
In Proceedings of the 1995 Natural Language Processing Paci~c Rim Symposium (NLPRS'95), pages 234-239, Seoul.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Discrete Mathematical Structures for Computer Science", "authors": [ { "first": "Bernard", "middle": [], "last": "Kolman", "suffix": "" }, { "first": "Robert", "middle": [ "C" ], "last": "Busby", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kolman, Bernard and Robert C. Busby. 1987. Discrete Mathematical Structures for Computer Science, 2nd edition.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A tagging-based first-order Markov model approach to automatic word identification for Chinese sentences", "authors": [ { "first": "T", "middle": [ "B Y" ], "last": "Lai", "suffix": "" }, { "first": "S", "middle": [ "C" ], "last": "Lun", "suffix": "" }, { "first": "C", "middle": [ "F" ], "last": "Sun", "suffix": "" }, { "first": "M", "middle": [ "S" ], "last": "Sun", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 1992 International Conference on Computer Processing of Chinese and Oriental Languages", "volume": "", "issue": "", "pages": "17--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lai, T. B. Y., S. C. Lun, C. F. Sun, and M. S. Sun. 1992. A tagging-based first-order Markov model approach to automatic word identification for Chinese sentences. In Proceedings of the 1992 International Conference on Computer Processing of Chinese and Oriental Languages, pages 17-23.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Corpus-based maximum-length Chinese noun phrase extraction", "authors": [ { "first": "Wen-Jie", "middle": [], "last": "Li", "suffix": "" }, { "first": "H-H", "middle": [], "last": "Pan", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "K-F", "middle": [], "last": "Wong", "suffix": "" }, { "first": "V", "middle": [], "last": "Lum", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 1995 Natural Language Processing Pacific Rim Symposium (NLPRS'95)", "volume": "", "issue": "", "pages": "246--251", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, Wen-Jie, H-H. Pan, M. Zhou, K-F. Wong, and V. Lum. 1995. Corpus-based maximum-length Chinese noun phrase extraction. In Proceedings of the 1995 Natural Language Processing Pacific Rim Symposium (NLPRS'95), pages 246-251, Seoul.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "On computer automatic word segmentation of written", "authors": [ { "first": "Nanyuan", "middle": [], "last": "Liang", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang, Nanyuan. 1986. On computer automatic word segmentation of written", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "CDWS--A written Chinese automatic word segmentation system", "authors": [ { "first": "Nanyuan", "middle": [], "last": "Liang", "suffix": "" } ], "year": 1987, "venue": "Journal of Chinese Information Processing", "volume": "1", "issue": "2", "pages": "44--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang, Nanyuan. 1987. CDWS--A written Chinese automatic word segmentation system. 
Journal of Chinese Information Processing 1(2):44-52.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The knowledge of Chinese words segmentation", "authors": [ { "first": "Nanyuan", "middle": [], "last": "Liang", "suffix": "" } ], "year": 1990, "venue": "Journal of Chinese Information Processing", "volume": "4", "issue": "2", "pages": "29--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang, Nanyuan. 1990. The knowledge of Chinese words segmentation. Journal of Chinese Information Processing 4(2):29-33.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Language Modernization and Computer", "authors": [ { "first": "Yongquan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Yongquan. 1986a. Language Modernization and Computer. Wuhan University Press, Wuhan, China.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "On dictionary", "authors": [ { "first": "Yongquan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1986, "venue": "Journal of Chinese Information Processing", "volume": "1", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Yongquan. 1986b. On dictionary. Journal of Chinese Information Processing 1(1).", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Word re-examination", "authors": [ { "first": "Yongquan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1988, "venue": "Journal of Chinese Information Processing", "volume": "2", "issue": "2", "pages": "47--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Yongquan. 1988. Word re-examination. Journal of Chinese Information Processing 2(2):47-50.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Contemporary Chinese Language Word Segmentation Specification for Information Processing and Automatic Word Segmentation Methods", "authors": [ { "first": "Yuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Xukun", "middle": [], "last": "Shen", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, Yuan, Qiang Tan, and Xukun Shen. 1994. Contemporary Chinese Language Word Segmentation Specification for Information Processing and Automatic Word Segmentation Methods. Tsinghua University Press, Beijing.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "From character to word--An application of information theory", "authors": [ { "first": "Kim", "middle": [], "last": "Lua", "suffix": "" }, { "first": "", "middle": [], "last": "Teng", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "4", "pages": "304--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lua, Kim Teng. 1990. From character to word--An application of information theory. 
Computer Processing of Chinese and Oriental Languages 4(4):304-313.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Application of information theory binding in word segmentation", "authors": [ { "first": "Kim", "middle": [], "last": "Lua", "suffix": "" }, { "first": "", "middle": [], "last": "Teng", "suffix": "" } ], "year": 1994, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "8", "issue": "1", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lua, Kim Teng. 1994. Application of information theory binding in word segmentation. Computer Processing of Chinese and Oriental Languages 8(1):115-124.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Experiments on the use of bigram mutual information in Chinese natural language processing", "authors": [ { "first": "Kim", "middle": [], "last": "Lua", "suffix": "" }, { "first": "", "middle": [], "last": "Teng", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages (ICCPOL-95)", "volume": "", "issue": "", "pages": "306--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lua, Kim Teng. 1995. Experiments on the use of bigram mutual information in Chinese natural language processing. In Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages (ICCPOL-95), pages 306-313, Hawaii.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The study and realization of an evaluation-based automatic segmentation system", "authors": [ { "first": "Yan", "middle": [], "last": "Ma", "suffix": "" } ], "year": 1996, "venue": "Essays in Language Information Processing", "volume": "", "issue": "", "pages": "2--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ma, Yan. 1996. The study and realization of an evaluation-based automatic segmentation system. In Changning Huang and Ying Xia, editors, Essays in Language Information Processing. Tsinghua University Press, Beijing, pages 2-36.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "A hybrid approach to unknown word detection and segmentation of", "authors": [ { "first": "Jieyun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Wanying", "suffix": "" }, { "first": "M-L", "middle": [], "last": "Hannan", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nie, Jieyun, Jin Wanying, and M-L. Hannan. 1994. A hybrid approach to unknown word detection and segmentation of", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Proceedings of the International Conference on Chinese Computing 1994 (ICCC-94)", "authors": [ { "first": "", "middle": [], "last": "Chinese", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "326--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chinese. 
In Proceedings of the International Conference on Chinese Computing 1994 (ICCC-94), pages 326-335, Singapore.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Broad coverage automatic morphological segmentation of German words", "authors": [ { "first": "T", "middle": [], "last": "Pachunke", "suffix": "" }, { "first": "O", "middle": [], "last": "Mertineit", "suffix": "" }, { "first": "K", "middle": [], "last": "Wothke", "suffix": "" }, { "first": "R", "middle": [], "last": "Schmidt", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING'92)", "volume": "", "issue": "", "pages": "1218--1222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pachunke, T., O. Mertineit, K. Wothke, and R. Schmidt. 1992. Broad coverage automatic morphological segmentation of German words. In Proceedings of the 14th International Conference on Computational Linguistics (COLING'92), pages 1218-1222, Nantes, France.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Syntactic graphs: A representation for the union of all ambiguous parse trees", "authors": [ { "first": "J", "middle": [], "last": "Seo", "suffix": "" }, { "first": "R", "middle": [ "F" ], "last": "Simmons", "suffix": "" } ], "year": 1989, "venue": "Computational Linguistics", "volume": "15", "issue": "1", "pages": "19--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seo, J. and R. F. Simmons. 1989. Syntactic graphs: A representation for the union of all ambiguous parse trees. Computational Linguistics 15(1):19-32.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "A statistical method for finding word boundaries in Chinese text", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Chilin", "middle": [], "last": "Shih", "suffix": "" } ], "year": 1990, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "4", "issue": "4", "pages": "336--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, Richard and Chilin Shih. 1990. A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese and Oriental Languages 4(4):336-349.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A stochastic finite-state word-segmentation algorithm for Chinese", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Chilin", "middle": [], "last": "Shih", "suffix": "" }, { "first": "William", "middle": [], "last": "Gale", "suffix": "" }, { "first": "Nancy", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "3", "pages": "377--404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sproat, Richard, Chilin Shih, William Gale, and Nancy Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics 22(3):377-404.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Word segmentation and part-of-speech tagging for unrestricted Chinese texts", "authors": [ { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Changning", "middle": [], "last": "Huang", "suffix": "" } ], "year": 1996, "venue": "Tutorial given at the 1996 International Conference on Chinese Computing (ICCC-96)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Maosong and Changning Huang. 1996. 
Word segmentation and part-of-speech tagging for unrestricted Chinese texts. Tutorial given at the 1996 International Conference on Chinese Computing (ICCC-96), Singapore.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Ambiguity resolution in Chinese word segmentation", "authors": [ { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "T'sou", "middle": [], "last": "Benjemin", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the lOth Pacific Asia Conference on Language, Information and Computation (PACLIC-95)", "volume": "", "issue": "", "pages": "121--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, Maosong and Benjemin T'sou. 1995. Ambiguity resolution in Chinese word segmentation. In Proceedings of the lOth Pacific Asia Conference on Language, Information and Computation (PACLIC-95), pages 121-126, Hong Kong.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Identification of unknown words from corpus", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Tung", "suffix": "" }, { "first": "H", "middle": [ "J" ], "last": "Lee", "suffix": "" } ], "year": 1994, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "8", "issue": "", "pages": "131--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tung, C. H. and H. J. Lee. 1994. Identification of unknown words from corpus. Computer Processing of Chinese and Oriental Languages 8(Supplement):131-146.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Automatic processing Chinese word", "authors": [ { "first": "Yongcheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Haiju", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Mo", "suffix": "" } ], "year": 1990, "venue": "Journal of Chinese Information Processing", "volume": "4", "issue": "4", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, Yongcheng, Haiju Su, and Yan Mo. 1990. Automatic processing Chinese word. Journal of Chinese Information Processing 4(4):1-11.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Word Separating and Mutual Translation of Syllable and Character Strings", "authors": [ { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, Xiaolong. 1989. Word Separating and Mutual Translation of Syllable and Character Strings. Ph.D. dissertation, Department of Computer Science and Engineering, Harbin Institute of Technology, Harbin, China.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Separating syllables and characters into words in natural language understanding", "authors": [ { "first": "Xiaolong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kaizhu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaohua", "middle": [], "last": "Bai", "suffix": "" } ], "year": 1991, "venue": "Journal of Chinese Information Processing", "volume": "5", "issue": "3", "pages": "48--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, Xiaolong, Kaizhu Wang, and Xiaohua Bai. 1991. Separating syllables and characters into words in natural language understanding. 
Journal of Chinese Information Processing 5(3):48-58.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Tokenization as the initial phase in NLP", "authors": [ { "first": "Jonathan", "middle": [ "J" ], "last": "Webster", "suffix": "" }, { "first": "Chunyu", "middle": [], "last": "Kit", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING'92)", "volume": "1", "issue": "", "pages": "106--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Webster, Jonathan J. and Chunyu Kit. 1992. Tokenization as the initial phase in NLP. In Proceedings of the 14th International Conference on Computational Linguistics (COLING'92), pages 1,106-1,110, Nantes, France.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A tool for computer-assisted open response analysis", "authors": [ { "first": "K-E", "middle": [], "last": "Wong", "suffix": "" }, { "first": "H-H", "middle": [], "last": "Pan", "suffix": "" }, { "first": "B-T", "middle": [], "last": "Low", "suffix": "" }, { "first": "C-H", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "V", "middle": [], "last": "Lure", "suffix": "" }, { "first": "S-S", "middle": [], "last": "Lain", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages", "volume": "", "issue": "", "pages": "191--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wong, K-E, H-H. Pan, B-T. Low, C-H. Cheng, V. Lure, and S-S. Lain. 1995. A tool for computer-assisted open response analysis. In Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages, pages 191-198, Hawaii.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "A parallel approach for identifying word boundaries in Chinese text", "authors": [ { "first": "K", "middle": [ "E" ], "last": "Wong", "suffix": "" }, { "first": "V", "middle": [ "Y" ], "last": "Lum", "suffix": "" }, { "first": "C-Y.", "middle": [], "last": "Leung", "suffix": "" }, { "first": "C-H", "middle": [], "last": "Leung", "suffix": "" }, { "first": "W-K", "middle": [], "last": "Kan", "suffix": "" }, { "first": "L-C", "middle": [], "last": "Chan", "suffix": "" } ], "year": 1994, "venue": "IPOC Technical Report", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wong, K. E, V. Y. Lum, C-Y. Leung, C-H. Leung, W-K. Kan, and L-C. Chan. 1994. A parallel approach for identifying word boundaries in Chinese text. IPOC Technical Report, /CHIRP/WP/SE/022, Department of Systems Engineering, Chinese University of Hong Kong.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Corpus-based speech and language research in the Institute of Systems Science", "authors": [ { "first": "Horng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jin", "middle": [], "last": "Jyh", "suffix": "" }, { "first": "", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Chung", "middle": [], "last": "Ho", "suffix": "" }, { "first": "Hwee Boon", "middle": [], "last": "Lui", "suffix": "" }, { "first": "", "middle": [], "last": "Low", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the International Symposium on Speech, Image Processing and Neural Networks (ISPIPNN'94)", "volume": "", "issue": "", "pages": "142--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, Horng Jyh, Jin Guo, Ho Chung Lui, and Hwee Boon Low. 1994. 
Corpus-based speech and language research in the Institute of Systems Science. In Proceedings of the International Symposium on Speech, Image Processing and Neural Networks (ISPIPNN'94), pages 142-145, Hong Kong.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Corpus-based automatic compound extraction with mutual information and relative frequency count", "authors": [ { "first": "Ming-Wen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Keh-Yih", "middle": [], "last": "Su", "suffix": "" } ], "year": 1993, "venue": "Proceedings of R.O.C. Computational Linguistics Conference (ROCLING) VI", "volume": "", "issue": "", "pages": "207--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, Ming-Wen and Keh-Yih Su. 1993. Corpus-based automatic compound extraction with mutual information and relative frequency count. In Proceedings of R.O.C. Computational Linguistics Conference (ROCLING) VI, pages 207-216, Taiwan.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "A rule-based Chinese automatic segmentation system", "authors": [ { "first": "", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Gui-Ping", "middle": [], "last": "Tian-Shun", "suffix": "" }, { "first": "Ying-Ming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1990, "venue": "Journal of Chinese Information Processing", "volume": "4", "issue": "1", "pages": "37--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yao, Tian-Shun, Gui-Ping Zhang, and Ying-Ming Wu. 1990. A rule-based Chinese automatic segmentation system. Journal of Chinese Information Processing 4(1):37-43.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Rule-based word identification for Mandarin Chinese sentences--A unification approach", "authors": [ { "first": "C-L", "middle": [], "last": "Yeh", "suffix": "" }, { "first": "H-J", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yeh, C-L. and H-J. Lee. 1991. Rule-based word identification for Mandarin Chinese sentences--A unification approach.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Analysis of Japanese compound nouns using collocation information", "authors": [ { "first": "K", "middle": [], "last": "Yosiyuki", "suffix": "" }, { "first": "T", "middle": [], "last": "Takenobu", "suffix": "" }, { "first": "T", "middle": [], "last": "Hozumi", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING'92)", "volume": "", "issue": "", "pages": "865--869", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yosiyuki, K., T. Takenobu, and T. Hozumi. 1992. Analysis of Japanese compound nouns using collocation information. 
In Proceedings of the 14th International Conference on Computational Linguistics (COLING'92), pages 865-869, Nantes, France.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Analysis of Korean compound nouns using statistical information", "authors": [ { "first": "B-H", "middle": [], "last": "Yun", "suffix": "" }, { "first": "H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "H-C", "middle": [], "last": "Rim", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages (ICCPOL-95)", "volume": "", "issue": "", "pages": "76--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yun, B-H., H. Lee, and H-C. Rim. 1995. Analysis of Korean compound nouns using statistical information. In Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages (ICCPOL-95), pages 76-79, Honolulu, Hawaii.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "A method of word identification for Chinese by constraint satisfaction and statistical optimization techniques", "authors": [ { "first": "Jun-Sheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhi-Da", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shun-De", "middle": [], "last": "Chen", "suffix": "" } ], "year": 1991, "venue": "Proceedings of R.O.C. Computational Linguistics Conference (ROCLING) IV", "volume": "", "issue": "", "pages": "147--165", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Jun-Sheng, Zhi-Da Chen, and Shun-De Chen. 1991. A method of word identification for Chinese by constraint satisfaction and statistical optimization techniques. In Proceedings of R.O.C. Computational Linguistics Conference (ROCLING) IV, pages 147-165, Taiwan.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "A multi-corpus approach to recognition of proper names in Chinese texts", "authors": [ { "first": "Jun-Sheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shun-De", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Ker", "suffix": "" }, { "first": "Y", "middle": [], "last": "Chan", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Liu", "suffix": "" } ], "year": 1994, "venue": "Computer Processing of Chinese and Oriental Languages", "volume": "8", "issue": "1", "pages": "73--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Jun-Sheng, Shun-De Chen, S. J. Ker, Y. Chan, and J. S. Liu. 1994. A multi-corpus approach to recognition of proper names in Chinese texts. Computer Processing of Chinese and Oriental Languages 8(1):73-86.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "are two tokenizations X = x_1 ... x_s and Y = y_1 ... y_t in T_D(S), such that G(x_1 ... x_u) = c_1 ... c_p and G(x_{u+1} ... x_s) = c_{p+1} ... c_n for some index u, and for any index v, there is neither G(y_1 ... y_v) = c_1 ... c_p nor G(y_{v+1} ... y_t) = c_{p+1} ... c_n. Otherwise, the position has no tokenization ambiguity, or is an unambiguous token boundary.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "character string S = c_1 ... c_n is a critical point, if for any word string W = w_1 ... w_m in T_D(S), there exists an index k, 0 ≤ k ≤ m, such that G(w_1 ... w_k) = c_1 ... c_p and G(w_{k+1} ... w_m) = c_{p+1} ... c_n. In particular, the starting position 0 and the ending position n are the two ordinary critical points. Substring c_{p+1} ...
c_q is a critical fragment of S on D, if both p and q are critical points and any other position r in between them, p < r < q, is not a critical point.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "Example 3 (cont.) The poset T_D(abcd) = {a/b/c/d, a/b/cd, a/bc/d, a/bcd, ab/c/d, ab/cd, abc/d} can be graphically presented in the Hasse diagram in Figure 1.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "[Hasse diagram graphic for Figure 1.]", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Hasse diagram for the poset T_D(abcd) = {a/b/c/d, a/b/cd, a/bc/d, a/bcd, ab/c/d, ab/cd, abc/d}.", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": "(cont.) Let Σ = {a, b, c, d} and D = {a, b, c, d, ab, bc, cd, abc, bcd}. There is C_D(abcd) = {abc/d, ab/cd, a/bcd}. If D' = {a, b, c, d, ab, bc, cd}, then C_{D'}(abcd) = {a/bc/d, ab/cd}.", "uris": null, "num": null }, "FIGREF6": { "type_str": "figure", "text": "(cont.) Given T_D(S) = {a/b/c/d, a/b/cd, a/bc/d, a/bcd, ab/c/d, ab/cd, abc/d}, there is C_D(S) = {abc/d, ab/cd, a/bcd} ⊆ T_D(S). By splitting the word abc in abc/d ∈ C_D(S) into a/b/c, ab/c or a/bc, we can make another three tokenizations in T_D(S): a/b/c/d, ab/c/d and a/bc/d. Similarly, from ab/cd, we can bring back a/b/c/d, ab/c/d and a/b/cd; and from a/bcd, we can recover a/b/c/d, a/b/cd and a/bc/d. By merging all word strings produced together with word strings in C_D(S) = {abc/d, ab/cd, a/bcd}, the entire tokenization set T_D(S) is reclaimed.", "uris": null, "num": null }, "FIGREF7": { "type_str": "figure", "text": "any i', 1 ≤ i' < i, there is c_{i'} ... c_j ∉ D. The backward maximum tokenization operation is a mapping B_D: Σ* → 2^{D*} defined as: for any S ∈ Σ*, B_D(S) = {W | W is a BT tokenization of S over Σ and D}.", "uris": null, "num": null }, "FIGREF8": { "type_str": "figure", "text": "For the character string S = abcd, the word string a/bcd is the only BT tokenization in T_D(S) = {a/b/c/d, a/b/cd, a/bc/d, a/bcd, ab/c/d, ab/cd, abc/d}. That is, B_D(S) = {a/bcd}.", "uris": null, "num": null }, "FIGREF9": { "type_str": "figure", "text": "Example 3 (cont.) Given the character string S = abcd. For the dictionary D = {a, b, c, d, ab, bc, cd, abc, bcd}, both abc/d and a/bcd are ST tokenizations in T_D(S) = {a/b/c/d, a/b/cd, a/bc/d, a/bcd, ab/c/d, ab/cd, abc/d}. That is, S_D(S) = {abc/d, a/bcd}. For D' = {a, b, c, d, ab, bc, cd}, however, there is S_{D'}(S) = {ab/cd}. Note, in this case, the CT tokenization a/bc/d is not in S_{D'}(S). Example 2 (cont.)", "uris": null, "num": null }, "FIGREF10": { "type_str": "figure", "text": "F_D(S) ∪ B_D(S) ⊆ C_D(S) and S_D(S) ⊆ C_D(S) for all S ∈ Σ*. Moreover, there exists S ∈ Σ*, such that F_D(S) ∪ B_D(S) ≠ C_D(S) or S_D(S) ≠ C_D(S). That is, the forward maximum tokenization, the backward maximum tokenization, and the shortest tokenization are all true subclasses of critical tokenization.", "uris": null, "num": null } } } }
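As a concrete companion to the worked example above (Σ = {a, b, c, d}, D = {a, b, c, d, ab, bc, cd, abc, bcd}), the following Python sketch enumerates the exhaustive tokenization set T_D(S), picks out the critical tokenizations C_D(S), and runs forward and backward maximum matching F_D(S) and B_D(S). It is an illustrative sketch only, not the paper's implementation: the function names are invented, critical tokenizations are computed under the assumption that they are exactly the tokenizations whose boundary sets are minimal (one operational reading of the word string cover relation), and shortest tokenization is not modeled.

def tokenizations(s, dictionary):
    """All ways to rewrite the character string s as a string of dictionary words (T_D)."""
    if s == "":
        return [[]]
    results = []
    for i in range(1, len(s) + 1):
        head = s[:i]
        if head in dictionary:
            for rest in tokenizations(s[i:], dictionary):
                results.append([head] + rest)
    return results

def boundaries(words):
    """Internal token-boundary positions induced by a tokenized word string."""
    positions, pos = set(), 0
    for w in words[:-1]:
        pos += len(w)
        positions.add(pos)
    return positions

def critical(s, dictionary):
    """Tokenizations whose boundary set is minimal among all tokenizations:
    no other tokenization's boundaries form a proper subset of theirs (C_D)."""
    all_toks = tokenizations(s, dictionary)
    return [x for x in all_toks
            if not any(boundaries(y) < boundaries(x) for y in all_toks if y != x)]

def forward_max(s, dictionary):
    """Greedy longest-prefix matching (forward maximum tokenization, F_D)."""
    words = []
    while s:
        for i in range(len(s), 0, -1):
            if s[:i] in dictionary:
                words.append(s[:i])
                s = s[i:]
                break
        else:
            return None  # dead end: no dictionary word starts here
    return words

def backward_max(s, dictionary):
    """Greedy longest-suffix matching (backward maximum tokenization, B_D)."""
    words = []
    while s:
        for i in range(len(s), 0, -1):
            if s[-i:] in dictionary:
                words.insert(0, s[-i:])
                s = s[:-i]
                break
        else:
            return None  # dead end: no dictionary word ends here
    return words

if __name__ == "__main__":
    S = "abcd"
    D = {"a", "b", "c", "d", "ab", "bc", "cd", "abc", "bcd"}
    D2 = {"a", "b", "c", "d", "ab", "bc", "cd"}
    show = lambda toks: sorted("/".join(t) for t in toks)
    print("T_D(abcd)   =", show(tokenizations(S, D)))    # 7 tokenizations
    print("C_D(abcd)   =", show(critical(S, D)))         # ['a/bcd', 'ab/cd', 'abc/d']
    print("C_D'(abcd)  =", show(critical(S, D2)))        # ['a/bc/d', 'ab/cd']
    print("F_D(abcd)   =", "/".join(forward_max(S, D)))  # abc/d
    print("B_D(abcd)   =", "/".join(backward_max(S, D))) # a/bcd

Run on S = abcd, the sketch reproduces the seven-member T_D(abcd), C_D(abcd) = {abc/d, ab/cd, a/bcd}, C_{D'}(abcd) = {a/bc/d, ab/cd}, F_D(abcd) = {abc/d} and B_D(abcd) = {a/bcd}. On this example F_D(S) ∪ B_D(S) ⊆ C_D(S) holds, while the critical tokenization ab/cd is reached by neither greedy scan, illustrating why the maximum-matching variants are proper subclasses of critical tokenization.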