{ "paper_id": "C96-1036", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:51:55.686031Z" }, "title": "N-th Order Ergodie Multigram HMM for Modeling of Languages without Marked Word Boundaries", "authors": [ { "first": "Hubert Hin-Cheung", "middle": [], "last": "Law", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of IIong Kong", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "I,;rgodie IIMMs have been successfully used for modeling sentence production. llowever for some oriental languages such as Chinese, a word can consist of multiple characters without word boundary markers between adjacent words in a sentence. This makes wordsegmentation on the training and testing data necessary before ergodic ItMM can be applied as the langnage model. This paper introduces the N-th order Ergodic Mnltigram HMM for language modeling of such languages. Each state of the IIMM can generate a variable number of characters corresponding to one word. The model can be trained without wordsegmented and tagged corpus, and both segmentation and tagging are trained in one single model. Results on its applicw Lion on a Chinese corpus are reported.", "pdf_parse": { "paper_id": "C96-1036", "_pdf_hash": "", "abstract": [ { "text": "I,;rgodie IIMMs have been successfully used for modeling sentence production. llowever for some oriental languages such as Chinese, a word can consist of multiple characters without word boundary markers between adjacent words in a sentence. This makes wordsegmentation on the training and testing data necessary before ergodic ItMM can be applied as the langnage model. This paper introduces the N-th order Ergodic Mnltigram HMM for language modeling of such languages. Each state of the IIMM can generate a variable number of characters corresponding to one word. The model can be trained without wordsegmented and tagged corpus, and both segmentation and tagging are trained in one single model. Results on its applicw Lion on a Chinese corpus are reported.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Statistical language modeling offers advantages including minimal domain specific knowledge and hand-written rules, trainability and scalability given a language corpus. Language models, such as N-gram class models (Brown et al., 1992) and Ergodic Hidden Markov Models (Kuhn el, al., 1994) were proposed and used in applications such as syntactic class (POS) tagging for English (Cutting et al., 1992) , clustering and scoring of recognizer sentence hypotheses.", "cite_spans": [ { "start": 215, "end": 235, "text": "(Brown et al., 1992)", "ref_id": "BIBREF0" }, { "start": 269, "end": 289, "text": "(Kuhn el, al., 1994)", "ref_id": null }, { "start": 379, "end": 401, "text": "(Cutting et al., 1992)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "IIowever, in Chinese and many other oriental languages, there are no boundary markers, such as space, between words. Therefore preprocessors have to be used to perform word segmentation in order to identify individual words before applying these word-based language models. 
{ "text": "To reduce the impact of erroneous segmentation on the subsequent language model, (Chang and Chan, 1993) used an N-best segmentation interface between them. However, since this is still a two-stage model, the parameters of the whole model cannot be optimized together, and an N-best interface is inadequate for processing outputs from recognizers, which can be highly ambiguous.", "cite_spans": [ { "start": 81, "end": 103, "text": "(Chang and Chan, 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "A better approach is to keep all possible segmentations in a lattice form, score the lattice with a language model, and finally retrieve the best candidate by dynamic programming or some search algorithm. N-gram models are usually used for scoring (Gu et al., 1991) (Nagata, 1994), but their training requires the sentences of the corpus to be manually segmented, and even class-tagged if a class-based N-gram is used, as in (Nagata, 1994).", "cite_spans": [ { "start": 253, "end": 269, "text": "(Gu et al., 1991)", "ref_id": "BIBREF5" }, { "start": 270, "end": 285, "text": "(Nagata, 1994)", "ref_id": null }, { "start": 428, "end": 442, "text": "(Nagata, 1994)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "A language model which considers segmentation ambiguities, integrates them with an N-gram model, and can be trained and tested on a raw, unsegmented and untagged corpus is highly desirable for processing languages without marked word boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "1" }, { "text": "Based on the Hidden Markov Model, the Ergodic Multigram Hidden Markov Model (Law and Chan, 1996), when applied as a language model, can process an unsegmented input corpus directly, as it allows a variable number of characters in each word class. Other than that, its properties are similar to those of Ergodic Hidden Markov Models (Kuhn et al., 1994), in that both training and scoring can be done directly on a raw, untagged corpus, given a lexicon with word classes. Specifically, the N-th order Ergodic Multigram HMM, like a conventional class-based (N+1)-gram model, assumes a doubly stochastic process in sentence production. The word-class sequence in a sentence follows the N-th order Markov assumption, i.e. the identity of a class in the sentence depends only on the previous N classes, and the word observed depends only on the class it belongs to. The difference is that this is a multigram model (Deligne and Bimbot, 1995) in the sense that each state (i.e. node in the HMM) can generate a variable number of observed characters. Sentence boundaries are modeled as a special class.", "cite_spans": [ { "start": 77, "end": 98, "text": "(Law and Chan, 1996)", "ref_id": null }, { "start": 331, "end": 350, "text": "(Kuhn et al., 1994)", "ref_id": null }, { "start": 926, "end": 952, "text": "(Deligne and Bimbot, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2.1" },
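{ "text": "A minimal sketch of the doubly stochastic assumption described above, assuming hypothetical probability lookups rather than the paper's trained parameters: the likelihood of one particular segmentation and class tagging is the product, over words, of the class probability given the previous N classes and the word probability given its class.

def segmentation_likelihood(segmentation, class_ngram_prob, word_given_class_prob, N):
    # segmentation is a list of (word, class) pairs for one path through the model.
    # class_ngram_prob(c, history) and word_given_class_prob(w, c) are assumed
    # callables supplying P(class | previous N classes) and P(word | class);
    # history holds the previous N classes, oldest first.
    likelihood = 1.0
    history = ['$'] * N                  # sentence-boundary class for k <= 0
    for word, cls in segmentation:
        likelihood *= (class_ngram_prob(cls, tuple(history))
                       * word_given_class_prob(word, cls))
        history = history[1:] + [cls]    # shift the class history by one word
    return likelihood

The likelihood of the whole sentence would then be the sum of this quantity over all proper segmentations, as formalized in section 2.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2.1" },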
{ "text": "This model can be applied to an input sentence or a character lattice as a language model. The maximum likelihood state sequence through the model, obtained using the Viterbi or Stack Decoding Algorithm, represents the best segmentation and class tagging for the input sentence or lattice, since a transition between states denotes a word boundary and the state identity denotes the current word class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "2.1" }, { "text": "Lexicon. A lexicon (CKIP, 1993) of 78,322 words, each containing up to 10 characters, is available for use in this work. Practically all characters have an entry in the lexicon, so that out-of-vocabulary words are modeled as individual characters. There is a total of 192 syntactic classes, arranged in a hierarchical way. For example, the month names are denoted by the class Ndabc, where N denotes Nouns, Nd denotes Temporal Nouns, Nda time names and Ndab reusable time names. There is a total of 8 major categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.2", "sec_num": null }, { "text": "Each word in the dictionary is annotated with one or more syntactic tags, representing the different syntactic classes the word can possibly belong to. Also, a frequency count for each word, based on a certain corpus, is given, but without information on its distribution over the different syntactic classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.2", "sec_num": null }, { "text": "Terminology. Let W be the set of all Chinese words in the lexicon. A word w_k ∈ W is made up of one or more characters. Let s_1^T = (s_1, s_2, ..., s_T) denote a sentence as a T-character sequence. A function δ_w is defined such that δ_w(w_k, s_t^{t+r-1}) is 1 if w_k is the r-character word s_t ... s_{t+r-1}, and 0 otherwise.[1] Let R be the upper bound of r, i.e. the maximum number of characters in a word (10 in this paper). Let C = {c_1 ... c_L} be the set of syntactic classes, where L is the number of syntactic classes in the lexicon (192 in our case). Let 𝒞 ⊂ W × C denote the relation for all syntactic classifications of the lexicon, such that (w_k, c_l) ∈ 𝒞 iff c_l is one of the syntactic classes for w_k. Each word w_k must belong to one or more of the classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "[1] The algorithm to be described assumes that the character identities are known for the sentence s_1^T, but it can also be applied when each character position s_t becomes a set of possible character candidates, by simply letting δ_w(w_k, s_t^{t+r-1}) = 1 for all words w_k which can be constructed from the character positions s_t ... s_{t+r-1} of the input character lattice. This enables the model to be used as the language model component for recognizers and for decoding phonetic input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "A path through the model represents a particular segmentation and class tagging for the sentence. Let 𝓛 = (w_1, c_{l_1}; ...; w_K, c_{l_K}) be a particular segmentation and class tagging for the sentence s_1^T, where w_k is the kth word and c_{l_k} denotes the class assigned to w_k, as illustrated below. For 𝓛 to be proper, δ_w(w_k, s_{t_{k-1}}^{t_k - 1}) = 1 and (w_k, c_{l_k}) ∈ 𝒞 must be satisfied, where t_0 = 1, t_K = T + 1 and t_{k-1} < t_k for 1 ≤ k ≤ K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null },
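{ "text": "The sketch below restates the terminology above in Python, as an illustration under assumed toy inputs rather than the paper's implementation: delta_w mirrors the indicator function δ_w, and proper_segmentations enumerates the word sequences satisfying the properness conditions, i.e. the segmentation alternatives the model has to consider.

R = 10  # upper bound on the number of characters in a word

def delta_w(w, s, t, r):
    # 1 iff w is the r-character word s_t ... s_{t+r-1} (t is 0-based here).
    return 1 if s[t:t + r] == w else 0

def proper_segmentations(s, lexicon):
    # Enumerate every word sequence w_1 ... w_K that covers s; attaching a class
    # tag c_{l_k} from the relation 𝒞 to each word would complete a path 𝓛.
    if not s:
        yield []
        return
    for r in range(1, min(R, len(s)) + 1):
        w = s[:r]
        if w in lexicon or r == 1:       # unknown characters act as one-character words
            for rest in proper_segmentations(s[r:], lexicon):
                yield [w] + rest", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null },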
l\"(,r C Co be proper, I1' 2,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": ".,~_, ) 1 aml (wk,cl~) C l' must be saCistied, where t0 = 1, tic = 7'+ 1 and tk-j < l,, for 1 < k < K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "ItMM S|:a|;es for l;.he N-th order model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.4", "sec_num": null }, { "text": "In Che tirst order IIMM (class 1)it(am) lnodel, each I1MM state corresl)onds directly to the word-class of a word. lhlt in general, for an N-Oh order IIMM model, siuce each class depends on N previous classes, each state has to rel)lJesellt C]I(t COlil])illa- where tie iS the current word (:lass, ci, is the previ-()us word class, etc. '['here is a CeCal of L N states, which may nleall too many l)aranl('ters (l/v+l possible state transitions, each state can transit to L other states) for the model if N is anything greater th an ont.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.4", "sec_num": null }, { "text": "'1'o solve this l)rol)lem, a reasonal)le aSSllllllilion can })c luade that the d('taih'xl (;lass idea titles of a mor(~ (listanl, word have, in general, less influence than the closer ones Co the current word class. Thus instead of using C as tim classitication relation for all l)revious words, a set of I~I'he ;algorithm to bc described ;tSSUlnCs tlt~Lt, th(,.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.4", "sec_num": null }, { "text": "(:ha.r;tctcr identities arc known for the S(!lltCttC(~ 8; ?, })It(, it can *also be al)plicd when ca.ch charttctcr position sL becomes a. set of possible (:h~u'a(:ter (:~Lndida.t, es by simply letting &,,(wk,sl +''-I) --i for all words wk which can be constructed from the c]mr~t(:ter positions st...st+, 1 of the input c]mractcr lattice. This enal)les the mo(M to 1)e used as the languzLgc model component for r(!(:ognizcrs and for decoding phoncti(: input. classification relations {C(\u00b0), C(1),...C (N-l) } can be used, where C(\u00b0) = C represents the original, most detailed classification relation for the current word, and C (n) is the less detailed classification scheme for the nth previous word at each state. Thus the number of states reduces", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.4", "sec_num": null }, { "text": "to LQ ----L(\u00b0)L (1) ...L (N-l) in which L('0 _ < L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.4", "sec_num": null }, { "text": "Each state is represented as Qi = (c~\u00b0o)... elN-_~ O) where C (n) = {cln)}, 1 < I < L (n) is the class tag set for the nth previous word.", "cite_spans": [ { "start": 44, "end": 53, "text": "elN-_~ O)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "2.4", "sec_num": null }, { "text": "However, if no constraints are imposed on the series of classification relations C Oo , the number of possible transitions may increase despite a decrease in the number of states, since state transitions may become possible between every two state, resulting in a total of L(\u00b0)2L (02 ... 
{ "text": "The sentence likelihood is formulated using the N-th order Markov assumption and representing the class history as HMM states. $ denotes the sentence boundary, c_{l_k} is $ for k ≤ 0, and Q_{l_k} = (c_{l_k}^{(0)}, c_{l_{k-1}}^{(1)}, ..., c_{l_{k-N+1}}^{(N-1)}). Note that Q_{l_k} can be determined from c_{l_k} and Q_{l_{k-1}} due to the constraint on the classification, and thus P(Q_{l_k} | Q_{l_{k-1}}) = P(c_{l_k} | Q_{l_{k-1}}).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Likelihood Formulation", "sec_num": "2.5" }, { "text": "The likelihood of the sentence s_1^T under the model is given by the sum of the likelihoods of its possible segmentations. Given the segmentation and class sequence 𝓛 of a sentence, the state sequence (Q_{l_1} ... Q_{l_K}) can be derived from the class sequence (c_{l_1} ... c_{l_K}). Thus the observation probability of the sentence and segmentation is the product, over the K words, of the class transition probability P(c_{l_k} | Q_{l_{k-1}}) and the word observation probability P(w_k | c_{l_k}). Given this formulation, the training procedure is mostly similar to that of the first order Ergodic Multigram HMM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Likelihood Formulation", "sec_num": "2.5" }, { "text": "The forward variable is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forward and Backward Procedure", "sec_num": "3.2" }, { "text": "α_t(i) = P(s_1 ... s_t, Q_{l(t)} = Q_i | Θ_N)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forward and Backward Procedure", "sec_num": "3.2" }, { "text": "where Q_{l(t)} is the state of the HMM when the word containing the character s_t as its last character is produced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Forward and Backward Procedure", "sec_num": "3.2" }, { "text": "[Results table fragment: training size 98K: 194.009, 214.096, 246.613, 286.721; 1.3M: 126.084, 122.304, 121.606, 121.776; 6.3M: 118.531, ...]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": null, "sec_num": null } ] } }