{
"paper_id": "Y16-2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:47:26.871738Z"
},
"title": "A Generalized Framework for Hierarchical Word Sequence Language Model",
"authors": [
{
"first": "Xiaoyi",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Computational Linguistics Laboratory",
"location": {
"postCode": "8916-5",
"settlement": "Takayama, Ikoma",
"country": "Nara Japan"
}
},
"email": "xiaoyi-w@is.naist.jp"
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Computational Linguistics Laboratory",
"location": {
"postCode": "8916-5",
"settlement": "Takayama, Ikoma",
"country": "Nara Japan"
}
},
"email": "kevinduh@is.naist.jp"
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology Computational Linguistics Laboratory",
"location": {
"postCode": "8916-5",
"settlement": "Takayama, Ikoma",
"country": "Nara Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Language modeling is a fundamental research problem that has wide application for many NLP tasks. For estimating probabilities of natural language sentences, most research on language modeling use n-gram based approaches to factor sentence probabilities. However, the assumption under n-gram models is not robust enough to cope with the data sparseness problem, which affects the final performance of language models. At the point, Hierarchical Word Sequence (abbreviated as HWS) language models can be viewed as an effective alternative to normal n-gram method. In this paper, we generalize HWS models into a framework, where different assumptions can be adopted to rearrange word sequences in a totally unsupervised fashion, which greatly increases the expandability of HWS models. For evaluation, we compare our rearranged word sequences to conventional n-gram word sequences. Both intrinsic and extrinsic experiments verify that our framework can achieve better performance, proving that our method can be considered as a better alternative for ngram language models.",
"pdf_parse": {
"paper_id": "Y16-2004",
"_pdf_hash": "",
"abstract": [
{
"text": "Language modeling is a fundamental research problem that has wide application for many NLP tasks. For estimating probabilities of natural language sentences, most research on language modeling use n-gram based approaches to factor sentence probabilities. However, the assumption under n-gram models is not robust enough to cope with the data sparseness problem, which affects the final performance of language models. At the point, Hierarchical Word Sequence (abbreviated as HWS) language models can be viewed as an effective alternative to normal n-gram method. In this paper, we generalize HWS models into a framework, where different assumptions can be adopted to rearrange word sequences in a totally unsupervised fashion, which greatly increases the expandability of HWS models. For evaluation, we compare our rearranged word sequences to conventional n-gram word sequences. Both intrinsic and extrinsic experiments verify that our framework can achieve better performance, proving that our method can be considered as a better alternative for ngram language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Probabilistic Language Modeling is a fundamental research direction of Natural Language Processing. It is widely used in various application such as machine translation (Brown et al., 1990) , spelling correction (Mays et al., 1990) , speech recognition (Ra-biner and Juang, 1993) , word prediction (Bickel et al., 2005) and so on.",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF4"
},
{
"start": 212,
"end": 231,
"text": "(Mays et al., 1990)",
"ref_id": "BIBREF14"
},
{
"start": 253,
"end": 279,
"text": "(Ra-biner and Juang, 1993)",
"ref_id": null
},
{
"start": 298,
"end": 319,
"text": "(Bickel et al., 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most research about Probabilistic Language Modeling, such as Katz back-off (Katz, 1987) , Kneser-Ney (Kneser and Ney, 1995) , and modified Kneser-Ney (Chen and Goodman, 1999) , only focus on smoothing methods because they all take the n-gram approach (Shannon, 1948) as a default setting for modeling word sequences in a sentence. Yet even with 30 years worth of newswire text, more than one third of all trigrams are still unseen (Allison et al., 2005) , which cannot be distinguished accurately even using a high-performance smoothing method such as modified Kneser-Ney (abbreviated as MKN) .",
"cite_spans": [
{
"start": 75,
"end": 87,
"text": "(Katz, 1987)",
"ref_id": "BIBREF11"
},
{
"start": 90,
"end": 123,
"text": "Kneser-Ney (Kneser and Ney, 1995)",
"ref_id": "BIBREF12"
},
{
"start": 150,
"end": 174,
"text": "(Chen and Goodman, 1999)",
"ref_id": "BIBREF7"
},
{
"start": 251,
"end": 266,
"text": "(Shannon, 1948)",
"ref_id": "BIBREF17"
},
{
"start": 431,
"end": 453,
"text": "(Allison et al., 2005)",
"ref_id": "BIBREF0"
},
{
"start": 561,
"end": 592,
"text": "Kneser-Ney (abbreviated as MKN)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An alternative solution is to factor the language model probabilities such that the number of unseen sequences are reduced. It is necessary to extract them in another way, instead of only using the information of left-to-right continuous word order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In (Guthrie et al., 2006) , skip-gram (Huang et al., 1993) 1 is proposed to overcome the data sparseness problem. For each n-gram word sequence, the skip-gram model enumerates all possible word combinations to increase valid sequences. This has truly helped to decrease the unseen sequences, but we should not neglect the fact that it also brings a greatly increase of processing time and redundant contexts.",
"cite_spans": [
{
"start": 3,
"end": 25,
"text": "(Guthrie et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 38,
"end": 58,
"text": "(Huang et al., 1993)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In (Wu and Matsumoto, 2014) , a heuristic approach is proposed to convert any raw sentence into a hierarchical word sequence (abbreviated as HWS) structure, by which much more valid word sequences can be modeled while remaining the model size as small as that of n-gram. In , instead of only using the information of word frequency, the information of direction and word association are also used to construct higher quality HWS structures. However, they are all specific methods based on certain heuristic assumptions. For the purpose of further improvements, it is also necessary to generalize those models into one unified structure.",
"cite_spans": [
{
"start": 3,
"end": 27,
"text": "(Wu and Matsumoto, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. In Section 2, we review the HWS language model. Then we present a generalized hierarchical word sequence structure (GHWSS) in Section 3. In Section 4, we present two strategies for rearranging word sequences under the framework of GHWSS. In Sections 5 and 6, we show the effectiveness of our model by both intrinsic experiments and extrinsic experiments. Finally, we summarize our findings in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In (Wu and Matsumoto, 2014) , the HWS structure is constructed from training data in an unsupervised way as follows:",
"cite_spans": [
{
"start": 3,
"end": 27,
"text": "(Wu and Matsumoto, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "Suppose that we have a frequency-sorted vocabulary list",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "V = {v 1 , v 2 , ..., v m }, where C(v 1 ) \u2265 C(v 2 ) \u2265 ... \u2265 C(v m ) 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "According to V , given any sentence S = w 1 , w 2 , ..., w n , the most frequently used word w i \u2208 S(1 \u2264 i \u2264 n) can be selected 3 for splitting S into two substrings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "S L = w 1 , ..., w i\u22121 and S R = w i+1 , ..., w n . Sim- ilarly, for S L and S R , w j \u2208 S L (1 \u2264 j \u2264 i \u2212 1) and w k \u2208 S R (i + 1 \u2264 k \u2264 n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "can also be selected, by which S L and S R can be splitted into two smaller substrings separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "Executing this process recursively until all the substrings become empty strings, then a tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "T = ({w i , w j , w k , ...}, {(w i , w j ), (w i , w k ), ...})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "can be generated, which is defined as an HWS structure (Figure 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 65,
"text": "(Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
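{
"text": "To make the recursive construction above concrete, the following is a minimal Python sketch (our illustration, not the authors' released code; the function name is hypothetical). It splits each span at its most frequent word, breaking ties by taking the first occurrence, and returns the HWS tree as a root index plus parent-child edges.\n\ndef build_hws(words, freq, lo=0, hi=None):\n    # Recursively split words[lo:hi] at its most frequent word (first occurrence on ties)\n    # and return (root index, list of (parent, child) index pairs), i.e. the HWS tree edges.\n    if hi is None:\n        hi = len(words)\n    if lo >= hi:\n        return None, []\n    root = max(range(lo, hi), key=lambda k: (freq.get(words[k], 0), -k))\n    left_root, edges = build_hws(words, freq, lo, root)\n    right_root, right_edges = build_hws(words, freq, root + 1, hi)\n    edges += right_edges\n    if left_root is not None:\n        edges.append((root, left_root))\n    if right_root is not None:\n        edges.append((root, right_root))\n    return root, edges",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},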
{
"text": "In an HWS structure T , assuming that each node depends on its preceding n-1 parent nodes, then spe-2 C(v) represents the frequency of v in a certain corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "3 If wi appears multiple times in S, then select the first one. The advantage of HWS models can be considered as discontinuity. Taking Figure 1 as an example, since n-gram model is a continuous language model, in its structure, the second 'as' depends on 'soon', while in the HWS structure, the second 'as' depends on the first 'as', forming a discontinuous pattern to generate the word 'soon', which is closer to our linguistic intuition. Rather than 'as soon ...', taking 'as ... as' as a pattern is more reasonable because 'soon' is quite easy to be replaced by other words, such as 'fast', 'high', 'much' and so on. Consequently, even using 4-gram or 5-gram, sequences consisting of 'soon' and its nearby words tend to be lowfrequency because the connection of 'as...as' is still interrupted. On the contrary, the HWS model extracts sequences in a discontinuous way, even 'soon' is replaced by another word, the expression 'as...as' won't be affected. This is how the HWS models relieve the data sparseness problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "The HWS model is essentially an n-gram language model based on a different assumption that a word depends upon its nearby high-frequency words instead of its preceding words. Different from other special n-gram language models, such as class-based language model (Brown et al., 1992 ), factored language model(FLM) (Bilmes and Kirchhoff, 2003 ), HWS language model doesn't use any specific linguistic knowledge or any abstracted categories. Also, differs from dependency tree language models (Shen et al., 2008) (Chen et al., 2012) , HWS language model constructs a tree structure in an unsupervised fashion.",
"cite_spans": [
{
"start": 263,
"end": 282,
"text": "(Brown et al., 1992",
"ref_id": "BIBREF5"
},
{
"start": 315,
"end": 342,
"text": "(Bilmes and Kirchhoff, 2003",
"ref_id": "BIBREF3"
},
{
"start": 492,
"end": 511,
"text": "(Shen et al., 2008)",
"ref_id": "BIBREF18"
},
{
"start": 512,
"end": 531,
"text": "(Chen et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "In HWS structure, word sequences are adjusted so that irrelevant words can be filtered out from contexts and long distance information can be used for predicting the next word, which make it more effective and flexible in relieving the data sparseness problem. On this point, it has something in common with structured language model (Chelba, 1997) , which firstly introduced parsing into language modeling. The significant difference is, structured language model is based on CFG parsing structures, while HWS model is based on patternoriented structures.",
"cite_spans": [
{
"start": 334,
"end": 348,
"text": "(Chelba, 1997)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Review of HWS Language Model",
"sec_num": "2"
},
{
"text": "Suppose we are given a sentence s = w 1 , w 2 , ..., w n and a permutation function f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "s \u2192 s , where s = w 1 , w 2 , ..., w n is a permutation of s. For each word index i(1 \u2264 i \u2264 n, w i \u2208 s), there is a corresponding reordered index j(1 \u2264 j \u2264 n, w j \u2208 s , w j = w i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "Then we create an n \u00d7 n matrix A. For each row j, we fill cell A j,i with w i . We define the matrix A as the generalized hierarchical word sequence structure (abbreviated as GHWSS) of the sentence s. An example is shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 232,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "In a GHWSS, given any word w \u2208 {A j,i | w' j = w i }, the words in its higher rows are X = {A k,m | k < j, 1 \u2264 m \u2264 n, w' k = w m }, in which the nearest two neighbors of w are l\u0302 = A k l ,m l (k l < j, m l = argmin 1\u2264m<i (i \u2212 m)) and r\u0302 = A k r ,m r (k r < j, m r = argmin i<m\u2264n (m \u2212 i)) respectively 4 . Then we assume that w depends on \u0175 = l\u0302 if k l > k r , or \u0175 = r\u0302 if k l < k r . For example, in Figure 2 , given the word 'soon', its higher rows form X = {as, as, possible, .}, in which the nearest neighbors of 'soon' are l\u0302 = as and r\u0302 = as; since the second 'as' is closer to 'soon' vertically, we assume that 'soon' depends on the second 'as' in this GHWSS.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
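{
"text": "As a rough illustration of this dependency rule (a sketch under our own naming, not code from the paper), the following Python function takes the permuted order of original word positions and returns, for each position, the position of the word it depends on, with None standing for the start symbol <s>.\n\ndef ghwss_heads(order):\n    # order[j] = i means that row j of the GHWSS holds the word at original position i.\n    heads = {}   # original position -> original position of its head\n    placed = {}  # original position -> row index where that word was placed\n    for j, i in enumerate(order):\n        left = max((m for m in placed if m < i), default=None)\n        right = min((m for m in placed if m > i), default=None)\n        if left is None and right is None:\n            heads[i] = None  # depends on the start symbol <s>\n        elif right is None or (left is not None and placed[left] > placed[right]):\n            heads[i] = left  # the nearest left neighbour sits in a lower, i.e. vertically closer, row\n        else:\n            heads[i] = right\n        placed[i] = j\n    return heads\n\n# e.g. for 'as soon as possible .' permuted as '. as as possible soon',\n# ghwss_heads([4, 0, 2, 3, 1]) makes 'soon' depend on the second 'as', as in Figure 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},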
{
"text": "Further, for the word A 1,i , we define that it depends on symbol ' s '. We also use the symbol ' /s ' to represent the end of generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "For each word w = A j,i , if we assume that it only depends on its previous few words in its dependency chain, then we can achieve special n-grams under the GHWSS. Taking Figure 2 as the example, we can train 3-grams like {( s , s , .), ( s , ., as), (., as, as), (as, as, possible), (as, possible, /s ), (as, as, soon), (as, soon, /s )}.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 179,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "In , it is verified that the performance of HWS model can be further improved by using directional information. Thus, in this paper, we defaultly use directional information to model word sequences. Then the above 3-grams should be {",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "( s , s , .), ( s , .-R, /s ), ( s , .-L, as), (.-L, as-L, /s ), (.-L, as-R, as), (as-R, as-L, soon), (as-L, soon-L, /s ), (as-L, soon-R, /s ), (as-R, as-R, possible), (as-R, possible-L, /s ), (as- R, possible-R, /s )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "and the probability of the whole sentence 'as soon as possible .' can be estimated by the product of conditional probabilities of all these word sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Hierarchical Word Sequence Structure",
"sec_num": "3"
},
{
"text": "Once a permutation function f is implemented, the GHWSS of any sentence can be constructed. Thus, the performance of GHWSS is totally determined by how to implement the function f for rearranging word sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Strategies for constructing GHWSS",
"sec_num": "4"
},
{
"text": "Since n-gram models assume that a word depends on its previous n-1 words, the function f of n-gram methods can be considered as the identity permutation. For each word w i , we fill cell A i,i with w i , then the n-gram method is a special case of GHWSS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Strategies for constructing GHWSS",
"sec_num": "4"
},
{
"text": "In this section, we propose two kinds of methods for implementing function f under GHWSS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Strategies for constructing GHWSS",
"sec_num": "4"
},
{
"text": "Step 1. Calculate word frequencies from training data and sort all words by their frequency. Assume we get a frequency-sorted list",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Based Method",
"sec_num": "4.1"
},
{
"text": "V = {v 1 , v 2 , ..., v m }, where C(v j ) > C(v j+1 ), 1 \u2264 j \u2264 m \u2212 1. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Based Method",
"sec_num": "4.1"
},
{
"text": "Step 2. According to V , for each sentence s = w 1 , w 2 , ..., w n , we permute it into s =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Based Method",
"sec_num": "4.1"
},
{
"text": "w 1 , w 2 , ..., w n (w k = v x , w k+1 = v y , 1 \u2264 k \u2264 n \u2212 1, 1 \u2264 x \u2264 y \u2264 m).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Based Method",
"sec_num": "4.1"
},
{
"text": "Then the GHWSS constructed by the permutation s is equivalent to that of frequency-based HWS method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Based Method",
"sec_num": "4.1"
},
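{
"text": "A minimal sketch of this frequency-based permutation (our illustration; the function name is hypothetical): a stable sort of the word positions by corpus frequency, so that equally frequent words keep their left-to-right order, as required in Step 2.\n\ndef frequency_permutation(sentence, freq):\n    # Return the original word positions in GHWSS row order:\n    # most frequent words first, ties kept in left-to-right order (sorted is stable).\n    return sorted(range(len(sentence)), key=lambda i: -freq.get(sentence[i], 0))\n\n# e.g. frequency_permutation(['as', 'soon', 'as', 'possible', '.'], freq)\n# gives [4, 0, 2, 3, 1] when freq ranks '.' > 'as' > 'possible' > 'soon'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Based Method",
"sec_num": "4.1"
},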
{
"text": "Step 1. For each sentence s in corpus D, we convert it into s , in which each word only appear once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "Step 2. For each word w i in the corpus D = {s i |1 \u2264 i \u2264 |D|}, we count its frequency C(w i ) and its cooccurrence with another word C(w i , w j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "Step 3. For each original sentence s \u2208 D, we initiate an empty list X and set the beginning symbol ' s ' as the initial context c 6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "Step 4. For each word w \u2208 s, we calculate its word association score with context c. In this paper, we use T-score 7 as the word association measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "T (c, w) = (C(c, w) \u2212 C(c) \u00d7 C(w) V ) \u00f7 C(c, w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "(1) Then we add the i-th word\u0175 with the maximum score to list X 8 and use it to split s into two substrings s l = w 1 , ..., w i\u22121 and s r = w i+1 , ..., w n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "Step 5. We set\u0175 as the new context c . For each word in s l , we calculate its word association score with c and add the word with the maximum score to list X 9 and use it to divide s l into two smaller substrings. Then we apply the same process to the substring s r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "Execute Step4 and Step5 recursively until anymore substrings cannot be divided, then the original sentence s is permuted as list X, by which GHWSS of s can be constructed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
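{
"text": "The following Python sketch summarizes Steps 3-5 (our own simplified rendering: it omits the special case in footnote 9 and assumes the usual T-score with a square root of C(c, w) in the denominator).\n\nimport math\n\ndef t_score(c, w, count, cooc, V):\n    # T-score of word w with context c; V is the total number of words in the corpus.\n    observed = cooc.get((c, w), 0)\n    if observed == 0:\n        return float('-inf')\n    expected = count.get(c, 0) * count.get(w, 0) / V\n    return (observed - expected) / math.sqrt(observed)\n\ndef tscore_permutation(sentence, count, cooc, V, context='<s>'):\n    # Pick the word most associated with the current context (first occurrence on ties),\n    # then recurse on the left and right substrings with that word as the new context.\n    if not sentence:\n        return []\n    i = max(range(len(sentence)),\n            key=lambda k: (t_score(context, sentence[k], count, cooc, V), -k))\n    head = sentence[i]\n    return ([head]\n            + tscore_permutation(sentence[:i], count, cooc, V, head)\n            + tscore_permutation(sentence[i + 1:], count, cooc, V, head))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},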
{
"text": "British National Corpus (BNC) 10 is a 100 million word collection of samples of written and spoken English from a wide range of sources. We use all the 6,052,202 sentences (100 million words) for the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "English Gigaword Corpus 11 consists of over 1.7 billion words of English newswire from 4 distinct international sources. We choose the wpb eng part (162,099 sentences, 20 million words) for the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "As preprocessing of the training data and the test data, we use the tokenizer of NLTK (Natural Language Toolkit) 12 to split raw English sentences into words. We also converted all words to lowercase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "To ensure the openness of our research, the source code used in the following experiments is available on the internet. 13 As intrinsic evaluation of language modeling, perplexity (Manning and Sch\u00fctze, 1999) is the most common metric used for measuring the usefulness of a language model. However, since we unsupervisedly 'parse' the test sentence s into a GHWSS structure before we estimate its probability, its conditional entropy is actually H(s|T (s)), where T (s) represents the GHWSS assigned to the test sentence s. Consequently, our method has much lower perplexity. It's not appropriate to directly compare the perplexity of GHWSS-based models to that of ngram models.",
"cite_spans": [
{
"start": 120,
"end": 122,
"text": "13",
"ref_id": null
},
{
"start": 180,
"end": 207,
"text": "(Manning and Sch\u00fctze, 1999)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "Also, perplexity is not necessarily a reliable way of determining the usefulness of a language model since a language models with low perplexity may not work well in a real world application. Thus, for intrinsic evaluation, we evaluate models only based on how much they can actually relieve the data sparseness problem (reduce the unseen sequences).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "In (Wu and Matsumoto, 2014) , coverage score are used to perform this kind of evaluation. The word sequences modeled from training data are defined as TR, while that of test data as TE, then the coverage score is calculated by Equation (2). Obviously, the higher coverage score a language model can achieve, the more it can relieve the data sparseness problem (reduce the unseen sequences). ",
"cite_spans": [
{
"start": 3,
"end": 27,
"text": "(Wu and Matsumoto, 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score coverage = |T R T E| |T E|",
"eq_num": "(2)"
}
],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "If all possible word combinations are enumerated as word sequences, then considerable coverage score can be achieved. However, the processing efficiency of a model become extremely low. Thus, usage score (Equation 3)is also necessary to estimate how much redundancy is contained in a model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score usage = |T R T E| |T R|",
"eq_num": "(3)"
}
],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "A balanced measure between coverage and usage is calculated by Equation (4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Association Based Method",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= 2\u00d7coverage\u00d7usage coverage + usage",
"eq_num": "(4)"
}
],
"section": "F -Score",
"sec_num": null
},
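{
"text": "A tiny Python sketch of these three metrics, treating TR and TE as sets of unique word sequences (the variable names are ours):\n\ndef coverage_usage_fscore(TR, TE):\n    # Equations (2)-(4): shared sequences relative to the test and training inventories.\n    shared = len(TR & TE)\n    coverage = shared / len(TE)\n    usage = shared / len(TR)\n    f_score = 2 * coverage * usage / (coverage + usage) if coverage + usage > 0 else 0.0\n    return coverage, usage, f_score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F -Score",
"sec_num": null
},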
{
"text": "In this paper, we use the same metric to compare word sequences modeled under GHWSS framework with normal n-gram sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F -Score",
"sec_num": null
},
{
"text": "The result is shown in Table 1 14 . According to the results, for total word sequences, which actually affect the final performance of language models, GHWSS-based methods have obvious advantage over the normal bi-gram model. As for trigrams, the GHWSS-based methods can even improve around 25%.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "F -Score",
"sec_num": null
},
{
"text": "For the purpose of examining how our models work in the real world application, we also performed extrinsic experiments to evaluate our method. In this paper, we use the reranking of n-best translation candidates to examining how language models work in a statistical machine translation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "6"
},
{
"text": "We use the French-English part of TED talk parallel corpus for the experiment dataset. The training data contains 139,761 sentence pairs, while the test data contains 1,617 sentence pairs. For training language models, we set English as the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "6"
},
{
"text": "As for statistical machine translation toolkit, we use Moses system 15 to train the translation model and output 50-best translation candidates for each French sentence of the test data. Then we use 139,761 English sentences to train language models. With these models, 50-best translation candidates are reranked. According to these reranking results, the performance of machine translation system is evaluated, which also means, the language models are evaluated indirectly. In this paper, we use the following measures for evaluating reranking results 16 .",
"cite_spans": [
{
"start": 555,
"end": 557,
"text": "16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "6"
},
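{
"text": "As a sketch of the reranking step (our simplification; the paper does not spell out how the language model score is combined with the other Moses features), each 50-best list is rescored with the language model and the best-scoring candidate is kept.\n\ndef rerank(nbest_lists, lm_logprob):\n    # nbest_lists: one list of candidate translations (token lists) per source sentence.\n    # lm_logprob: a function returning the language model log-probability of a candidate.\n    return [max(candidates, key=lm_logprob) for candidates in nbest_lists]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "6"
},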
{
"text": "BLEU (Papineni et al., 2002) : BLEU score measures how many words overlap in a given candidate translation when compared to a reference translation, which provides some insight into how good the fluency of the output from an engine will be.",
"cite_spans": [
{
"start": 5,
"end": 28,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "6"
},
{
"text": "METEOR (Banerjee and Lavie, 2005) : ME-TEOR score computes a one-to-one alignment between matching words in a candidate translation and a reference.",
"cite_spans": [
{
"start": 7,
"end": 33,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "6"
},
{
"text": "TER (Snover et al., 2006) : TER score measures the number of edits required to change a system output into one of the references, which gives an indication as to how much post-editing will be required 15 http://www.statmt.org/moses/ 16 We use open source tool multeval (https://github.com/jhclark/multeval) to perform the evaluation. on the translated output of an engine. We use GHWSS word rearranging strategies to perform experiments and compared them to the normal n-gram strategy. For estimating the probabilities of translation candidates, we use the modified Kneser-Ney smoothing (MKN) as the smoothing method of all strategies. As shown in Table 2 , GHWSS based strategies outperform that of n-gram on each score.",
"cite_spans": [
{
"start": 4,
"end": 25,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF19"
},
{
"start": 233,
"end": 235,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 648,
"end": 655,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "6"
},
{
"text": "In this paper, we proposed a generalized hierarchical word sequence framework for language modeling. Under this framework, we presented two different unsupervised strategies for rearranging word sequences, where the conventional n-gram strategy as one special case of this structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For evaluation, we compared our rearranged word sequences to conventional n-gram word sequences and performed intrinsic and extrinsic experiments. The intrinsic experiment proved that our methods can greatly relieve the data sparseness problem, while the extrinsic experiments proved that SMT tasks can benefit from our strategies. Both verified that language modeling can achieve better performance by using our word sequences rearranging strategies, which also proves that our strategies can be used as better alternatives for n-gram language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Further, instead of conventional n-gram word sequences, our rearranged word sequences can also be used as the features of various kinds of machine learning approaches, which is an interesting future study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The k-skip-n-grams for a sentence w1, ...wm is defined as the set {wi 1 , wi 2 , ...wi n |\u03a3 n j=1 ij \u2212 ij\u22121 < k}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There is nol when i = 1, while nor when i = n.r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "C(vj) represents the frequency of vj.PACLIC 30 Proceedings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Intrinsic EvaluationWe use two different corpus: British National Corpus and English Gigaword Corpus.6 Since s appears only once in each sentence, we set C( s ) as the size of corpus. 7 V stands for the total number of words in corpus.8 If\u0175 appears multiple times in s, then select the first one. 9 If the context word c also appears in s l , then we regard it as the word with the maximum score and add it to X directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.natcorp.ox.ac.uk 11 https://catalog.ldc.upenn.edu/LDC2011T07 12 http://www.nltk.org 13 https://github.com/aisophie/HWS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\"Unique\" means counting each word sequence only once in spite of the amount of times it really occurs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quantifying the Likelihood of Unseen Events: A further look at the data Sparsity problem",
"authors": [
{
"first": "B",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Allison, D. Guthrie, L. Guthrie, W. Liu, and Y Wilks. 2005. Quantifying the Likelihood of Unseen Events: A further look at the data Sparsity problem. Awaiting publication.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments",
"authors": [
{
"first": "S",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Banerjee and A. Lavie. 2005. Meteor: An auto- matic metric for mt evaluation with improved correla- tion with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation mea- sures for machine translation and/or summarization, pages 65-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Predicting sentences using n-gram language models",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Haider",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Scheffer",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bickel, P. Haider, and T. Scheffer. 2005. Predicting sentences using n-gram language models. In Proceed- ings of the conference on Human Language Technol- ogy and Empirical Methods in Natural Language Pro- cessing, pages 193-200.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Factored language models and generalized parallel backoff",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Bilmes",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "2",
"issue": "",
"pages": "4--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. A. Bilmes and K. Kirchhoff. 2003. Factored language models and generalized parallel backoff. In Proceed- ings of the 2003 Conference of the North American Chapter of the Association for Computational Linguis- tics on Human Language Technology, volume 2, pages 4-6.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational linguistics",
"volume": "16",
"issue": "2",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P.F Brown, J Cocke, S.A Pietra, V.J Pietra, F Jelinek, J.D Lafferty, R.L Mercer, and P.S Roossin. 1990. A statis- tical approach to machine translation. Computational linguistics, 16(2):79-85.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "La",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, P. V. Desouza, R. L. Mercer, V. J. D. Pietra, and J. C. La. 1992. Class-based n-gram models of nat- ural language. Computational linguistics, 18(4):467- 479.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A structured language model",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chelba",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ACL-EACL",
"volume": "",
"issue": "",
"pages": "498--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Chelba. 1997. A structured language model. In Pro- ceedings of ACL-EACL, pages 498-500.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "S",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Speech and Language",
"volume": "13",
"issue": "4",
"pages": "359--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. F. Chen and J. Goodman. 1999. An empirical study of smoothing techniques for language modeling. Com- puter Speech and Language, 13(4):359-393.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Utilizing dependency language models for graph-based dependency parsing models",
"authors": [
{
"first": "W",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "213--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Chen, M. Zhang, and H Li. 2012. Utilizing depen- dency language models for graph-based dependency parsing models. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics, volume 1, pages 213-222.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A closer look at skip-gram modeling",
"authors": [
{
"first": "D",
"middle": [],
"last": "Guthrie",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Allison",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Guthrie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th international Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Guthrie, B. Allison, W. Liu, and L. Guthrie. 2006. A closer look at skip-gram modeling. In Proceedings of the 5th international Conference on Language Re- sources and Evaluation, pages 1-4.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The sphinx-ii speech recognition system: an overview",
"authors": [
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Alleva",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Hwang",
"suffix": ""
},
{
"first": "K",
"middle": [
"F"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "7",
"issue": "",
"pages": "137--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Huang, F. Alleva, H.W. Hon, M.Y. Hwang, and K. F. Lee. 1993. The sphinx-ii speech recognition system: an overview. 7(2):137-148.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer",
"authors": [
{
"first": "S",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 1987,
"venue": "Acoustics, Speech and Signal Processing",
"volume": "35",
"issue": "3",
"pages": "400--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. Acoustics, Speech and Signal Processing, 35(3):400-401.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improved backing-off for m-gram language modeling",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1995,
"venue": "Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. Acoustics, Speech, and Signal Processing, 1:181-184.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Foundations of statistical natural language processing",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. D. Manning and H. Sch\u00fctze. 1999. Foundations of statistical natural language processing. MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Context based spelling correction. Information Processing and Management",
"authors": [
{
"first": "E",
"middle": [],
"last": "Mays",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Damerau",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "27",
"issue": "",
"pages": "517--522",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Mays, F. J. Damerau, and R. L. Mercer. 1990. Context based spelling correction. Information Processing and Management, 27(5):517-522.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [
"J"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meet- ing on association for computational linguistics, pages 311-318.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Fundamentals of Speech Recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "Rabiner",
"suffix": ""
},
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Rabiner and B.H. Juang. 1993. Fundamentals of Speech Recognition. Prentice Hall.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "The Bell System Technical Journal",
"volume": "27",
"issue": "",
"pages": "379--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. E. Shannon. 1948. A mathematical theory of commu- nication. The Bell System Technical Journal, 27:379- 423.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A new string-to-dependency machine translation algorithm with a target dependency language model",
"authors": [
{
"first": "L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "577--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Shen, J. Xu, and R.M. Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In ACL, pages 577-585.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of associ- ation for machine translation in the Americas, pages 223-231.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A hierarchical word sequence language model",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of The 28th Pacific Asia Conference on Language",
"volume": "",
"issue": "",
"pages": "489--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Wu and Y. Matsumoto. 2014. A hierarchical word sequence language model. In Proceedings of The 28th Pacific Asia Conference on Language, pages 489-494.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An improved hierarchical word sequence language model using directional information",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of The 29th Pacific Asia Conference on Language",
"volume": "",
"issue": "",
"pages": "453--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Wu and Y. Matsumoto. 2015. An improved hierarchi- cal word sequence language model using directional information. In Proceedings of The 29th Pacific Asia Conference on Language, pages 453-458.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "An improved hierarchical word sequence language model using word association",
"authors": [
{
"first": "X",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hiroyuki",
"suffix": ""
}
],
"year": 2015,
"venue": "Statistical Language and Speech Processing",
"volume": "",
"issue": "",
"pages": "275--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Wu, Y. Matsumoto, K. Duh, and S. Hiroyuki. 2015. An improved hierarchical word sequence language model using word association. In Statistical Language and Speech Processing, pages 275-287.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A comparison of structures between HWS and n-gram cial n-grams can be trained. Such kind of n-grams are defined as HWS-n-grams.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "An Example of Generative Hierarchical Word Sequence Structure",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"text": "Performance of Various Word Sequences 46.471 83.121 12.015 76.336 19.093 79.584 frequency-based-bi 46.066 89.730 12.019 86.937 19.064 88.312 tscore-based-bi 45.709 89.949 11.872 87.252 18.848 88.580",
"num": null,
"content": "<table><tr><td>Models</td><td colspan=\"2\">Coverage Unique Total Unique Usage</td><td>F-score Total Unique Total</td></tr><tr><td colspan=\"2\">bi-gram tri-gram 27.164 51.151</td><td colspan=\"2\">5.626 40.191</td><td>9.321 45.013</td></tr><tr><td colspan=\"2\">frequency-based-tri 36.512 72.432</td><td colspan=\"2\">8.546 67.221 13.850 69.729</td></tr><tr><td colspan=\"2\">tscore-based-tri 36.473 72.926</td><td colspan=\"2\">8.501 67.382 13.788 70.045</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "Performance on French-English SMT Task Using Various Word Arranging Assumptions",
"num": null,
"content": "<table><tr><td colspan=\"3\">Models BLEU METEOR TER</td></tr><tr><td>tri-gram</td><td>31.3</td><td>33.5 49.0</td></tr><tr><td>frequency-based-tri</td><td>31.5</td><td>33.6 48.6</td></tr><tr><td>tscore-based-tri</td><td>31.7</td><td>33.6 48.5</td></tr></table>",
"type_str": "table"
}
}
}
}