{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:35.440042Z" }, "title": "Interpreting Neural CWI Classifiers' Weights as Vocabulary Size", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "", "affiliation": { "laboratory": "", "institution": "Shizuoka Institute of Science and Technology", "location": { "postCode": "2200-2", "settlement": "Toyosawa, Fukuroi", "region": "Shizuoka", "country": "Japan" } }, "email": "ehara.yo@sist.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Complex Word Identification (CWI) is a task for the identification of words that are challenging for second-language learners to read. Even though the use of neural classifiers is now common in CWI, the interpretation of their parameters remains difficult. This paper analyzes neural CWI classifiers and shows that some of their parameters can be interpreted as vocabulary size. We present a novel formalization of vocabulary size measurement methods that are practiced in the applied linguistics field as a kind of neural classifier. We also contribute to building a novel dataset for validating vocabulary testing and readability via crowdsourcing.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Complex Word Identification (CWI) is a task for the identification of words that are challenging for second-language learners to read. Even though the use of neural classifiers is now common in CWI, the interpretation of their parameters remains difficult. This paper analyzes neural CWI classifiers and shows that some of their parameters can be interpreted as vocabulary size. We present a novel formalization of vocabulary size measurement methods that are practiced in the applied linguistics field as a kind of neural classifier. 
We also contribute to building a novel dataset for validating vocabulary testing and readability via crowdsourcing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Readability for second-language learners has attracted great interest in the field of natural language processing (NLP) (Beinborn et al., 2014; Pavlick and Callison-Burch, 2016) . As NLP mainly addresses the automatic editing of texts, readability assessment studies in this field have focused on identifying complex parts of a text, assuming that the identified words are eventually simplified so that learners can read them. To this end, complex word identification (CWI) (Paetzold and Specia, 2016; Yimam et al., 2018) tasks have been studied extensively. Recently, a personalized CWI task has been proposed, where the goal is to predict whether a word is complex for each individual learner (Paetzold and Specia, 2017; Lee and Yeung, 2018) . Neural models are also employed in these studies and have achieved excellent performance.", "cite_spans": [ { "start": 134, "end": 157, "text": "(Beinborn et al., 2014;", "ref_id": "BIBREF2" }, { "start": 158, "end": 191, "text": "Pavlick and Callison-Burch, 2016)", "ref_id": "BIBREF20" }, { "start": 476, "end": 503, "text": "(Paetzold and Specia, 2016;", "ref_id": "BIBREF18" }, { "start": 504, "end": 523, "text": "Yimam et al., 2018)", "ref_id": null }, { "start": 723, "end": 750, "text": "(Paetzold and Specia, 2017;", "ref_id": "BIBREF19" }, { "start": 751, "end": 771, "text": "Lee and Yeung, 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The weights, or parameters, of a high-performance personalized neural CWI model obviously include information on how to measure word difficulty and learner ability from a variety of features. 
If such information could be extracted from the model in a form that is easy to interpret, it would be useful for a wide range of language-learning applications (Hoshino, 2009; Ehara et al., 2012, 2013, 2014; Sakaguchi et al., 2013; Ehara et al., 2016; Ehara, 2019) . To this end, this paper proposes a method for interpreting the weights of personalized neural CWI models. Let us suppose that we have a corpus and that its word frequency ranking reflects word difficulty. Using our method, a word's difficulty can be interpreted as the frequency rank of the word in the corpus, and a learner's ability can be interpreted as the learner's vocabulary size with respect to the corpus, i.e., the number of words known to the learner when counted in descending order of frequency in the corpus.", "cite_spans": [ { "start": 307, "end": 322, "text": "(Hoshino, 2009;", "ref_id": "BIBREF13" }, { "start": 323, "end": 341, "text": "Ehara et al., 2012", "ref_id": "BIBREF9" }, { "start": 342, "end": 362, "text": "Ehara et al., , 2013", "ref_id": "BIBREF11" }, { "start": 363, "end": 383, "text": "Ehara et al., , 2014", "ref_id": "BIBREF8" }, { "start": 384, "end": 407, "text": "Sakaguchi et al., 2013;", "ref_id": "BIBREF21" }, { "start": 408, "end": 426, "text": "Ehara et al., 2016", "ref_id": "BIBREF7" }, { "start": 427, "end": 439, "text": "Ehara, 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our key idea is to compare CWI studies with vocabulary testing studies in applied linguistics (Nation, 2006; Laufer and Ravenhorst-Kalovski, 2010) . A second language's vocabulary is extensive, and learning it occupies much of the time spent studying the language. Vocabulary testing studies focus on measuring each learner's second-language vocabulary quickly. One of the major findings of these studies is that a learner needs to \"know\" at least 95% to 98% of the tokens in a target text in order to read it. 
Here, to measure if a learner \"knows\" a word, vocabulary testing studies use the learner's vocabulary size and word frequency ranking of a balanced corpus. Hence, by formalizing the measurement method used in vocabulary testing studies as a neural personalized CWI, we can interpret neural personalized CWI models' weights as vocabulary size and word frequency ranking.", "cite_spans": [ { "start": 94, "end": 108, "text": "(Nation, 2006;", "ref_id": null }, { "start": 109, "end": 146, "text": "Laufer and Ravenhorst-Kalovski, 2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. To predict whether a learner knows a word through the use of a vocabulary test result in hand, vocabulary size-based methods were previously used for vocabulary testing. We show that this method can represent a special case of typical neural CWI classifiers that take a specific set of features as input. Furthermore, we theoretically propose novel methods that enable the weights of certain neural classifiers to become explainable on the basis of the vocabulary size of a learner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. To validate the proposed models, we want a dataset in which each learner/test-taker takes both vocabulary and reading comprehension tests. To this end, we build a novel dataset and make it publicly available. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "J learners {l 1 , l 2 , . . . , l j , . . . , l J } and I words {v 1 , v 2 , . . . , v i , . . . 
, v I }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "j is the index of the learners and i is the index of the words. When there is no ambiguity, we denote word v i as word i and learner l j as learner j, for the sake of simplicity. We write the rank of word v i as r i and the vocabulary size of learner l j as s j . Then, to determine whether learner l j knows word v i , the following decision function f is used:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "f (l j , v i ) = s j \u2212 r i (1) Interpreting Eq. 1 is simple: if f (l j , v i ) \u2265 0, then learner l j knows word v i ; if f (l j , v i ) < 0, then learner l j does not know word v i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The performance of Eq. 1 depends solely on how we determine the vocabulary size s j of learner l j and the easiness rank r i of word v i . As several methods have previously been proposed to estimate these quantities, we describe them in the following subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Easiness ranks of words are important in vocabulary size-based testing. To this end, word frequency rankings from a balanced corpus, especially the British National Corpus (BNC Consortium, 2007) , are used: the more frequent words in the corpus are ranked higher and considered to be easier. Some previous studies in the field manually adjust the BNC word frequency rankings to make them compatible with language teachers' intuitions. The BNC comprises British English. 
Recent studies also take into account word frequencies obtained from the Corpus of Contemporary American English (COCA) (Davies, 2009) , simply adding the word frequencies of both corpora to obtain a word frequency ranking.", "cite_spans": [ { "start": 172, "end": 194, "text": "(BNC Consortium, 2007)", "ref_id": "BIBREF3" }, { "start": 583, "end": 597, "text": "(Davies, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Measuring rank of word v i", "sec_num": "2.1.1" }, { "text": "An intuitive and simple method for measuring the vocabulary size of learner l j is as follows. First, we randomly sample some words from a large vocabulary of the target language. Second, we test whether learner l j knows each of the sampled words and compute the ratio of words known to the learner. Third, we estimate the learner's vocabulary size as this ratio \u00d7 the size of the vocabulary from which the words were sampled. This is how the Vocabulary Size Test (Beglar and Nation, 2007) works. Using the frequency ranking of 20,000 words from the BNC corpus, the words are first split into 20 levels, with each level consisting of 1,000 words. It is assumed that the 1,000 words grouped in the same level have similar difficulty. Then, from the 1,000 words at each level, 5 words are carefully sampled, and a vocabulary test is built that consists of 100 words in total. Finally, the number of words that learner l j correctly answered \u00d7 200 is the estimated vocabulary size of learner l j . 
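The sampling-and-scaling procedure above can be sketched in a few lines of code (a minimal illustration; the function names and the knows_word oracle are our own assumptions, not part of the Vocabulary Size Test itself):

```python
import random

def estimate_vocabulary_size(knows_word, ranked_words, n_levels=20, per_level=5):
    # Split the frequency-ranked word list into n_levels bands of equal size,
    # sample per_level words from each band, and scale the number of known
    # words so that each test item stands for 200 words (20,000 / 100).
    band_size = len(ranked_words) // n_levels
    known = 0
    for level in range(n_levels):
        band = ranked_words[level * band_size:(level + 1) * band_size]
        known += sum(1 for w in random.sample(band, per_level) if knows_word(w))
    words_per_item = len(ranked_words) // (n_levels * per_level)
    return known * words_per_item
```

For instance, a hypothetical learner who knows exactly the 7,000 most frequent of 20,000 ranked words answers 35 of the 100 items correctly and is assigned a vocabulary size of 35 * 200 = 7,000.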
This simple method was later validated by a study from another independent group (Beglar, 2010) and is widely accepted.", "cite_spans": [ { "start": 454, "end": 479, "text": "(Beglar and Nation, 2007)", "ref_id": "BIBREF1" }, { "start": 1075, "end": 1089, "text": "(Beglar, 2010)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Measuring the vocabulary size of learner l j", "sec_num": "2.1.2" }, { "text": "Examples of the Vocabulary Size Test are publicly available (Nation, 2007) . Each question asks learners taking the test to choose the correct answer by selecting one of the four offered options that has the same meaning as one of the underlined words in the question. It should be noted that, in the Vocabulary Size Test, each word is placed in a sentence to disambiguate the usage of each word and each option can directly be replaced with the underlined part without the need to grammatically rewrite the sentence, e.g., for singular/plural differences. Although a typical criticism of vocabulary tests relates to the fact that they do not take contexts into account, each question in the Vocabulary Size Test is specifically designed to account for such criticism by asking the meaning of a word within a sentence. ", "cite_spans": [ { "start": 60, "end": 74, "text": "(Nation, 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Measuring the vocabulary size of learner l j", "sec_num": "2.1.2" }, { "text": "The following notations are used. We have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Formulation", "sec_num": "3" }, { "text": "J learners {l 1 , l 2 , . . . , l j , . . . , l J } and I words {v 1 , v 2 , . . . , v i , . . . , v I }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Formulation", "sec_num": "3" }, { "text": "j is the index of the learners and i is the index of the words. 
When there is no ambiguity, we denote word v i as word i and learner l j as learner j, for the sake of simplicity. Let K i be the number of occurrences of word v i . While we do not use it in our experiments, for generality, we explicitly write the index of each occurrence, i.e., k. Let u i,k be the k-th occurrence of word v i in the text. Let b j be the ability of learner j and let d i,k be the difficulty of the k-th occurrence of word v i . A dichotomous decision in a neural network-based formulation is typically modeled probabilistically. Let y j,i,k be a binary random variable that takes 1 if learner l j knows the k-th occurrence of word v i , and 0 otherwise. It is typical to use a function that maps a real number to the [0, 1] range so that the real number can be interpreted as a probability. To this end, the logistic sigmoid function \u03c3(x) = 1/(1 + exp(\u2212x)) is typically used. Then, the probability that learner l j knows the k-th occurrence of word v i , namely, u i,k , can be modeled as in Eq. 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Formulation", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y i,k,j = 1|u i,k , l j ) = \u03c3(a(b j \u2212 d i,k ))", "eq_num": "(2)" } ], "section": "Proposed Formulation", "sec_num": "3" }, { "text": "Qualitative characteristics of Eq. 2 are explained as follows. Let \u03b8 = a(b j \u2212 d i,k ). The logistic sigmoid function maps an arbitrary real number to the [0, 1] range and makes it possible to interpret the real number as a probability. Here, \u03b8 is mapped to this range. As \u03b8 increases, the probability becomes larger. We can see that a > 0 is the parameter that determines the steepness of the slope. A large a results in a steep slope. 
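These characteristics can be checked numerically with a short sketch (the function names are ours, not from the original formulation):

```python
import math

def sigmoid(x):
    # logistic sigmoid: maps any real number into the (0, 1) range
    return 1.0 / (1.0 + math.exp(-x))

def p_knows(a, b_j, d_ik):
    # Eq. 2: probability that learner j knows the k-th occurrence of word i
    return sigmoid(a * (b_j - d_ik))
```

For b j \u2212 d i,k = 1, the probability rises from about 0.73 at a = 1 to more than 0.9999 at a = 10, illustrating how a controls the steepness; when ability equals difficulty, the probability is exactly 0.5 for any a.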
When a is large enough, for example 4.0, the function is numerically very close to the step function that returns 0 if \u03b8 < 0 and 1 if \u03b8 \u2265 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Formulation", "sec_num": "3" }, { "text": "Probability in a dichotomous classification is most ambiguous when it takes 0.5. By focusing on the point where the probability takes 0.5, we can see that the sign of b j \u2212 d i,k determines whether or not the probability is larger than 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Formulation", "sec_num": "3" }, { "text": "These characteristics enable Eq. 2 to express the decision function employed in the previous vocabulary size-based approach, Eq. 1, as a special case. Let us consider the case when a is large and the curve is very steep, say a = 10, for example. Then, by setting b j = s j and d i,k = r i for all k for word v i , the decision about whether learner j knows the k-th occurrence of word v i in Eq. 1 is virtually identical to that of Eq. 2. In this manner, the previous vocabulary size-based decision functions in applied linguistics for whether learner l j knows word v i can be converted to a neural network-based classifier and vice versa. We can see that there exists freedom in the parameters. In the above example, we can achieve the same setting by setting b j = 0.1s j , d i,k = 0.1r i and a = 100. 
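This equivalence, including the rescaling freedom, can be verified directly (a sketch under our notation; a numerically stable sigmoid is used to avoid overflow for large arguments):

```python
import math

def sigmoid(x):
    # numerically stable logistic sigmoid
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def knows_eq1(s_j, r_i):
    # Eq. 1: the learner knows the word iff vocabulary size minus rank is >= 0
    return s_j - r_i >= 0

def knows_eq2(a, b_j, d_i):
    # Eq. 2, thresholded at probability 0.5
    return sigmoid(a * (b_j - d_i)) >= 0.5

for s_j in (3000, 8000, 15000):
    for r_i in (1000, 8000, 20000):
        same = knows_eq1(s_j, r_i)
        # b_j = s_j, d_i = r_i, a = 10 reproduces the decisions of Eq. 1 ...
        assert knows_eq2(10.0, s_j, r_i) == same
        # ... and so does the rescaled setting b_j = 0.1 s_j, d_i = 0.1 r_i, a = 100
        assert knows_eq2(100.0, 0.1 * s_j, 0.1 * r_i) == same
```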
In this way, the same vocabulary size classification can be achieved by different parameter values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vocabulary size-based classification as neural classification", "sec_num": "3.1" }, { "text": "This freedom in the parameters is the key to the conversion: by setting an appropriate a, we can interpret neural classifier parameters as each learner's vocabulary size and the rank of each word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vocabulary size-based classification as neural classification", "sec_num": "3.1" }, { "text": "While b j and d i,k are parameters, we rewrite them using the one-hot vectors that are widely used to describe neural network-based models. Let us introduce two types of feature functions: \u03c6 l and \u03c6 v . The former returns the feature vector of learner l j , and the latter returns the feature vector of the k-th occurrence of word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "v i , u i,k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "Then, the ability and difficulty parameters of Eq. 2 can be written as the inner product of a weight vector and a feature vector. Let us introduce w l as the weight vector for \u03c6 l . Let h be a function that returns the one-hot representation of its argument. We write h l (l j ) to denote a function that returns a J-dimensional one-hot vector, where only the j-th element is 1 while the other elements are 0. Then, we can rewrite b j as the inner product of the weight vector and the one-hot vector as b j = w l h l (l j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "In the same way, d i,k can be rewritten as the inner product of its weight vector and feature vector. 
Recalling that K i denotes the number of occurrences of word v i , we consider a very long", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "\u2211 I i=1 K i -dimensional one-hot vector h v (u i,k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": ", where only the one element that corresponds to the k-th occurrence of word v i is 1 and all other elements are 0. Then, by introducing a weight vector w v that has the same dimension as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "h v (u i,k ), we can rewrite d i,k as d i,k = w v h v (u i,k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": ". Using these expressions, Eq. 2 can be illustrated as a typical neural network, as in Fig. 2 .", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 108, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "Overall, the equation using the one-hot vector representation can be described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y i,k,j = 1|u i,k , l j ) = \u03c3(a(w l h l (l j ) \u2212 w v h v (u i,k )))", "eq_num": "(3)" } ], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "3.3 Weights as learner vocabulary sizes and word frequency ranks", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "Eq. 3 provides us with a hint for converting neural classifier weights into vocabulary sizes and word frequency rankings. 
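The one-hot rewriting above is mechanical: the inner product of a weight vector with a one-hot vector simply selects one weight, so each learner ability and each occurrence difficulty is stored as one entry of the weight vectors. A minimal sketch (our own variable names):

```python
def one_hot(index, dim):
    # dim-dimensional one-hot vector with a 1 at the given index
    v = [0.0] * dim
    v[index] = 1.0
    return v

def dot(w, x):
    # inner product of a weight vector and a feature vector
    return sum(wi * xi for wi, xi in zip(w, x))

# With J = 3 learners, b_j = dot(w_l, h_l(l_j)) just picks out w_l[j].
w_l = [2.0, 5.0, 3.0]
assert dot(w_l, one_hot(1, 3)) == w_l[1]
```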
To this end, we can do the following. First, we use Eq. 3 to estimate the parameters a, w l , and w v . Typically, for a binary classification setting using the logistic sigmoid function, the cross-entropy loss is chosen as the loss function. We use L(a, w l , w v ) to denote the sum of the cross-entropy loss over all learners and all occurrences of all words. From a, w l and w v , we can estimate the frequency rank of word v i as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "aw v h v (u i,k ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "Hence, by comparing the estimate with the observed ranking value r i of word v i , we can also tune all parameters. We can simply employ R(a,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "w v ) = \u2211 I i=1 \u2211 K i k=1 ||aw v h v (u i,k ) \u2212 r i || 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "as a loss function that measures how far apart the estimated and observed ranks are. Of course, we can compare aw l h l (l j ) and s j , the observed vocabulary size of learner l j . However, since the observed vocabulary size of each learner is usually much more inaccurate than the ranking of a word, we do not use this term. As ranks usually take large values but are never smaller than 1, we can use the logarithm of the rank of word v i for r i instead of its raw value.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewriting parameters", "sec_num": "3.2" }, { "text": "Practically, it is important to note that the one-hot vector h v (u i,k ) in the L and R functions can be replaced with any feature vector of u i,k , i.e., the k-th occurrence of word v i . 
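The two loss terms can be sketched jointly as follows (a simplified sketch under our own assumptions: the occurrence index k is dropped, one difficulty weight is kept per word, and the one-hot inner products are reduced to plain indexing):

```python
import math

def stable_sigmoid(x):
    # numerically stable logistic sigmoid
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def joint_loss(gamma, a1, a2, w_l, w_v, responses, log_ranks):
    # responses: (j, i, y) triples, y = 1 if learner j knows word i
    # log_ranks: observed log frequency rank r_i for each word i
    # cross-entropy term L over all observed responses
    L = 0.0
    for j, i, y in responses:
        p = stable_sigmoid(a1 * (w_l[j] - w_v[i]))
        L -= y * math.log(p) + (1 - y) * math.log(1 - p)
    # rank regression term R: squared distance between a2 * w_v[i] and r_i
    R = sum((a2 * w_v[i] - log_ranks[i]) ** 2 for i in range(len(w_v)))
    return gamma * L + (1 - gamma) * R
```

Setting gamma to 0 or 1 recovers the pure rank regression or pure classification objective, respectively.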
In our experiments, we simply used this replacement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "3.4" }, { "text": "We propose the following minimization problem that simultaneously tunes both sets of parameters. We let \u03b3 \u2208 [0, 1] be the parameter that balances the two loss functions, namely, L and R. Note that, as the optimal value of a differs between the L term and the R term, we model a separately for the two terms, as a 1 and a 2 , respectively. Since Eq. 4 consists of continuous functions, it can easily be optimized as a neural classifier using a typical deep learning framework, such as PyTorch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "3.4" }, { "text": "min a 1 ,a 2 ,w l ,w v \u03b3L(a 1 , w l , w v ) + (1 \u2212 \u03b3)R(a 2 , w v ) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Model", "sec_num": "3.4" }, { "text": "For the input, we prepare the vocabulary test results of the J learners, the vocabulary feature function h, and the vocabulary ranking r i . With these data as input, we can train the model by estimating the w parameters that minimize Eq. 4. The \u03b3 value can be tuned using validation data that are disjoint from both the training and test data. Alternatively, \u03b3 can be tuned by minimizing it jointly with the other parameters in Eq. 4. Finally, in the test phase, using the trained parameters a 1 and w l , we can estimate learner l j 's vocabulary size as a 1 w l h l (l j ). Using the trained parameters a 2 and w v , we can estimate the rank of the first occurrence of a new word v i , which did not appear in the training data, as a 2 w v h v (u i,1 ).
4, we need a real dataset that covers both vocabulary size and reading comprehension tests, assuming that the 98% text coverage hypothesis holds true. To our knowledge, no such dataset is widely available. There are certain existing vocabulary test result datasets, such as (Ehara, 2018) , as well as many reading comprehension test result datasets; however, we could not find a dataset in which each second-language learner subject is asked to provide both vocabulary size and reading comprehension test results.", "cite_spans": [ { "start": 299, "end": 312, "text": "(Ehara, 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset 4.1 Description", "sec_num": "4" }, { "text": "To this end, this paper provides such a dataset. Following (Ehara, 2018) , we used the Lancers crowdsourcing service to collect responses to 55 vocabulary test questions, as well as answers to 1 long and 1 short reading comprehension question, from 100 learners. We paid around 5 USD to each participant. In comparison to the dataset by (Ehara, 2018) , the number of vocabulary test questions was reduced so that subjects would have enough time to solve the reading comprehension test. For the vocabulary test part, we used the Vocabulary Size Test (Beglar and Nation, 2007) . The reading comprehension questions were taken from the sample questions in the Appendix of (Laufer and Ravenhorst-Kalovski, 2010 ). 
The correct options for these questions are on a website that can be reached from the description of (Laufer and Ravenhorst-Kalovski, 2010) .", "cite_spans": [ { "start": 59, "end": 72, "text": "(Ehara, 2018)", "ref_id": "BIBREF5" }, { "start": 321, "end": 334, "text": "(Ehara, 2018)", "ref_id": "BIBREF5" }, { "start": 533, "end": 558, "text": "(Beglar and Nation, 2007)", "ref_id": "BIBREF1" }, { "start": 672, "end": 709, "text": "(Laufer and Ravenhorst-Kalovski, 2010", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset 4.1 Description", "sec_num": "4" }, { "text": "In the same manner as (Ehara, 2018) , all participants were required to have taken the Test of English for International Communication (TOEIC) provided by the Educational Testing Service (ETS) and to self-report their scores. This requirement filters out learners who have never studied English seriously but participate for economic gain.", "cite_spans": [ { "start": 22, "end": 35, "text": "(Ehara, 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset 4.1 Description", "sec_num": "4" }, { "text": "In the dataset, each line describes all the responses from a learner. The first columns, which contain the term TOEIC in their headings, provide TOEIC scores and dates. Then, the 55 vocabulary testing questions follow. The columns that start with \"l\" denote the responses on the long reading comprehension test and those with \"s\" denote the responses on the short one. For more detailed information on the dataset, refer to http://yoehara.com/vocabulary-prediction/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset 4.1 Description", "sec_num": "4" }, { "text": "Finally, we report preliminary experiments using our dataset. We used 33 words from the dataset, i.e., 3,300 responses. Hereafter, we simply denote the logarithm of frequency ranks in descending order as \"LFR\". 
For r i , we used the LFR of the BNC corpus (BNC Consortium, 2007) . For the features of h v , we used the logarithm of word frequency in the COCA corpus (Davies, 2009) . We obtained parameters by solving the minimization problem in Eq. 4. Then, for 100 words disjoint from the 33 training words, we plotted the estimated LFR values against the gold LFR values in Fig. 3 . We can easily see that they correlate well. Spearman's correlation coefficient for Fig. 3 was 0.70, which can be construed as a strong correlation (Taylor, 1990) .", "cite_spans": [ { "start": 259, "end": 281, "text": "(BNC Consortium, 2007)", "ref_id": "BIBREF3" }, { "start": 364, "end": 378, "text": "(Davies, 2009)", "ref_id": "BIBREF4" }, { "start": 746, "end": 760, "text": "(Taylor, 1990)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 577, "end": 583, "text": "Fig. 3", "ref_id": "FIGREF2" }, { "start": 682, "end": 688, "text": "Fig. 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4.2" }, { "text": "In this paper, we theoretically showed that previous vocabulary size-based classifiers can be seen as a special case of a neural classifier. We also built the dataset necessary for this evaluation and made it publicly available as an attached dataset. Future work includes more detailed experiments on language learners' second-language vocabularies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [ { "text": "This work was supported by JST ACT-I Grant Number JPMJPR18U8 and JSPS KAKENHI Grant Number JP18K18118. We used the AI Bridging Cloud Infrastructure (ABCI) provided by the National Institute of Advanced Industrial Science and Technology (AIST), Japan. 
We thank anonymous reviewers for their insightful and constructive comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Rasch-based validation of the Vocabulary Size Test. Language Testing", "authors": [ { "first": "David", "middle": [], "last": "Beglar", "suffix": "" } ], "year": 2010, "venue": "", "volume": "27", "issue": "", "pages": "101--118", "other_ids": { "DOI": [ "10.1177/0265532209340194" ] }, "num": null, "urls": [], "raw_text": "David Beglar. 2010. A Rasch-based validation of the Vocabulary Size Test. Language Testing, 27(1):101- 118.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A vocabulary size test. The Language Teacher", "authors": [ { "first": "David", "middle": [], "last": "Beglar", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Nation", "suffix": "" } ], "year": 2007, "venue": "", "volume": "31", "issue": "", "pages": "9--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Beglar and Paul Nation. 2007. A vocabulary size test. The Language Teacher, 31(7):9-13.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Predicting the Difficulty of Language Proficiency Tests", "authors": [ { "first": "Lisa", "middle": [], "last": "Beinborn", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "517--530", "other_ids": { "DOI": [ "10.1162/tacl_a_00200" ] }, "num": null, "urls": [], "raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2014. Predicting the Difficulty of Language Pro- ficiency Tests. 
Transactions of the Association for Computational Linguistics, 2:517-530.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The British National Corpus", "authors": [ { "first": "Bnc", "middle": [], "last": "The", "suffix": "" }, { "first": "", "middle": [], "last": "Consortium", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The BNC Consortium. 2007. The British National Cor- pus, version 3 (BNC XML Edition).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The 385+ million word corpus of contemporary american english (1990-2008+): Design, architecture, and linguistic insights", "authors": [ { "first": "Mark", "middle": [], "last": "Davies", "suffix": "" } ], "year": 2009, "venue": "International journal of corpus linguistics", "volume": "14", "issue": "2", "pages": "159--190", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Davies. 2009. The 385+ million word corpus of contemporary american english (1990-2008+): De- sign, architecture, and linguistic insights. Interna- tional journal of corpus linguistics, 14(2):159-190.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Building an English Vocabulary Knowledge Dataset of Japanese English-as-a-Second-Language Learners Using Crowdsourcing", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "" } ], "year": 2018, "venue": "Proc. of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yo Ehara. 2018. Building an English Vocabu- lary Knowledge Dataset of Japanese English-as-a- Second-Language Learners Using Crowdsourcing. In Proc. of LREC.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural rasch model: How word embeddings affect to word difficulty?", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "" } ], "year": 2019, "venue": "Proc. 
of the 16th International Conference of the Pacific Association for Computational Linguistics (PACLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yo Ehara. 2019. Neural Rasch model: How word embeddings affect to word difficulty? In Proc. of the 16th International Conference of the Pacific Association for Computational Linguistics (PACLING).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Assessing Translation Ability through Vocabulary Ability Assessment", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "" }, { "first": "Yukino", "middle": [], "last": "Baba", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2016, "venue": "Proc. of IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yo Ehara, Yukino Baba, Masao Utiyama, and Eiichiro Sumita. 2016. Assessing Translation Ability through Vocabulary Ability Assessment. In Proc. of IJCAI.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Formalizing Word Sampling for Vocabulary Prediction as Graph-based Active Learning", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Hidekazu", "middle": [], "last": "Oiwa", "suffix": "" }, { "first": "Issei", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2014, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1374--1384", "other_ids": { "DOI": [ "10.3115/v1/D14-1143" ] }, "num": null, "urls": [], "raw_text": "Yo Ehara, Yusuke Miyao, Hidekazu Oiwa, Issei Sato, and Hiroshi Nakagawa. 2014. Formalizing Word Sampling for Vocabulary Prediction as Graph-based Active Learning. In Proc. 
of EMNLP, pages 1374-1384.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Mining Words in the Minds of Second Language Learners: Learner-Specific Word Difficulty", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "" }, { "first": "Issei", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Hidekazu", "middle": [], "last": "Oiwa", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2012, "venue": "The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "799--814", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yo Ehara, Issei Sato, Hidekazu Oiwa, and Hiroshi Nakagawa. 2012. Mining Words in the Minds of Second Language Learners: Learner-Specific Word Difficulty. In Proceedings of COLING 2012, pages 799-814, Mumbai, India. The COLING 2012 Organizing Committee.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Mining words in the minds of second language learners for learner-specific word difficulty", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "" }, { "first": "Issei", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Hidekazu", "middle": [], "last": "Oiwa", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2018, "venue": "Journal of Information Processing", "volume": "26", "issue": "", "pages": "267--275", "other_ids": { "DOI": [ "10.2197/ipsjjip.26.267" ] }, "num": null, "urls": [], "raw_text": "Yo Ehara, Issei Sato, Hidekazu Oiwa, and Hiroshi Nakagawa. 2018. Mining words in the minds of second language learners for learner-specific word difficulty. 
Journal of Information Processing, 26:267-275.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Personalized Reading Support for Second-language Web Documents", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "" }, { "first": "Nobuyuki", "middle": [], "last": "Shimizu", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Ninomiya", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/2438653.2438666" ] }, "num": null, "urls": [], "raw_text": "Yo Ehara, Nobuyuki Shimizu, Takashi Ninomiya, and Hiroshi Nakagawa. 2013. Personalized Reading Support for Second-language Web Documents.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic Question Generation for Language Testing and its Evaluation Criteria", "authors": [ { "first": "Ayako", "middle": [], "last": "Hoshino", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ayako Hoshino. 2009. Automatic Question Generation for Language Testing and its Evaluation Criteria. Ph.D. thesis, Graduate School of Interdisciplinary Information Studies, The University of Tokyo.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Lexical Threshold Revisited: Lexical Text Coverage, Learners' Vocabulary Size and Reading Comprehension", "authors": [ { "first": "Batia", "middle": [], "last": "Laufer", "suffix": "" }, { "first": "Geke", "middle": [ "C." ], "last": "Ravenhorst-Kalovski", "suffix": "" } ], "year": 2010, "venue": "", "volume": "22", "issue": "", "pages": "15--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Batia Laufer and Geke C. Ravenhorst-Kalovski. 2010. Lexical Threshold Revisited: Lexical Text Coverage, Learners' Vocabulary Size and Reading Comprehension. 
Reading in a Foreign Language, 22(1):15-30.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Personalizing lexical simplification", "authors": [ { "first": "John", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Chak Yan", "middle": [], "last": "Yeung", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "224--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lee and Chak Yan Yeung. 2018. Personalizing lexical simplification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 224-232, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "How Large a Vocabulary is Needed For Reading and Listening? Canadian Modern Language Review", "authors": [], "year": 2006, "venue": "", "volume": "63", "issue": "", "pages": "59--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Nation. 2006. How Large a Vocabulary is Needed For Reading and Listening? Canadian Modern Language Review, 63(1):59-82.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Vocabulary size test", "authors": [], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Nation. 2007. Vocabulary size test. 
https://www.wgtn.ac.nz/lals/about/staff/paul-nation#vocab-tests.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Collecting and Exploring Everyday Language for Predicting Psycholinguistic Properties of Words", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1669--1679", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Paetzold and Lucia Specia. 2016. Collecting and Exploring Everyday Language for Predicting Psycholinguistic Properties of Words. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1669-1679, Osaka, Japan. The COLING 2016 Organizing Committee.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Lexical Simplification with Neural Ranking", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "34--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Paetzold and Lucia Specia. 2017. Lexical Simplification with Neural Ranking. In Proc. 
of EACL, pages 34-40, Valencia, Spain.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Simple PPDB: A Paraphrase Database for Simplification", "authors": [ { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "143--148", "other_ids": { "DOI": [ "10.18653/v1/P16-2024" ] }, "num": null, "urls": [], "raw_text": "Ellie Pavlick and Chris Callison-Burch. 2016. Simple PPDB: A Paraphrase Database for Simplification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 143-148, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners", "authors": [ { "first": "Keisuke", "middle": [], "last": "Sakaguchi", "suffix": "" }, { "first": "Yuki", "middle": [], "last": "Arase", "suffix": "" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "" } ], "year": 2013, "venue": "Proc. of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "238--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keisuke Sakaguchi, Yuki Arase, and Mamoru Komachi. 2013. Discriminative Approach to Fill-in-the-Blank Quiz Generation for Language Learners. In Proc. of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 238-242, Sofia, Bulgaria. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Interpretation of the correlation coefficient: a basic review", "authors": [ { "first": "Richard", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 1990, "venue": "Journal of Diagnostic Medical Sonography", "volume": "6", "issue": "1", "pages": "35--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Taylor. 1990. Interpretation of the correlation coefficient: a basic review. Journal of Diagnostic Medical Sonography, 6(1):35-39.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Probability against \u03b8 when changing the value of a.", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Neural network illustration of a vocabulary size-based prediction.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Estimated LFRs against Gold LFRs.", "num": null } } } }