{ "paper_id": "C12-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:22:50.787336Z" }, "title": "Mining words in the minds of second language learners: learner-specific word difficulty", "authors": [ { "first": "Yo", "middle": [], "last": "Ehara", "suffix": "", "affiliation": { "laboratory": "", "institution": "the University of Tokyo", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "" }, { "first": "Issei", "middle": [], "last": "Sato", "suffix": "", "affiliation": { "laboratory": "", "institution": "the University of Tokyo", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "" }, { "first": "Hidekazu", "middle": [], "last": "Oiwa", "suffix": "", "affiliation": { "laboratory": "", "institution": "the University of Tokyo", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "" }, { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "the University of Tokyo", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While there have been many studies on measuring the size of learners' vocabulary or the vocabulary they should learn, there have been few studies on what kind of words learners actually know. Therefore, we investigated theoretically and practically important models for predicting second language learners' vocabulary and propose another model for this vocabulary prediction task. With the current models, the same word difficulty measure is shared by all learners. This is unrealistic because some learners have special interests. A learner interested in music may know special music-related terms regardless of their difficulty. To solve this problem, our model can define a learner-specific word difficulty measure. 
Our model is also an extension of these current models in the sense that these models are special cases of our model. In a qualitative evaluation, we defined a measure for how learner-specific a word is. Interestingly, the word with the highest learner-specificity was \"twitter\". Although \"twitter\" is a difficult English word, some low-ability learners presumably knew this word through the famous micro-blogging service. Our qualitative evaluation successfully extracted such interesting and suggestive examples. Our model achieved an accuracy competitive with the current models.", "pdf_parse": { "paper_id": "C12-1049", "_pdf_hash": "", "abstract": [ { "text": "While there have been many studies on measuring the size of learners' vocabulary or the vocabulary they should learn, there have been few studies on what kind of words learners actually know. Therefore, we investigated theoretically and practically important models for predicting second language learners' vocabulary and propose another model for this vocabulary prediction task. With the current models, the same word difficulty measure is shared by all learners. This is unrealistic because some learners have special interests. A learner interested in music may know special music-related terms regardless of their difficulty. To solve this problem, our model can define a learner-specific word difficulty measure. Our model is also an extension of these current models in the sense that these models are special cases of our model. In a qualitative evaluation, we defined a measure for how learner-specific a word is. Interestingly, the word with the highest learner-specificity was \"twitter\". Although \"twitter\" is a difficult English word, some low-ability learners presumably knew this word through the famous micro-blogging service. Our qualitative evaluation successfully extracted such interesting and suggestive examples. 
Our model achieved an accuracy competitive with the current models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "When learning second languages, vocabulary knowledge is as important as, or sometimes more important than, grammar. The importance of vocabulary knowledge has been a main focus in the last decade in the field of second language acquisition (SLA).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Studies regarding vocabulary knowledge of second language learners have been mainly focusing on two major tasks: devising methods for measuring the size of the second language vocabulary of learners for testing purposes (Schmitt et al., 2001; Laufer and Nation, 1999; Nation, 1990) and determining the words that the learners should learn (Nation, 2006) . However, there have been few studies on what kind of words learners actually know. This is the basic research question of our work.", "cite_spans": [ { "start": 220, "end": 242, "text": "(Schmitt et al., 2001;", "ref_id": "BIBREF23" }, { "start": 243, "end": 267, "text": "Laufer and Nation, 1999;", "ref_id": "BIBREF16" }, { "start": 268, "end": 281, "text": "Nation, 1990)", "ref_id": "BIBREF19" }, { "start": 339, "end": 353, "text": "(Nation, 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To study what words second language learners actually know, we focused on the vocabulary prediction task. In this task, we aim to build a model that predicts, given a word and a learner, whether or not the learner knows the word. As far as we know, Ehara et al. (2010) is the only study that dealt directly with the vocabulary prediction task. 
They applied this task to a reading support user interface for second language learners that automatically identifies the words unfamiliar to the learner on a Web page.", "cite_spans": [ { "start": 249, "end": 268, "text": "Ehara et al. (2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The vocabulary prediction task is important for both theory and application. From the theoretical point of view, this task is interesting in that it mines the words second language learners know and creates a model of what kinds of words learners actually know. From the model, we can interpret the patterns or tendencies of learners' processes of memorizing second language words. Studying the vocabulary prediction task may also lead to determining whether learners actually learn the words that SLA experts recommend.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From the application point of view, this task can be used in user-adaptation for reading and writing applications to support second language learners. Ehara et al. (2010) 's model is of this type. They successfully showed the effectiveness of their system. With the increase in Web-based language learning environments, possible data sources for learners' vocabulary knowledge are also increasing. Studying the vocabulary prediction task can shed light on these data sources, and they can be used to further understand the vocabulary knowledge of second language learners.", "cite_spans": [ { "start": 151, "end": 170, "text": "Ehara et al. (2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By using machine learning terminology, the vocabulary prediction task can be categorized as a binary classification task: given a word and a learner, it predicts whether or not the learner knows the word. 
Therefore, a number of machine learning methods, such as a support vector machine (SVM) for the binary classification task, can be used as predictors. However, to answer our research question, what kind of words learners actually know, we want predictors to be able to do more than just predict. Rather, we want predictors that are practical and useful for analysis. Specifically, we list the following properties we want predictors to have.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "interpretable weight vector Most predictors use weight vectors trained with data. Weight vectors of some models can be interpreted as quantitative measures of word difficulty and learner ability. Interpretable weight vectors are essential for analysis to find the patterns or tendencies of learners' memorization processes, and to further understand the basic research question: what kind of words do second language learners actually know?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "out-of-sample Settings in the vocabulary prediction task can be divided into two according to how new words are handled: in-matrix and out-of-sample. The in-matrix setting does NOT support new words, i.e., there is at least one training datum for every word appearing in the test data. This can be seen as filling in the blanks of a learner-word matrix. In contrast, the out-of-sample setting supports new words, i.e., some or all words in the test data are missing in the training data. To create the training data, we need to ask learners whether or not they know the words. Thus, creation of the training data is financially costly and burdensome for learners. In a realistic setting, we can ask learners about only a small subset of words, and the predictors usually have to predict all the rest. 
The out-of-sample setting is more difficult but more realistic than the in-matrix setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "learner-specific word difficulty This is the core beneficial property of the proposed model. Some interpretable weight vectors can determine word difficulty. However, the perceived difficulty of a word differs from learner to learner. For example, a learner interested in music may know music-related words that even high-level learners may not be familiar with. For another example, suppose that normally difficult words are used in the names of well-known commercial products and services. In this case, again, low-ability learners may know these words through the product names. Thus, it is preferable for a model to be able to detect this kind of learner specialty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Table 1 : Properties of models (columns: interpretable weight vector; out-of-sample; learner-specific word difficulty). Rasch: yes, no, no. Ehara et al. (2010): yes, yes, no. Proposed: yes, yes, yes. The proposed model supports all preferred properties. Ordinary binary classifiers can only classify: their weight vectors are not interpretable as word difficulty and learner ability, unlike those of the other models listed here. Table 1 summarizes the models explained in this paper. We can see that only the proposed model supports all the properties. Although ordinary binary classifiers, such as SVMs, can be used for the vocabulary prediction task, their weight vectors cannot be used to determine the word difficulty and learner ability measures that we want for analysis. 
Thus, we ruled out typical binary classifiers.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 1", "ref_id": null }, { "start": 370, "end": 377, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The structure of this paper is as follows. We first focus on extending the basic interpretable model: the Rasch model (Rasch, 1960; Baker and Kim, 2004) . Although the Rasch model lacks many of the preferred properties, it provides a rough idea for the vocabulary prediction task. To explain why the Rasch model lacks many of these properties, we then introduce the general form of the likelihood of the Rasch model. This generalization provides a way of supporting the preferred properties. Through this generalization, we can derive the Rasch model, the model proposed by Ehara et al. (2010) , and the proposed model. The contributions of this paper are as follows:", "cite_spans": [ { "start": 118, "end": 131, "text": "(Rasch, 1960;", "ref_id": "BIBREF22" }, { "start": 132, "end": 152, "text": "Baker and Kim, 2004)", "ref_id": "BIBREF1" }, { "start": 574, "end": 593, "text": "Ehara et al. (2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce the general form of likelihood of the Rasch model that can explain the reason this model lacks the desired properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a model that supports all desired properties using this general form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 In an evaluation, our model successfully detected the specialties of second language learners, which the current models cannot detect. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let U be a set of learners, and V be a set of vocabulary. We denote the number of learners as |U| and the number of words as |V |. A datum can be expressed using the triplet ( y, u, v) .", "cite_spans": [ { "start": 174, "end": 184, "text": "( y, u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem setting", "sec_num": "2" }, { "text": "Here, y \u2208 {0, 1} is the label denoting whether or not learner u knows word v, (1, u, v) means that learner u knows word v, and (0, u, v) means he/she does not know word v. Using these notations, the vocabulary prediction task is defined to predict the label y given (u, v) . We denote a dataset of N data as =", "cite_spans": [ { "start": 78, "end": 87, "text": "(1, u, v)", "ref_id": null }, { "start": 266, "end": 272, "text": "(u, v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem setting", "sec_num": "2" }, { "text": "{( y 1 , u 1 , v 1 ), . . . , ( y N , u N , v N )}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem setting", "sec_num": "2" }, { "text": "For simplicity, we assume that for one learner u \u2208 U and word v \u2208 V pair, there exists only one label y. This restriction enables us to depict the data set in a matrix form, as shown in Figure 1 . The rows of the matrix correspond to learners and the columns of the matrix correspond to words. Under this assumption, for one row (learner) and one column (word), there is only one cell; thus, only one label y. 
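The triplet-and-matrix representation described above can be sketched in a few lines. This is a minimal illustration; the toy labels, learner IDs, and words are invented, not taken from the paper's dataset:

```python
import numpy as np

# Toy triplets (y, u, v): y = 1 iff learner u knows word v (invented data).
learners = ["u1", "u2"]
vocab = ["music", "tremble", "worship"]
data = [(1, "u1", "music"), (0, "u1", "tremble"), (1, "u1", "worship"),
        (1, "u2", "music"), (1, "u2", "tremble"), (0, "u2", "worship")]

# Under the one-label-per-(u, v) restriction, the triplets fill a
# |U| x |V| matrix: rows are learners, columns are words.
labels = np.zeros((len(learners), len(vocab)), dtype=int)
for y, u, v in data:
    labels[learners.index(u), vocab.index(v)] = y

print(labels)
print(len(data) == labels.size)  # N equals the number of cells: True
```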
With this restriction, N is the number of cells in the matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem setting", "sec_num": "2" }, { "text": "The dataset we used in the evaluation agrees with this restriction; however, we cannot always assume this restriction in a realistic setting. This is the reason we did not directly jump to matrix-based prediction methods such as low-rank approximation using singular value decomposition. For example, in a realistic dataset, such as word-click logs in a reading support system, contradiction and repetition are common. For contradiction, if both (1, u, v) and (0, u, v) appear in the dataset, it may mean these two datapoints are unreliable. Repetition of multiple (1, u, v) may mean that learner u is more familiar with word v than just one (1, u, v) . All the models that we explain in the later sections of this paper can handle these cases. Figure 1 explains the in-matrix and out-of-sample settings. The hashed areas denote the training data, and the blank areas denote the test data. In the in-matrix setting, the test data are randomly placed in the matrix.", "cite_spans": [ { "start": 446, "end": 455, "text": "(1, u, v)", "ref_id": null }, { "start": 460, "end": 469, "text": "(0, u, v)", "ref_id": null }, { "start": 640, "end": 649, "text": "(1, u, v)", "ref_id": null } ], "ref_spans": [ { "start": 743, "end": 751, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Problem setting", "sec_num": "2" }, { "text": "Although the vocabulary prediction task is quite novel, there has been a substantial amount of work in SLA about which words a learner should learn first. Many studies recommend that learners learn words according to word frequency in general corpora because word frequency can be used as a rough measure of word difficulty. 
Of course, the learner does not necessarily learn the words in this recommended order. As stated in the introduction, it is one of our research questions to check if learners actually learn in this order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "Still, we can come up with the idea that the difficulty of words determines the learners' knowledge of second language words. This idea leads to a very simple model of vocabulary prediction shown in Figure 2 . With this model, we predict a learner's vocabulary with the following steps:", "cite_spans": [], "ref_spans": [ { "start": 199, "end": 207, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "1. We rank words according to a measure of word difficulty. 2. We decide the threshold for a learner. 3. Words with greater difficulty than the threshold are predicted to be unfamiliar to the learner, and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "Although this model seems too simple, it is the core idea of the Rasch model, which has been widely used in language testing. Figure 2 : Simple vocabulary prediction model. (a) First, assume there is a difficulty measure that maps each word to a point on the axis of the measure. (b) Second, each learner's ability is also mapped to a point on the same axis. 
(c) Third, words whose difficulty is greater than the point designating the learner's ability are predicted to be unfamiliar to the learner, and vice versa.", "cite_spans": [], "ref_spans": [ { "start": 126, "end": 134, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "Given learner u and word v, the Rasch model models the probability of learner u knowing word v as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y = 1|u, v) = \u03c3(a_u \u2212 d_v),", "eq_num": "(1)" } ], "section": "Rasch model", "sec_num": "3" }, { "text": "where \u03c3(t) = (1 + exp(\u2212t))^{\u22121} denotes the logistic sigmoid function. There are two kinds of parameters to be trained:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "d_v, the difficulty of word v; a_u, the ability of learner u.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "In the Rasch model, the subtraction of the two parameters, a_u \u2212 d_v, in Eq. (1) denotes exactly the same mechanism as the simple vocabulary prediction in Figure 2 . Here, d_v maps each word v into a point on the axis, and a_u works as a threshold. When P(y = 1|u, v) \u2265 0.5, we can assume learner u knows word v. 
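This decision rule can be sketched in a few lines. The ability and difficulty values below are invented for illustration:

```python
import math

def sigmoid(t):
    """Logistic sigmoid: sigma(t) = 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + math.exp(-t))

def p_knows(a_u, d_v):
    # Eq. (1): P(y = 1 | u, v) = sigma(a_u - d_v)
    return sigmoid(a_u - d_v)

# Illustrative values: ability 0.5, one easy word, one hard word.
a_u = 0.5
print(p_knows(a_u, -1.0) >= 0.5)  # True: a_u >= d_v, word predicted known
print(p_knows(a_u, 2.0) >= 0.5)   # False: a_u < d_v, word predicted unknown
```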
Due to the logistic sigmoid function, P(y = 1|u, v) \u2265 0.5 holds true if and only if", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 157, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "a_u \u2212 d_v \u2265 0, that is, a_u \u2265 d_v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "Therefore, the Rasch model determines that learner u knows all words whose word difficulty d_v is lower than the learner's ability a_u. Note that not only the ability a_u of the learner but also the difficulty d_v of each word is estimated from the data in the Rasch model. The priors for the parameters are usually set as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(a_u|\u03b7_a) = N(0, \u03b7_a^{\u22121}) (\u2200u \u2208 U),", "eq_num": "(2)" } ], "section": "Rasch model", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(d_v|\u03b7_d) = N(0, \u03b7_d^{\u22121}) (\u2200v \u2208 V),", "eq_num": "(3)" } ], "section": "Rasch model", "sec_num": "3" }, { "text": "where N denotes the probability density function of the normal distribution. Frequently, the hyperparameters \u03b7_a and \u03b7_d are set as \u03b7_a = \u03b7_d. In that case, the parameters d_v and a_u of the Rasch model can be obtained using a standard log-linear model solver.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "One of the notable problems with the Rasch model is that it does not take into account the out-of-sample setting. That is, it cannot predict words that do not appear in the training set. 
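A minimal sketch of why this happens (toy values and names of our own, not the paper's): the per-word parameters of the Rasch model form a lookup table, so an unseen word has no difficulty estimate at all:

```python
# Trained Rasch difficulties: one parameter per word seen in training
# (illustrative values).
d = {"music": -1.0, "tremble": 2.0}

def rasch_difficulty(v):
    # A 1-to-1 word-to-parameter mapping cannot generalize to new words.
    if v not in d:
        raise KeyError(f"no trained difficulty for unseen word {v!r}")
    return d[v]

print(rasch_difficulty("tremble"))   # 2.0: seen in training
try:
    rasch_difficulty("twitter")      # never seen: prediction is impossible
except KeyError as e:
    print(e)
```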
For example, if there is a new word in a document in a reading support system, we need to re-create the training set with the new word for the system to be able to predict that word as well. This restriction makes application systems based on the vocabulary prediction task impractical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rasch model", "sec_num": "3" }, { "text": "In the previous section, we stated that the Rasch model does not work under the out-of-sample setting, which frequently occurs in a realistic setting. This section attempts to locate the fundamental reason the out-of-sample problem arises by generalizing the likelihood of the Rasch model. Let us discuss the difficulty parameter d_v of the Rasch model from another perspective. If we define a function as f(v) = d_v, we can understand that d_v is a function that takes word v as its argument and returns the difficulty of word v. This means that we do not need to allocate |V| variables to determine word difficulty as the Rasch model does. Instead, all that we need is a function that returns the word difficulty for a given word v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General form of likelihood", "sec_num": "4" }, { "text": "We can further extend f to the form f(u, v): a function that takes learner u and word v as its arguments and returns the difficulty of word v for learner u. By using f(u, v), we can generalize the likelihood function of the Rasch model as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General form of likelihood", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y = 1|u, v) = \u03c3(a_u \u2212 f(u, v)).", "eq_num": "(4)" } ], "section": "General form of likelihood", "sec_num": "4" }, { "text": "The Rasch model is a special version of Eq. (4) where we set f(u, v) = d_v. 
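A sketch of Eq. (4) with f as a pluggable function; all names and values here are illustrative, not from the paper:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def p_knows(a_u, f, u, v):
    # Eq. (4): P(y = 1 | u, v) = sigma(a_u - f(u, v))
    return sigmoid(a_u - f(u, v))

# The Rasch model is recovered by an f that ignores the learner:
d = {"music": -1.0, "tremble": 2.0}   # illustrative difficulties
rasch_f = lambda u, v: d[v]           # f(u, v) = d_v

print(p_knows(0.5, rasch_f, "learner1", "music") >= 0.5)   # True: known
print(p_knows(0.5, rasch_f, "learner1", "tremble") >= 0.5) # False: unknown
```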
We can see that the fundamental cause of the out-of-sample problem in the Rasch model comes from this poorly designed f . There is a 1-to-1 mapping between parameters and words in this design of f . Therefore, if some words are missing in the training set, some parameters are left untrained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General form of likelihood", "sec_num": "4" }, { "text": "Note that Eq. (4) generalizes only the likelihood of the Rasch model. Of course, to fully define a model, we must define priors as well. Moreover, the priors must be designed carefully; otherwise, a model can produce poor results regardless of the design of f .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General form of likelihood", "sec_num": "4" }, { "text": "One may think of extending the learner ability parameter a_u to be a function as well. Of course, we can do this extension in theory. However, unlike word difficulty parameters, little information is practically available about learners. Therefore, it is preferable for a model to require as little information from learners as possible. Since a complex design of f may require much information, we kept the learner ability parameter a_u simple.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General form of likelihood", "sec_num": "4" }, { "text": "By redesigning f in the general form of the likelihood, we can cope with the out-of-sample setting. One way to design such an f is to set it as f(u, v) = w^\u22a4\u03c6(v). Here, \u03c6 : V \u2192 \u211d^K is a feature function. Given word v, it returns a feature vector for it. Let K be the dimension of the feature space. 
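A sketch of this feature-based difficulty; the feature map and weight values below are invented for illustration:

```python
import numpy as np

# Hypothetical corpus log-frequencies (invented values).
corpus_log_freq = {"music": 9.0, "tremble": 4.5}

def phi(v):
    # Feature map phi : V -> R^K with K = 2 here: a bias feature plus a
    # negated log frequency (rarer words get a larger feature value).
    return np.array([1.0, -corpus_log_freq.get(v, 0.0)])

w = np.array([0.3, 0.8])  # illustrative trained difficulty weights

def difficulty(v):
    # f(u, v) = w^T phi(v): defined for any word whose features exist,
    # including words never seen in training.
    return float(w @ phi(v))

print(difficulty("tremble") > difficulty("music"))  # the rarer word is harder
```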
Typically, frequencies from large corpora can be used as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared difficulty model", "sec_num": "4.1" }, { "text": "Even if there is a new word in the test data, as long as there are words in the training data that share features with the new word, the word difficulty of the new word can be obtained by calculating w^\u22a4\u03c6(v). The full form of the likelihood becomes the following.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared difficulty model", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y = 1|u, v; w) = \u03c3(a_u \u2212 w^\u22a4\u03c6(v)).", "eq_num": "(5)" } ], "section": "Shared difficulty model", "sec_num": "4.1" }, { "text": "Priors for the likelihood Eq. (5) are set as follows. We call this model the shared difficulty model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shared difficulty model", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(a_u|\u03b7_a) = N(0, \u03b7_a^{\u22121}) (\u2200u \u2208 U),", "eq_num": "(6)" } ], "section": "Shared difficulty model", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w|\u03b7_w) = N(0, \u03b7_w^{\u22121}I),", "eq_num": "(7)" } ], "section": "Shared difficulty model", "sec_num": "4.1" }, { "text": "where I denotes the K \u00d7 K identity matrix. If we set \u03b7_w = \u03b7_a, this model reduces to a simple l2-norm-regularized logistic regression as Ehara et al. (2010) used. However, they did not mention the out-of-sample setting or the general likelihood.", "cite_spans": [ { "start": 145, "end": 164, "text": "Ehara et al. 
(2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Shared difficulty model", "sec_num": "4.1" }, { "text": "One problem in both the Rasch and shared difficulty models is that all learners share a single word difficulty measure. This means that the same ranking of words is shared by all the learners, e.g., the word \"tremble\" is more difficult than \"worship\" according to all the learners. Thus, the Rasch and shared difficulty models cannot take into account a learner's specialty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed model", "sec_num": "5" }, { "text": "In reality, it is common that even low-ability learners know difficult words with the help of their interests in a specific topic. For example, learners who are interested in music are likely to have a large vocabulary of music-related words in second languages regardless of the difficulty of the words. Modeling this kind of learner specialty is essential in designing user-adaptive support for second language learners. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed model", "sec_num": "5" }, { "text": "Rasch: f(u, v) = d_v; priors P(a_u|\u03b7_a) = N(0, \u03b7_a^{\u22121}), P(d_v|\u03b7_d) = N(0, \u03b7_d^{\u22121}). Shared difficulty model (Ehara et al., 2010): f(u, v) = w^\u22a4\u03c6(v); priors P(a_u|\u03b7_a) = N(0, \u03b7_a^{\u22121}), P(w|\u03b7_w) = N(0, \u03b7_w^{\u22121}I); reduced to the Rasch model if \u03c6(v) is 1-dimensional and \u03c6(v) = 1. Proposed: f(u, v) = w_u^\u22a4\u03c6(v); priors P(a_u|\u03b7_a) = N(0, \u03b7_a^{\u22121}), P(w_0) = N(0, \u03b7_w^{\u22121}I), P(w_u|w_0) = N(w_0, \u03bb^{\u22121}I).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed model", "sec_num": "5" }, { "text": "Reduced to the shared difficulty model if we set w_u = w_0 (\u2200u \u2208 U). Table 2 : Summary of models explained so far. 
The Rasch model is a special case of the shared difficulty model, and the shared difficulty model is a special case of the proposed model. On the right side of the axis in Figure 3 , learner thresholds are plotted according to the learners' ability parameters a_u. The predictor determines that a learner does not know any of the words above his/her threshold. In Figure 3 (a) , all three learners share the same word difficulty. Therefore, the model cannot represent a learner who knows the word \"worship\" but does not know the word \"tremble\". This problem can be solved by introducing a difficulty axis for every learner as Figure 3 (b) does. In (b), \"learner 1\" is modeled as knowing the word \"worship\" but not the word \"tremble\", while \"learner 2\" is modeled as knowing the word \"tremble\" but not the word \"worship\". This kind of flexible modeling is impossible in the Rasch and shared difficulty models. With the general model explained above, we can easily explain the fundamental cause of this problem: in the current models, f(u, v) depends only on v, and does not depend on u. Therefore, tackling this problem is simple: let f(u, v) depend on u as well. In the proposed model, we define f(u, v) = w_u^\u22a4\u03c6(v). The full form of the likelihood is shown as follows.", "cite_spans": [], "ref_spans": [ { "start": 457, "end": 469, "text": "Figure 3 (a)", "ref_id": "FIGREF1" }, { "start": 719, "end": 727, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Proposed model", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y = 1|u, v; w_u) = \u03c3(a_u \u2212 w_u^\u22a4\u03c6(v)).", "eq_num": "(8)" } ], "section": "Proposed model", "sec_num": "5" }, { "text": "This likelihood has far more parameters to be trained than the current models. 
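A sketch of Eq. (8) with invented per-learner weights, reproducing the "tremble"/"worship" disagreement discussed above (all feature vectors and parameter values are illustrative):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Illustrative 2-dimensional features: one indicator per word.
phi = {"tremble": np.array([1.0, 0.0]),
       "worship": np.array([0.0, 1.0])}

# Per-learner weight vectors w_u (invented values): the two learners
# disagree about which word is difficult.
w_u = {"learner1": np.array([3.0, -1.0]),   # "tremble" hard, "worship" easy
       "learner2": np.array([-1.0, 3.0])}   # the opposite
a = {"learner1": 0.5, "learner2": 0.5}

def p_knows(u, v):
    # Eq. (8): P(y = 1 | u, v; w_u) = sigma(a_u - w_u^T phi(v))
    return float(sigmoid(a[u] - w_u[u] @ phi[v]))

# Learner-specific difficulty, as in Figure 3(b):
print(p_knows("learner1", "worship") > 0.5)  # True
print(p_knows("learner1", "tremble") > 0.5)  # False
print(p_knows("learner2", "tremble") > 0.5)  # True
```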
Since the dimension size of the feature space is K, w_u is a K-dimensional vector. Since we have |U| learners, we have K|U| parameters to tune in total. Priors must be carefully designed to tune this large number of parameters. We designed the priors as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed model", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(a_u|\u03b7_a) = N(0, \u03b7_a^{\u22121}) (\u2200u \u2208 U),", "eq_num": "(9)" } ], "section": "Proposed model", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_0) = N(0, \u03b7_w^{\u22121}I),", "eq_num": "(10)" } ], "section": "Proposed model", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_u|w_0) = N(w_0, \u03bb^{\u22121}I).", "eq_num": "(11)" } ], "section": "Proposed model", "sec_num": "5" }, { "text": "Eq. (11) is an important prior that does not appear in the current models. This prior makes each w_u close to w_0 and thereby makes the w_u dependent on each other. The larger \u03bb is, the stronger this effect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed model", "sec_num": "5" }, { "text": "Note that both the shared difficulty model discussed by Ehara et al. (2010) and the Rasch model are actually special cases of the proposed model; we extended the Rasch and shared difficulty models into the proposed model. The constraints to reduce the proposed model into these models are summarized in Table 2 .", "cite_spans": [ { "start": 56, "end": 75, "text": "Ehara et al. 
(2010)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 303, "end": 310, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Proposed model", "sec_num": "5" }, { "text": "This section describes methods for estimating the model parameters. We use maximum a posteriori (MAP) estimation for all three models: Rasch, shared difficulty, and proposed. As we explained, the shared difficulty and Rasch models are special cases of the proposed model. Therefore, we first explain the optimization of the proposed model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "The negative log posterior of the proposed model takes the following form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "l(W, a, w_0) = \\sum_{i=1}^{N} nll(y_i, u_i, v_i) + \\frac{\\lambda}{2} \\sum_{u \\in U} \\|w_u - w_0\\|^2 + \\frac{\\eta_w}{2} \\|w_0\\|^2 + \\frac{\\eta_a}{2} \\sum_{u \\in U} a_u^2.", "eq_num": "(13)" } ], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "We define the negative log likelihood function of the proposed model as nll(y, u, v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "def = log(1 + exp(-y(a_u - w_u^\u22a4 \u03c6(v))))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": ". We define W and a as follows for concise notation: W = {w_u | \u2200u \u2208 U}, a = {a_u | \u2200u \u2208 U}. This function l(W, a, w_0) is convex (Kajino et al., 2012) over all the variables W, a, w_0. Thus, the MAP model parameters \u0174, \u00e2, and \u0175_0 can be estimated by minimizing l(W, a, w_0) w.r.t.
W, a, and w_0.", "cite_spans": [ { "start": 127, "end": 148, "text": "(Kajino et al., 2012)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "Based on Kajino et al. (2012), we minimize l(W, a, w_0) by alternating the following two steps. Minimizing w.r.t. W, a: We fix w_0 and minimize l(W, a, w_0) w.r.t. W and a. Kajino et al. (2012) used the Newton method for this optimization. The Newton method requires O(K^2) memory, where K is the dimension of w_u and w_0, which becomes problematic as K increases. To tackle this problem, we instead used L-BFGS (Liu and Nocedal, 1989), which requires only O(K) memory. Specifically, we used the library liblbfgs (Okazaki, 2007). Minimizing w.r.t. w_0: We fix W and a and minimize l w.r.t. w_0. This minimization can be done analytically as follows:", "cite_spans": [ { "start": 152, "end": 172, "text": "Kajino et al. (2012)", "ref_id": "BIBREF13" }, { "start": 387, "end": 410, "text": "(Liu and Nocedal, 1989)", "ref_id": "BIBREF17" }, { "start": 520, "end": 535, "text": "(Okazaki, 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_0 = \\frac{\\lambda}{\\eta_w + |U|\\lambda} \\sum_{u \\in U} w_u.", "eq_num": "(14)" } ], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "We repeated these two minimizations alternately until convergence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "Both the Rasch and shared difficulty models are special cases of the proposed model when w_u = w_0 (\u2200u \u2208 U). This means that the second minimization is unnecessary for the Rasch and shared difficulty models.
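The closed-form update of Eq. (14) can be checked numerically. The sketch below uses toy values for \u03bb, \u03b7_w, and the weight vectors, and verifies that the gradient of the w_0-dependent terms of l vanishes at the update:

```python
# With W and a fixed, the only terms of l that involve w_0 are
#   (lam/2) * sum_u ||w_u - w_0||^2 + (eta_w/2) * ||w_0||^2,
# whose minimizer is w_0 = lam / (eta_w + |U| * lam) * sum_u w_u  (Eq. 14).
# The values below are illustrative toy numbers only.

lam, eta_w = 2.0, 0.5
W = [[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]]  # one weight vector per learner
n_users, dim = len(W), len(W[0])

col_sums = [sum(w[k] for w in W) for k in range(dim)]
w0 = [lam / (eta_w + n_users * lam) * s for s in col_sums]

# Check: the gradient eta_w * w_0 + lam * sum_u (w_0 - w_u) is zero here.
grad = [eta_w * w0[k] + lam * sum(w0[k] - w[k] for w in W) for k in range(dim)]
```

In a full implementation, the first step (minimizing over W and a with w_0 fixed) would be handled by an off-the-shelf L-BFGS routine, as the text describes.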
Thus, the parameters, i.e., the weight vector, of the Rasch and shared difficulty models can be obtained by simply performing the first minimization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Estimation of model parameters", "sec_num": "6" }, { "text": "We used the same dataset as Ehara et al. (2010). Table 3 lists the feature-source corpora; among them, Google 1-gram (Brants and Franz, 2006) is a mixed-genre corpus of 1,024,948 million tokens, which is huge but not general.", "cite_spans": [ { "start": 28, "end": 47, "text": "Ehara et al. (2010)", "ref_id": "BIBREF5" }, { "start": 48, "end": 72, "text": "(Brants and Franz, 2006)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 116, "end": 123, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "7.1" }, { "text": "This dataset was designed to be quite exhaustive. Every learner was handed a randomly sorted questionnaire comprising 12,000 words and asked how well he/she knew each word on a five-point scale. We regarded only level 5 as y = 1 (the learner knows the word); otherwise, we regarded the response as y = 0 (the learner does not know the word). Out of the 12,000 words, 1 word was a pseudo-word, i.e., it looks like an English word but actually is not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "7.1" }, { "text": "Fifteen learners were paid, and 1 learner was not. Since we found that the unpaid learner's data were too noisy, we used only the data of the 15 paid learners. We thus had |V| = 11,999 words \u00d7 |U| = 15 learners; 179,985 data points in total.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "7.1" }, { "text": "The negative log of the 1-gram probability of each word in each corpus is used as a feature for training. The corpora collected as feature sources are compiled in Table 3. Ehara et al. (2010) used one large corpus, Google 1-gram.
However, from the perspective of SLA, relying on it alone is hard to justify because it is not a general corpus; thus, its word frequencies could be biased. To avoid this bias, we collected many general corpora and used them as features.", "cite_spans": [ { "start": 175, "end": 194, "text": "Ehara et al. (2010)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 165, "end": 172, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "7.1" }, { "text": "When training, hyperparameters were chosen by grid search with 5-fold cross validation within the training set. The set of hyperparameters that performed best in this cross validation was selected. Then, we trained the model on the whole training set using the selected hyperparameters and applied it to the test set to obtain the results. For the Rasch and shared difficulty models, each hyperparameter, \u03b7_d, \u03b7_a, and \u03b7_w, was chosen by grid search from {0.01, 2^{-3}, 2^{-2}, 2^{-1}, 1.0, 2^{1}, 2^{2}, 2^{4}}. For the proposed model, each hyperparameter, \u03b7_a, \u03b7_w, and \u03bb, was chosen by grid search from {2^{-2}, 2^{-1}, 1.0, 2^{1}, 2^{2}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "7.1" }, { "text": "Unlike the current models, the proposed model was designed to support learner-specific word difficulty. It is therefore interesting to see which words are the most learner-specific.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "As a measure of learner-specificity, we introduce the variance of learner-specific word difficulty.
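The tuning procedure described above can be sketched as follows. This is a schematic only: the scoring function below is a dummy placeholder standing in for actual 5-fold cross-validation training; only the grid values for the proposed model come from the text:

```python
import itertools

# Grid for the proposed model's hyperparameters (eta_a, eta_w, lambda).
grid = [2**-2, 2**-1, 1.0, 2**1, 2**2]

def cv_score(eta_a, eta_w, lam):
    """Placeholder for mean 5-fold cross-validation accuracy on the
    training set; a real implementation would train on 4 folds and
    evaluate on the held-out fold, averaging over the 5 folds."""
    return -((eta_a - 1.0) ** 2 + (eta_w - 0.5) ** 2 + (lam - 2.0) ** 2)

# Exhaustive grid search: pick the triple with the best CV score, then
# retrain on the whole training set before touching the test set.
best = max(itertools.product(grid, grid, grid), key=lambda p: cv_score(*p))
```

With the dummy score above, the search returns (1.0, 0.5, 2.0); in practice the selected triple depends on the cross-validation results.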
In the proposed model, the learner-specific difficulty", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "f(u, v) of word v for learner u is defined as f(u, v) = w_u^\u22a4 \u03c6(v).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "Unlike the current models, which assign a single word difficulty to all learners, we can naturally define the variance of word difficulty over learners. Given the set of estimated weight vectors for all |U| learners, {\u0175_u | u \u2208 U}, for word v \u2208 V we define Mean(v) and Var(v) as follows:", "cite_spans": [ { "start": 262, "end": 265, "text": "(v)", "ref_id": null }, { "start": 270, "end": 276, "text": "Var(v)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Mean(v) = \\frac{1}{|U|} \\sum_{u \\in U} f(u, v) = \\frac{1}{|U|} \\sum_{u \\in U} \\hat{w}_u^{\\top} \\phi(v),", "eq_num": "(15)" } ], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "Table 4 lists the words with the largest variances Var(v) in descending order. Var(v) increases when some low-ability learners know the word and some high-ability learners do not. In other words, it increases when low-ability learners know the word for some reason other than the easiness of the word, and vice versa. Table 4 is constructed from the weight vectors of the proposed model. The weight vectors were trained in the in-matrix setting. Out of 179,985 data points, 177,985 were used for training. Features and hyperparameter tuning are explained in \u00a77.1.
2,000 data points were used to check the accuracy, which was 83.40%.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 11, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 319, "end": 326, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "Var(v) = \\frac{1}{|U|} \\sum_{u \\in U} (f(u, v) - Mean(v))^2 = \\frac{1}{|U|} \\sum_{u \\in U} (\\hat{w}_u^{\\top} \\phi(v) - Mean(v))^2. (16)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "For example, it is very interesting and noteworthy that the word \"twitter\" comes at the top of the list in Table 4. This is presumably due to the famous micro-blogging service, Twitter. The word \"twitter\" itself is a rare word. For example, in the British National Corpus, the frequency of the word \"twitter\" is merely 17, while that of the word \"the\" is 6,043,900. The words whose frequency is the same as that of \"twitter\" include \"abet\", \"beguile\", and \"coddle\". Since these three words are in the dataset as well, the rareness of the word alone cannot explain the large variance of the word \"twitter\". This dataset was created in Japan in January 2009, when Twitter was not as predominant as it is today. Therefore, some low-level learners knew the word \"twitter\" through the name of the service while some high-level learners did not. Additionally, Table 4 ranks another similar example third: \"kindle\". The first Amazon Kindle was released in the United States in 2007.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 843, "end": 850, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "Likewise, in the rightmost column of Table 4, we annotated presumable reasons that Var(v) increased.
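Eqs. (15) and (16) amount to the following computation; this is a minimal sketch in which the weight vectors and feature values are toy inputs, not the trained ones:

```python
def mean_and_var_difficulty(W_hat, phi_v):
    """Mean(v) and Var(v) of the learner-specific difficulty
    f(u, v) = w_u . phi(v), taken over all learners u (Eqs. 15-16)."""
    diffs = [sum(w * p for w, p in zip(w_u, phi_v)) for w_u in W_hat]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return mean, var

# Toy weight vectors for three learners and one word's feature vector.
W_hat = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
phi_v = [2.0, 4.0]

mean_v, var_v = mean_and_var_difficulty(W_hat, phi_v)
# A large Var(v) flags a learner-specific word such as "twitter":
# its difficulty differs widely across learners.
```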
These reasons are speculative; however, identifying the true reason a learner knows a word is difficult even for learners themselves, because we usually do not remember how we learned foreign words. Our speculations are intuitive and understandable for Japanese-native English as a Second Language (ESL) learners.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "Product name The words \"twitter\" and \"kindle\" correspond to this case. When a difficult word is used as the name of a famous product, even low-ability learners may know the word through the name of the product, which makes the variance larger. Loanwords in L1 Some words in the second language are borrowed into the learners' native language, or L1, as loanwords. However, the spelling of a loanword in L1 can differ from that of the original. For example, the Japanese loanword corresponding to the word \"mantle\" (Japanese being the native language of most learners in the dataset used) is spelled \"mantoru\". In this case, the word's difficulty has little influence on whether or not learners know the word. Rather, whether or not the learner can recognize the loanword in spite of the spelling difference has more influence. Thus, even low-ability learners can perceive the meaning of the word through its corresponding loanword in L1, which makes the variance larger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of learner-specificity", "sec_num": "7.2" }, { "text": "If there are two words that are homophones in the learners' native language, and one of the two words is easier than the other, a low-ability learner may mistake the difficult one for the easy one.
For example, the large variance of the word \"rink\" is presumably caused by low-ability learners mistaking it for the word \"link\", because the Japanese language does not distinguish \"l\" and \"r\". Similarly, since Japanese has no distinction between \"par\" and \"per\", the large variance of the word \"parson\" is presumably due to some learners mistaking this word for the word \"person\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophones in L1", "sec_num": null }, { "text": "Topic specific Low-ability learners interested in a topic are likely to know the words of that topic regardless of the words' difficulty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophones in L1", "sec_num": null }, { "text": "Homonyms in English \"smelt\" is a verb that means to extract metal by heating. Yet, it is also the past participle of the word \"smell\". Although conjugated forms were removed from this dataset, some low-ability learners presumably did not notice this and thought that they were being asked whether they knew the word \"smelt\" as the past participle of \"smell\". Some high-ability learners presumably knew that the word \"smelt\" has a meaning other than the past participle of \"smell\" and realized that they were not being asked about the past participle; if they did not know this other meaning, they answered no in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Homophones in L1", "sec_num": null }, { "text": "Note that the variance of the learners' response y for a word in the raw data cannot produce an interesting listing like Table 4 because y is binary, 0 or 1. It trivially lists words that about half the learners in the dataset know. For example, if there are 15 learners in a dataset, the words with the highest variance of y are trivially those that 8 learners knew and 7 did not, or that 7 learners knew and 8 did not.
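The triviality of the raw y variance can be checked directly. This small script assumes only that y is binary and that there are 15 learners, as in the dataset:

```python
from fractions import Fraction

# For binary y, the variance over |U| learners depends only on the number
# k of learners who answered y = 1: Var = p * (1 - p) with p = k / |U|.
# Fraction gives exact arithmetic, so equal variances compare as equal.
U = 15

def y_variance(k, n=U):
    p = Fraction(k, n)
    return p * (1 - p)

variances = {k: y_variance(k) for k in range(U + 1)}
max_var = max(variances.values())
tied = sorted(k for k, v in variances.items() if v == max_var)
# With 15 learners, k = 7 and k = 8 tie for the maximum, so every word
# known by exactly 7 or 8 learners lands at the top of the listing.
```

This is why 1,408 words tie for the highest y variance in the dataset, whereas Var(v) from the model produces a genuine ranking.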
This means that many words have the highest y variance. In this dataset, 1,408 of 11,999 words had the highest y variance. Therefore, y variance does not produce any interesting results.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Homophones in L1", "sec_num": null }, { "text": "In contrast to Table 4, the words with the smallest Var(v) are trivial. They are words all the learners knew or all the learners did not know. The 30 words with the smallest variances were: am, beach, doll, during, eastern, equal, excellent, green, handwriting, hungry, important, logic, love, luck, marine, paradise, shop, technical, writing, pet, unknown, loose, maker, acquittal, arduous, cot, exchequer, hindsight, innuendo, and purr. Finally, we investigated the accuracy in the out-of-sample setting. We split the 11,999 words into 2,000 words for the test set and the rest for the training set. The size of the training data was 149,985 and the size of the test data was 30,000. Hyperparameter tuning and the feature set were the same as stated in \u00a77.1. The Rasch model achieved 66.32%, the shared difficulty model (Ehara et al., 2010) achieved 77.67%, and the proposed model achieved 77.81%.", "cite_spans": [ { "start": 191, "end": 435, "text": "beach, doll, during, eastern, equal, excellent, green, handwriting, hungry, important, logic, love, luck, marine, paradise, shop, technical, writing, pet, unknown, loose, maker, acquittal, arduous, cot, exchequer, hindsight, innuendo, and purr.", "ref_id": null }, { "start": 815, "end": 835, "text": "(Ehara et al., 2010)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Homophones in L1", "sec_num": null }, { "text": "The proposed model is mathematically very similar to the models proposed by Evgeniou and Pontil (2004) and Kajino et al. (2012).
However, these models serve purposes quite different from ours: Evgeniou and Pontil (2004) aimed at multi-task learning, and Kajino et al. (2012) aimed at crowd-sourcing. As the Rasch model is rarely used for these purposes, they did not mention the relationship between the Rasch and proposed models, let alone the generalization of the likelihood of the Rasch model. Strictly speaking, these two models differ from our model in that they do not include the Rasch and shared difficulty models (Ehara et al., 2010) as special cases, while our proposed model does.", "cite_spans": [ { "start": 71, "end": 97, "text": "Evgeniou and Pontil (2004)", "ref_id": "BIBREF6" }, { "start": 102, "end": 122, "text": "Kajino et al. (2012)", "ref_id": "BIBREF13" }, { "start": 193, "end": 219, "text": "Evgeniou and Pontil (2004)", "ref_id": "BIBREF6" }, { "start": 253, "end": 273, "text": "Kajino et al. (2012)", "ref_id": "BIBREF13" }, { "start": 622, "end": 642, "text": "(Ehara et al., 2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "We extended word difficulty to learner-specific word difficulty by focusing on the analysis of the vocabulary knowledge of adult second language learners. Aside from second languages, the study of vocabulary knowledge is also important for the analysis of child development in the native language. In computational linguistics, Kireyev and Landauer (2011) proposed an extension of word difficulty called \"word maturity\" by focusing on the analysis of child development in the native language. Their extension aimed to \"track the degree of knowledge of each word at different stages of language learning\" using latent semantic analysis.
Thus, both their purpose and method of extending word difficulty differ from ours.", "cite_spans": [ { "start": 329, "end": 356, "text": "Kireyev and Landauer (2011)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "While few have studied the vocabulary prediction task, prediction of text readability has been of great focus (Fran\u00e7ois and Fairon, 2012; Feng et al., 2010; Kate et al., 2010) in computational linguistics. The relationship between vocabulary knowledge and text readability has been thoroughly studied by educational experts (Nation, 2006) .", "cite_spans": [ { "start": 110, "end": 137, "text": "(Fran\u00e7ois and Fairon, 2012;", "ref_id": "BIBREF10" }, { "start": 138, "end": 156, "text": "Feng et al., 2010;", "ref_id": "BIBREF8" }, { "start": 157, "end": 175, "text": "Kate et al., 2010)", "ref_id": "BIBREF14" }, { "start": 324, "end": 338, "text": "(Nation, 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "A substantial amount of work has been done by mainly SLA experts in estimating vocabulary size. Two major testing approaches have been proposed: multiple-choice, (Nation, 1990) , and Yes/No (Meara and Buxton, 1987) . For Yes/No tests, Eyckmans (2004) studied the validity and relation to readability prediction.", "cite_spans": [ { "start": 162, "end": 176, "text": "(Nation, 1990)", "ref_id": "BIBREF19" }, { "start": 190, "end": 214, "text": "(Meara and Buxton, 1987)", "ref_id": "BIBREF18" }, { "start": 235, "end": 250, "text": "Eyckmans (2004)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "In the field of psychology, the shared difficulty model (Ehara et al., 2010) is almost mathematically identical to the linear logistic test model (LLTM) (Fischer, 1983) . 
Also, the vocabulary that humans memorize is studied as the \"mental lexicon\" (Amano and Kondo, 1998) , although most of the mental-lexicon work is not aimed at predicting vocabulary.", "cite_spans": [ { "start": 56, "end": 76, "text": "(Ehara et al., 2010)", "ref_id": "BIBREF5" }, { "start": 153, "end": 168, "text": "(Fischer, 1983)", "ref_id": "BIBREF9" }, { "start": 244, "end": 267, "text": "(Amano and Kondo, 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "8" }, { "text": "We proposed a model for the vocabulary prediction task. Although there have been few studies on this task, it is interesting from both theoretical and practical points of view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "We introduced three preferred properties for predictors for this task: interpretable weight vector, out-of-sample setting, and learner-specific word difficulty. Typical machine-learning classifiers, such as SVMs, lack the first property, an interpretable weight vector. Although the Rasch model has this property, it lacks the latter two properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "To understand why the Rasch model lacks the latter two properties, we introduced the general form of the Rasch model. From this general form, we derived our proposed model, which supports the latter two properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "In the qualitative evaluation, we wanted to see which words are the most learner-specific. Therefore, we introduced the variance of learner-specific word difficulty and listed the top 30 words with the largest variances. The results exhibited social aspects of the learners.
For example, \"twitter\" and \"kindle\" came first and third, which suggests that some low-ability learners know these words through service and product names, although they are usually difficult English words. Note that this analysis is possible because the proposed model supports the third property, learner-specific difficulty. Since the current models do not support this property, this analysis is impossible with these models. Moreover, the proposed model achieved accuracy competitive with the current models under the out-of-sample setting, which is more realistic than the in-matrix setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "Future work includes using topic models to determine learners' specialties. We also plan to introduce a sparse prior, such as a Laplace prior, instead of a Gaussian prior on the user-specific weight vector in Eq. (11) to obtain a more concise model in which only a few of each user's weights deviate from the overall weights.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Estimation of mental lexicon size with word familiarity database", "authors": [ { "first": "S", "middle": [], "last": "Amano", "suffix": "" }, { "first": "T", "middle": [], "last": "Kondo", "suffix": "" } ], "year": 1998, "venue": "Fifth International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amano, S. and Kondo, T. (1998). Estimation of mental lexicon size with word familiarity database.
In Fifth International Conference on Spoken Language Processing.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Item Response Theory: Parameter Estimation Techniques", "authors": [ { "first": "F", "middle": [ "B" ], "last": "Baker", "suffix": "" }, { "first": "S.-H", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baker, F. B. and Kim, S.-H. (2004). Item Response Theory: Parameter Estimation Techniques. Marcel Dekker, New York, second edition.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Web 1T 5-gram Version 1. LDC2006T13", "authors": [ { "first": "T", "middle": [], "last": "Brants", "suffix": "" }, { "first": "A", "middle": [], "last": "Franz", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brants, T. and Franz, A. (2006). Web 1T 5-gram Version 1. LDC2006T13.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "N-grams data from the corpus of contemporary american english (coca)", "authors": [ { "first": "M", "middle": [], "last": "Davies", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davies, M. (2011). 
N-grams data from the corpus of contemporary american english (coca).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Personalized reading support for second-language web documents by collective intelligence", "authors": [ { "first": "Y", "middle": [], "last": "Ehara", "suffix": "" }, { "first": "N", "middle": [], "last": "Shimizu", "suffix": "" }, { "first": "T", "middle": [], "last": "Ninomiya", "suffix": "" }, { "first": "H", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 15th international conference on Intelligent user interfaces (IUI 2010)", "volume": "", "issue": "", "pages": "51--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehara, Y., Shimizu, N., Ninomiya, T., and Nakagawa, H. (2010). Personalized reading support for second-language web documents by collective intelligence. In Proceedings of the 15th international conference on Intelligent user interfaces (IUI 2010), pages 51-60, Hong Kong, China. ACM.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Regularized multi-task learning", "authors": [ { "first": "T", "middle": [], "last": "Evgeniou", "suffix": "" }, { "first": "M", "middle": [], "last": "Pontil", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 10th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "109--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeniou, T. and Pontil, M. (2004). Regularized multi-task learning. In Proceedings of the 10th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD 2004), pages 109-117. 
ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Measuring receptive vocabulary size : reliability and validity of the yes/no vocabulary test for French-speaking learners of Dutch", "authors": [ { "first": "J", "middle": [], "last": "Eyckmans", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eyckmans, J. (2004). Measuring receptive vocabulary size : reliability and validity of the yes/no vocabulary test for French-speaking learners of Dutch. PhD thesis, Radboud University Nijmegen.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A comparison of features for automatic readability assessment", "authors": [ { "first": "L", "middle": [], "last": "Feng", "suffix": "" }, { "first": "M", "middle": [], "last": "Jansche", "suffix": "" }, { "first": "M", "middle": [], "last": "Huenerfauth", "suffix": "" }, { "first": "N", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "276--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng, L., Jansche, M., Huenerfauth, M., and Elhadad, N. (2010). A comparison of features for automatic readability assessment. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010): Posters, pages 276-284, Beijing, China. Coling 2010 Organizing Committee.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Logistic latent trait models with linear constraints", "authors": [ { "first": "G", "middle": [], "last": "Fischer", "suffix": "" } ], "year": 1983, "venue": "Psychometrika", "volume": "48", "issue": "1", "pages": "3--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fischer, G. (1983). Logistic latent trait models with linear constraints. 
Psychometrika, 48(1):3- 26.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An \"ai readability\" formula for french as a foreign language", "authors": [ { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "C", "middle": [], "last": "Fairon", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "466--477", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois, T. and Fairon, C. (2012). An \"ai readability\" formula for french as a foreign language. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012), pages 466-477, Jeju Island, Korea. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Brown corpus manual. Brown university, Rhodes island", "authors": [ { "first": "W", "middle": [ "N" ], "last": "Francis", "suffix": "" }, { "first": "H", "middle": [], "last": "Kucera", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis, W. N. and Kucera, H. (1979). Brown corpus manual. 
Brown University, Rhode Island, third edition.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The open american national corpus (oanc)", "authors": [ { "first": "N", "middle": [], "last": "Ide", "suffix": "" }, { "first": "K", "middle": [], "last": "Suderman", "suffix": "" } ], "year": 2007, "venue": "Corpus available at http://www.AmericanNationalCorpus.org/OANC/", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ide, N. and Suderman, K. (2007). The open american national corpus (oanc). Corpus available at http://www.AmericanNationalCorpus.org/OANC/ (Retrieved on October 24, 2012).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A convex formulation for learning from crowds", "authors": [ { "first": "H", "middle": [], "last": "Kajino", "suffix": "" }, { "first": "Y", "middle": [], "last": "Tsuboi", "suffix": "" }, { "first": "H", "middle": [], "last": "Kashima", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 26th Conference on Artificial Intelligence (AAAI-2012)", "volume": "", "issue": "", "pages": "73--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kajino, H., Tsuboi, Y., and Kashima, H. (2012). A convex formulation for learning from crowds.
In Proceedings of the 26th Conference on Artificial Intelligence (AAAI-2012), pages 73-79, Toronto, Ontario, Canada.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning to predict readability using diverse linguistic features", "authors": [ { "first": "R", "middle": [], "last": "Kate", "suffix": "" }, { "first": "X", "middle": [], "last": "Luo", "suffix": "" }, { "first": "S", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "M", "middle": [], "last": "Franz", "suffix": "" }, { "first": "R", "middle": [], "last": "Florian", "suffix": "" }, { "first": "R", "middle": [], "last": "Mooney", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "C", "middle": [], "last": "Welty", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "546--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kate, R., Luo, X., Patwardhan, S., Franz, M., Florian, R., Mooney, R., Roukos, S., and Welty, C. (2010). Learning to predict readability using diverse linguistic features. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 546-554, Beijing, China. Coling 2010 Organizing Committee.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Word maturity: Computational modeling of word knowledge", "authors": [ { "first": "K", "middle": [], "last": "Kireyev", "suffix": "" }, { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011)", "volume": "", "issue": "", "pages": "299--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kireyev, K. and Landauer, T. K. (2011). Word maturity: Computational modeling of word knowledge.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), pages 299-308, Portland, Oregon, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A vocabulary-size test of controlled productive ability. Language testing", "authors": [ { "first": "B", "middle": [], "last": "Laufer", "suffix": "" }, { "first": "P", "middle": [], "last": "Nation", "suffix": "" } ], "year": 1999, "venue": "", "volume": "16", "issue": "", "pages": "33--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laufer, B. and Nation, P. (1999). A vocabulary-size test of controlled productive ability. Language Testing, 16(1):33-51.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "On the limited memory BFGS method for large scale optimization", "authors": [ { "first": "D", "middle": [], "last": "Liu", "suffix": "" }, { "first": "J", "middle": [], "last": "Nocedal", "suffix": "" } ], "year": 1989, "venue": "Mathematical Programming", "volume": "45", "issue": "1", "pages": "503--528", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, D. and Nocedal, J. (1989). On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503-528.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An alternative to multiple choice vocabulary tests", "authors": [ { "first": "P", "middle": [], "last": "Meara", "suffix": "" }, { "first": "B", "middle": [], "last": "Buxton", "suffix": "" } ], "year": 1987, "venue": "Language Testing", "volume": "4", "issue": "2", "pages": "142--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meara, P. and Buxton, B. (1987). An alternative to multiple choice vocabulary tests.
Language Testing, 4(2):142-154.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Teaching and Learning Vocabulary", "authors": [ { "first": "I", "middle": [ "S P" ], "last": "Nation", "suffix": "" } ], "year": 1990, "venue": "Heinle and Heinle", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nation, I. S. P. (1990). Teaching and Learning Vocabulary. Heinle and Heinle, Boston, MA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "How large a vocabulary is needed for reading and listening? Canadian Modern Language Review", "authors": [], "year": 2006, "venue": "", "volume": "63", "issue": "", "pages": "59--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nation, I. S. P. (2006). How large a vocabulary is needed for reading and listening? Canadian Modern Language Review, 63(1):59-82.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "libLBFGS: L-BFGS library written in C. Software available at http://www.chokkan.org/software/liblbfgs/", "authors": [ { "first": "N", "middle": [], "last": "Okazaki", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Okazaki, N. (2007). libLBFGS: L-BFGS library written in C.
Software available at http://www.chokkan.org/software/liblbfgs/ (Retrieved on October 24, 2012).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Probabilistic Models for Some Intelligence and Attainment Tests", "authors": [ { "first": "G", "middle": [], "last": "Rasch", "suffix": "" } ], "year": 1960, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rasch, G. (1960). Probabilistic Models for Some Intelligence and Attainment Tests. Danish Institute for Educational Research, Copenhagen.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Developing and exploring the behaviour of two new versions of the vocabulary levels test. Language Testing", "authors": [ { "first": "N", "middle": [], "last": "Schmitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Schmitt", "suffix": "" }, { "first": "C", "middle": [], "last": "Clapham", "suffix": "" } ], "year": 2001, "venue": "", "volume": "18", "issue": "", "pages": "55--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schmitt, N., Schmitt, D., and Clapham, C. (2001). Developing and exploring the behaviour of two new versions of the vocabulary levels test. Language Testing, 18(1):55-88.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The british national corpus", "authors": [ { "first": "Bnc", "middle": [], "last": "The", "suffix": "" }, { "first": "", "middle": [], "last": "Consortium", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The BNC Consortium (2007). The british national corpus, version 3 (bnc xml edition).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Distributed by Oxford University Computing Services on behalf of the BNC Consortium.
URL: http://www.natcorp.ox.ac.uk/ (Retrieved on October 26", "authors": [], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Distributed by Oxford University Computing Services on behalf of the BNC Consortium. URL: http://www.natcorp.ox.ac.uk/ (Retrieved on October 26, 2012).", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Two problem settings; (a) in-matrix, (b) out-of-sample.", "type_str": "figure", "num": null }, "FIGREF1": { "uris": null, "text": "Learner-specific word difficulty.", "type_str": "figure", "num": null }, "FIGREF2": { "uris": null, "text": "illustrates the difference between the shared word difficulty and learner-specific word difficulty. On the left side of the difficulty axis, words are plotted according to difficulty. On the", "type_str": "figure", "num": null }, "TABREF0": { "type_str": "table", "text": "used. The dataset was created in Japan in January 2009. Sixteen English as a second language learners participated in the creation of this dataset. Most were graduate students of the University of Tokyo, and Japanese was the native language of most of them.", "html": null, "content": "
Corpus name | Type of English | Size (in tokens) | Description
British National Corpus (BNC) (The BNC Consortium, 2007) | British | 100 mil. | General corpus
The Corpus of Contemporary American English (COCA) (Davies, 2011) | American | 450 mil. | General corpus
Open American National Corpus (OANC) (Ide and Suderman, 2007) | American | 14 mil. | General corpus
Brown corpus (Francis and Kucera, 1979) | American | 1 mil. | General corpus
Google 1-gram
", "num": null }, "TABREF2": { "type_str": "table", "text": "Top 30 words with largest variances Var(v) in descending order. Large Var(v) suggests large learner-specificity. Japanese is the native language (L1) of this dataset.", "html": null, "content": "", "num": null } } } }