{ "paper_id": "P19-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:30:34.521938Z" }, "title": "Manipulating the Difficulty of C-Tests", "authors": [ { "first": "Ji-Ung", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "Ubiquitous Knowledge Processing (UKP) Lab and Research Training Group", "institution": "Technische Universit\u00e4t Darmstadt", "location": { "country": "Germany" } }, "email": "" }, { "first": "Erik", "middle": [], "last": "Schwan", "suffix": "", "affiliation": { "laboratory": "Ubiquitous Knowledge Processing (UKP) Lab and Research Training Group", "institution": "Technische Universit\u00e4t Darmstadt", "location": { "country": "Germany" } }, "email": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "", "affiliation": { "laboratory": "Ubiquitous Knowledge Processing (UKP) Lab and Research Training Group", "institution": "Technische Universit\u00e4t Darmstadt", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose two novel manipulation strategies for increasing and decreasing the difficulty of C-tests automatically. This is a crucial step towards generating learner-adaptive exercises for self-directed language learning and preparing language assessment tests. To reach the desired difficulty level, we manipulate the size and the distribution of gaps based on absolute and relative gap difficulty predictions. We evaluate our approach in corpus-based experiments and in a user study with 60 participants. We find that both strategies are able to generate C-tests with the desired difficulty level.", "pdf_parse": { "paper_id": "P19-1035", "_pdf_hash": "", "abstract": [ { "text": "We propose two novel manipulation strategies for increasing and decreasing the difficulty of C-tests automatically. This is a crucial step towards generating learner-adaptive exercises for self-directed language learning and preparing language assessment tests. To reach the desired difficulty level, we manipulate the size and the distribution of gaps based on absolute and relative gap difficulty predictions. We evaluate our approach in corpus-based experiments and in a user study with 60 participants. We find that both strategies are able to generate C-tests with the desired difficulty level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Learning languages is of utmost importance in an international society and formulated as a major political goal by institutions such as the European Council, who called for action to \"teaching at least two foreign languages\" (EC, 2002, p. 20) . But also beyond Europe, there is a huge demand for language learning worldwide due to increasing globalization, digital communication, and migration.", "cite_spans": [ { "start": 225, "end": 242, "text": "(EC, 2002, p. 20)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Among multiple different learning activities required for effective language learning, we study one particular type of exercise in this paper: Ctests are a special type of cloze test in which the second half of every second word in a given text is replaced by a gap (Klein-Braley and Raatz, 1982) . Figure 1 (a) shows an example. To provide context, the first and last sentences of the text do not contain any gaps. 
C-tests rely on the reduced redundancy principle (Spolsky, 1969) arguing that a language typically employs more linguistic information than theoretically necessary to communicate unambiguously. Proficient speakers intuitively understand an utterance even if the level of redundancy is reduced (e.g., when replacing a word's suffix with a gap), whereas learners typically rely on the redundant signal to extrapolate the meaning of an utterance.", "cite_spans": [ { "start": 266, "end": 296, "text": "(Klein-Braley and Raatz, 1982)", "ref_id": null }, { "start": 465, "end": 480, "text": "(Spolsky, 1969)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 299, "end": 311, "text": "Figure 1 (a)", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Besides general vocabulary knowledge, C-tests require orthographic, morphologic, syntactic, and semantic competencies (Chapelle, 1994) to correctly fill in all gaps, which make them a frequently used tool for language assessment (e.g., placement tests). Given that C-tests can be easily generated automatically by introducing gaps into an arbitrary text and that there is usually only a single correct answer per gap given its context, C-tests are also relevant for self-directed language learning and massive open online courses (MOOC), where largescale personalized exercise generation is necessary.", "cite_spans": [ { "start": 118, "end": 134, "text": "(Chapelle, 1994)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A crucial question for such tasks is predicting and manipulating the difficulty of a C-test. For language assessment, it is important to generate C-tests with a certain target difficulty to allow for comparison across multiple assessments. For selfdirected language learning and MOOCs, it is important to adapt the difficulty to the learner's current skill level, as an exercise should be neither too easy nor too hard so as to maximize the learning effect and avoid boredom and frustration (Vygotsky, 1978) . Automatic difficulty prediction of C-tests is hard, even for humans, which is why there have been many attempts to theoretically explain C-test difficulty (e.g., Sigott, 1995) and to model features used in machine learning systems for automatic difficulty prediction (e.g., Beinborn et al., 2014) .", "cite_spans": [ { "start": 491, "end": 507, "text": "(Vygotsky, 1978)", "ref_id": "BIBREF27" }, { "start": 672, "end": 685, "text": "Sigott, 1995)", "ref_id": "BIBREF21" }, { "start": 784, "end": 806, "text": "Beinborn et al., 2014)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While state-of-the-art systems produce good prediction results compared to humans (Beinborn, 2016) , there is yet no work on automatically manipulating the difficulty of C-tests. Instead, C-tests are generated according to a fixed scheme and manually post-edited by teachers, who might use the predictions as guidance. But this procedure is extremely time-consuming for language assessment and no option for large-scale self-directed learning.", "cite_spans": [ { "start": 82, "end": 98, "text": "(Beinborn, 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose and evaluate two strategies for automatically changing the gaps of a C-test in order to reach a given target difficulty. 
Our first Figure 1: C-tests with (a) standard gap scheme, (b) manipulated gap position, and (c) manipulated gap size strategy varies the distribution of the gaps in the underlying text and our second strategy learns to decide to increase or decrease a gap in order to make the test easier or more difficult. Our approach breaks away from the previously fixed C-test creation scheme and explores new ways of motivating learners by using texts they are interested in and generating tests from them at the appropriate level of difficulty. We evaluate our strategies both automatically and in a user study with 60 participants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In language learning research, there is vast literature on cloze tests. For example, Taylor (1953) studies the relation of cloze tests and readability. In contrast to C-tests (Klein-Braley and Raatz, 1982) , cloze tests remove whole words to produce a gap leading to more ambiguous solutions. Chapelle and Abraham (1990) contrast four types of cloze tests, including fixed-ratio cloze tests replacing every i th word with a gap, rational cloze tests that allow selecting the words to replace according to the language trait that should be assessed, multiple-choice tests, and C-tests. Similar to our work, they conduct a user study and measure the difficulty posed by the four test types. They find that cloze tests replacing entire words with a gap are more difficult than C-tests or multiplechoice tests. In our work, we go beyond this by not only varying between gaps spanning the entire word (cloze test) or half of the word (C-test), but also changing the size of the C-test gaps. Laufer and Nation (1999) propose using C-tests to assess vocabulary knowledge. To this end, they manually construct C-tests with only a single gap, but use larger gaps than half of the word's letters. Our work is different to these previous works, since we test varying positions and sizes for C-test gaps and, more importantly, we aim at manipulating the difficulty of a C-test automatically by learning to predict the difficulty of the gaps and how their manipulation affects the difficulty.", "cite_spans": [ { "start": 85, "end": 98, "text": "Taylor (1953)", "ref_id": "BIBREF24" }, { "start": 175, "end": 205, "text": "(Klein-Braley and Raatz, 1982)", "ref_id": null }, { "start": 293, "end": 320, "text": "Chapelle and Abraham (1990)", "ref_id": "BIBREF4" }, { "start": 986, "end": 1010, "text": "Laufer and Nation (1999)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Previous work on automatically controlling and manipulating test difficulty has largely focused on multiple-choice tests by generating appropriate distractors (i.e., incorrect solutions). Wojatzki et al. (2016) avoid ambiguity of their generated distractors, Hill and Simha (2016) fit them to the context, and Perez and Cuadros (2017) consider multiple languages. Further work by Zesch and Melamud (2014) , Beinborn (2016) , and Lee and Luo (2016) employ word difficulty, lexical substitution, and the learner's answer history to control distractor difficulty.", "cite_spans": [ { "start": 188, "end": 210, "text": "Wojatzki et al. 
(2016)", "ref_id": "BIBREF28" }, { "start": 259, "end": 280, "text": "Hill and Simha (2016)", "ref_id": "BIBREF9" }, { "start": 380, "end": 404, "text": "Zesch and Melamud (2014)", "ref_id": "BIBREF29" }, { "start": 407, "end": 422, "text": "Beinborn (2016)", "ref_id": "BIBREF1" }, { "start": 429, "end": 447, "text": "Lee and Luo (2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For C-tests, Kamimoto (1993) and Sigott (2006) study features of hand-crafted tests that influence the difficulty, and Beinborn et al. (2014) and Beinborn (2016) propose an automatic approach to estimate C-test difficulty, which we use as a starting point for our work.", "cite_spans": [ { "start": 13, "end": 28, "text": "Kamimoto (1993)", "ref_id": "BIBREF10" }, { "start": 33, "end": 46, "text": "Sigott (2006)", "ref_id": "BIBREF22" }, { "start": 119, "end": 141, "text": "Beinborn et al. (2014)", "ref_id": "BIBREF0" }, { "start": 146, "end": 161, "text": "Beinborn (2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another related field of research in computerassisted language learning is readability assessment and, subsequently, text simplification. There exists ample research on predicting the reading difficulty for various learner groups (Hancke et al., 2012; Collins-Thompson, 2014; Pil\u00e1n et al., 2014) . A specific line of research focuses on reducing the reading difficulty by text simplification (Chandrasekar et al., 1996) . By reducing complex texts or sentences to simpler ones, more texts are made accessible for less proficient learners. This is done either on a word level by substituting difficult words with easier ones (e.g., Kilgarriff et al., 2014) or on a sentence level (Vajjala and Meurers, 2014) . More recent work also explores sequence-to-sequence neural network architectures for this task (Nisioi et al., 2017) . Although the reading difficulty of a text partly contributes to the overall exercise difficulty of C-tests, there are many other factors with a substantial influence (Sigott, 1995) . In particular, we can generate many different C-tests from the same text and thus reading difficulty and text simplification alone are not sufficient to determine and manipulate the difficulty of C-tests. ", "cite_spans": [ { "start": 230, "end": 251, "text": "(Hancke et al., 2012;", "ref_id": "BIBREF8" }, { "start": 252, "end": 275, "text": "Collins-Thompson, 2014;", "ref_id": "BIBREF5" }, { "start": 276, "end": 295, "text": "Pil\u00e1n et al., 2014)", "ref_id": "BIBREF19" }, { "start": 392, "end": 419, "text": "(Chandrasekar et al., 1996)", "ref_id": "BIBREF2" }, { "start": 631, "end": 655, "text": "Kilgarriff et al., 2014)", "ref_id": "BIBREF11" }, { "start": 679, "end": 706, "text": "(Vajjala and Meurers, 2014)", "ref_id": "BIBREF25" }, { "start": 804, "end": 825, "text": "(Nisioi et al., 2017)", "ref_id": "BIBREF17" }, { "start": 994, "end": 1008, "text": "(Sigott, 1995)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We define a C-test T = (u, w 1 , . . . , w 2n , v, G) as a tuple of left and right context u and v (typically one sentence) enframing 2n words w i where n = |G| is the number of gaps in the gap set G. In each gap g = (i, ) \u2208 G, the last characters of word w i are replaced by a blank for the learners to fill in. 
Klein-Braley and Raatz (1982) propose the default gap generation scheme DEF with G = {(2j,", "cite_spans": [ { "start": 313, "end": 342, "text": "Klein-Braley and Raatz (1982)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": "|w 2j | 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": ") | 1 \u2264 j \u2264 n} in order to trim the (larger) second half of every second word. Single-letter words, numerals, and punctuation are not counted as words w i and thus never contain gaps. Figure 1 (a) shows an example C-test generated with the DEF scheme.", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 196, "text": "Figure 1 (a)", "ref_id": null } ], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": "A major limitation of DEF is that the difficulty of a C-test is solely determined by the input text. Most texts, however, yield a medium difficulty (cf. section 6) and thus do not allow any adaptation to beginners or advanced learners unless they are manually postprocessed. In this paper, we therefore propose two strategies to manipulate the gap set G in order to achieve a given target difficulty \u03c4 \u2208 [0, 1] ranging from small values for beginners to high values for advanced learners. To estimate the difficulty", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": "d(T ) = 1 |G| g\u2208G d(g) of a C-test T ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": "we aggregate the predicted difficulty scores d(g) of each gap. In section 4, we reproduce the system by Beinborn (2016) modeling d(g) \u2248 e(g) as the estimated mean error rates e(g) per gap across multiple learners, and we conduct additional validation experiments on a newly acquired dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": "The core of our work is the manipulation of the gap set G in order to minimize the difference |d(T ) \u2212 \u03c4 | between the predicted test difficulty d(T ) and the requested target difficulty \u03c4 . To this end, we employ our difficulty prediction system for validation and propose a new regression setup that predicts the relative change of d(g) when manipulating the size of a gap. Figure 2 shows our system architecture: Based on a text corpus, we generate C-tests for arbitrary texts (e.g., according to the learner's interests).", "cite_spans": [], "ref_spans": [ { "start": 376, "end": 384, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": "Then, we manipulate the difficulty of the generated text by employing the difficulty prediction system in order to reach the given target difficulty \u03c4 for a learner (i.e., the estimated learner proficiency) to provide neither too easy nor too hard tests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Overview", "sec_num": "3" }, { "text": "Beinborn et al. 2014and Beinborn (2016) report state-of-the-art results for the C-test difficulty prediction task. However, there is yet no opensource implementation of their code and there is little knowledge about the performance of newer approaches. 
Therefore, we (1) conduct a reproduction study of Beinborn's (2016) system, (2) evaluate newer neural network architectures, and (3) validate the results on a newly acquired dataset.", "cite_spans": [ { "start": 24, "end": 39, "text": "Beinborn (2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "Reproduction study. We obtain the original software and data from Beinborn (2016) . This system predicts the difficulty d(g) for each gap within a Ctest using a support vector machine (SVM; Vapnik, 1998) with 59 hand-crafted features. The proposed features are motivated by four factors which are deemed important for assessing the gap difficulty: item dependency, candidate ambiguity, word difficulty, and text difficulty. We use the same data (819 filled C-tests), metrics, and setup as Beinborn (2016) . That is, we perform leave-one-out cross validation (LOOCV) and measure the Pearson correlation \u03c1, the rooted mean squared error RMSE, and the quadratic weighted kappa qw\u03ba as reported in the original work.", "cite_spans": [ { "start": 66, "end": 81, "text": "Beinborn (2016)", "ref_id": "BIBREF1" }, { "start": 489, "end": 504, "text": "Beinborn (2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "The left hand side of table 1 shows the results of our reproduced SVM compared to the original SVM results reported by Beinborn (2016) . Even though we reuse the same code as in their original work, we observe small differences between our reproduction and the previously reported scores.", "cite_spans": [ { "start": 119, "end": 134, "text": "Beinborn (2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "We were able to trace these differences back to libraries and resources which have been updated and thus changed over time. One example is Ubuntu's system dictionary, the American English dictionary words (wamerican), on which the original system relies. We experiment with different versions of the dictionary between Ubuntu 14.04 (wamerican v.7.1.1) and 18.04 (wamerican v.2018.04.16-1) and observe differences of one or two percentage points. As a best practice, we suggest to fix the versions of all resources and avoid any system dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "Neural architectures. We compare the system with deep learning methods based on multi-layer Table 1 : Results of the difficulty prediction approaches. SVM (original) has been taken from Beinborn (2016) perceptrons (MLP) and bi-directional long shortterm memory (BiLSTM) architectures, which are able to capture non-linear feature dependencies. 1 To cope for the non-deterministic behavior of the neural networks, we repeat all experiments ten times with different random weight initializations and report the averaged results (Reimers and Gurevych, 2017) . While the MLP is trained similar as our reproduced SVM, the BiLSTM receives all gaps of a C-test as sequential input. We hypothesize that this sequence regression setup is better suited to capture gaps interdependencies. As can be seen from the Final model. We train our final SVM model on all available data (i.e., the original and the new data) and publish our source code and the trained model on GitHub. 
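To make the evaluation protocol concrete, here is a minimal sketch of the leave-one-out evaluation over C-tests, assuming a precomputed feature matrix X, gold per-gap error rates y, and an array groups mapping each gap to its C-test; the 59 hand-crafted features themselves are not reproduced here, a plain SVR stands in for the tuned SVM, and the quadratically weighted kappa (which requires discretised scores) is omitted:

import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVR

def evaluate_loocv(X, y, groups):
    # Leave one C-test out, train on the rest, predict its gap difficulties.
    preds = np.zeros_like(y)
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        model = SVR()  # stand-in for the 59-feature SVM of Beinborn (2016)
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    rho, _ = pearsonr(preds, y)
    return rho, rmse, preds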
2 Similar to Beinborn (2016), we 1 Network parameters and a description of the tuning process are provided in this paper's appendix.", "cite_spans": [ { "start": 186, "end": 201, "text": "Beinborn (2016)", "ref_id": "BIBREF1" }, { "start": 344, "end": 345, "text": "1", "ref_id": null }, { "start": 526, "end": 554, "text": "(Reimers and Gurevych, 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "2 https://github.com/UKPLab/ acl2019-ctest-difficulty-manipulation Algorithm 1 Gap selection strategy (SEL) 1: procedure GAPSELECTION(T , \u03c4 ) 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "GFULL \u2190 {(i, |w i | 2 | 1 \u2264 i \u2264 2n} 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "GSEL \u2190 \u2205 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "while |GSEL| < n do 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "G \u2264\u03c4 \u2190 {g \u2208 GFULL | d(g) \u2264 \u03c4 } 6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "if |G \u2264\u03c4 | > 0 then 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "g * \u2190 arg ming\u2208G \u2264\u03c4 |d(g) \u2212 \u03c4 | 8: GSEL \u2190 GSEL \u222a {g * } 9:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "GFULL \u2190 GFULL \\ {g * } 10:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "G>\u03c4 \u2190 {g \u2208", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "GFULL | d(g) > \u03c4 } 11:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "if |G>\u03c4 | > 0 then 12:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "g * \u2190 arg ming\u2208G >\u03c4 |d(g) \u2212 \u03c4 | 13: GSEL \u2190 GSEL \u222a {g * } 14: GFULL \u2190 GFULL \\ {g * } 15:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "return GSEL cannot openly publish our dataset due to copyright.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Prediction", "sec_num": "4" }, { "text": "Given a C-test T = (u, w 1 , . . . , w 2n , v, G) and a target difficulty \u03c4 , the goal of our manipulation strategies is to find a gap set G such that d(T ) approximates \u03c4 . A na\u00efve way to achieve this goal would be to generate C-tests for all texts in a large corpus with the DEF scheme and use the one with minimal |d(T )\u2212\u03c4 |. However, most corpora tend to yield texts of a limited difficulty range that only suit a specific learner profile (cf. section 6). 
Another drawback of the na\u00efve strategy is that it is difficult to control for the topic of the underlying text and in the worst case, the necessity to search through a whole corpus for selecting a fitting C-test. In contrast to the na\u00efve strategy, our proposed manipulation strategies are designed to be used in real time and manipulate any given C-test within 15 seconds at an acceptable quality. 3 Both strategies operate on a given text (e.g., on a topic a learner is interested in) and manipulate its gap set G in order to come close to the learner's current language skill. The first strategy varies the position of the gaps and the second strategy learns to increase or decrease the size of the gaps.", "cite_spans": [ { "start": 858, "end": 859, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "C-Test Difficulty Manipulation", "sec_num": "5" }, { "text": "The default C-test generation scheme DEF creates a gap in every second word w 2j , 1 \u2264 j \u2264 n. The core idea of our first manipulation strategy SEL is to distribute the n gaps differently among the all 2n words in order to create gaps for easier or harder words than in the default generation scheme. Therefore, we use the difficulty predic-(licensed under the Apache License 2.0).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Selection Strategy", "sec_num": "5.1" }, { "text": "3 On an Intel-i5 with 4 CPUs and 16 GB RAM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Selection Strategy", "sec_num": "5.1" }, { "text": "tion system to predict d(g) for any possible gap", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Selection Strategy", "sec_num": "5.1" }, { "text": "g \u2208 G FULL = {(i, |w i | 2 ) | 1 \u2264 i \u2264 2n} (i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Selection Strategy", "sec_num": "5.1" }, { "text": "e., assuming a gap in all words rather than in every second word). Then, we alternate between adding gaps to the resulting G SEL that are easier and harder than the preferred target difficulty \u03c4 , starting with those having a minimal difference |d(g) \u2212 \u03c4 |.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Selection Strategy", "sec_num": "5.1" }, { "text": "Algorithm 1 shows this procedure in pseudocode and figure 1 shows a C-test whose difficulty has been increased with this strategy. Note that it has selected gaps at corresponding rather than with, and soothsayers rather than the. Our proposed algorithm is optimized for runtime. An exhaustive search would require testing 2n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Selection Strategy", "sec_num": "5.1" }, { "text": "n combinations if the number of gaps is constant. For n = 20, this yields 137 billion combinations. While more advanced optimization methods might find better gap selections, we show in section 6 that our strategy achieves good results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Selection Strategy", "sec_num": "5.1" }, { "text": "Our second manipulation strategy SIZE changes the size of the gaps based on a pre-defined gap set. Increasing a gap g = (i, ) by one or more characters, yielding g = (i, + k) increases its difficulty (i.e., d(g ) \u2265 d(g)), while smaller gaps make the gap easier. We identify a major challenge in estimating the effect of increasing or decreasing the gap size on the gap difficulty. 
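(As an aside, a minimal reading of Algorithm 1 in code; predict_gap_difficulty is a hypothetical stand-in for the per-gap prediction system of section 4, and every selected gap keeps the default size of half the word.)

def select_gaps(words, tau, n, predict_gap_difficulty):
    # SEL (Algorithm 1): consider a candidate gap in every word, then
    # alternately add the easier (d <= tau) and harder (d > tau) candidate
    # whose predicted difficulty is closest to the target tau.
    remaining = {i: predict_gap_difficulty(words, i) for i in range(len(words))}
    selected = []
    while len(selected) < n and remaining:
        for cond in (lambda d: d <= tau, lambda d: d > tau):
            pool = [i for i, d in remaining.items() if cond(d)]
            if pool and len(selected) < n:
                best = min(pool, key=lambda i: abs(remaining[i] - tau))
                selected.append(best)
                del remaining[best]
    return sorted(selected)  # word indices that receive a default-size gap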
Although d(g ) could be estimated using the full difficulty prediction system, the search space is even larger than for the gap selection strategy, since each of the n gaps has |w i |\u22122 possible gap sizes to test. For n = 20 and an average word length of six, this amounts to one trillion possible combinations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "We therefore propose a new approach to predict the relative difficulty change of a gap g = (i, ) when increasing the gap size by one letter \u2206 inc (g) \u2248 d(g ) \u2212 d(g), g = (i, + 1) and correspondingly when decreasing the gap size by one letter", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "\u2206 dec (g) \u2248 d(g) \u2212 d(g ), g = (i, \u2212 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "The notion of relative difficulty change enables gap size manipulation in real time, since we do not have to invoke the full difficulty prediction system for all combinations. Instead, we can incrementally predict the effect of changing a single gap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "To predict \u2206 inc and \u2206 dec , we train two SVMs on all gap size combinations of 120 random texts from the Brown corpus (Francis, 1965) using the following features: predicted absolute gap difficulty, word length, new gap size, modified character, a Algorithm 2 Gap size strategy (SIZE) 1: procedure INCREASEDIFFICULTY(T , \u03c4 ) 2:", "cite_spans": [ { "start": 118, "end": 133, "text": "(Francis, 1965)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "GSIZE \u2190 GDEF 3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "D \u2190 d(T ) 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "while D < \u03c4 do 5:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "g * = (i, ) \u2190 arg maxg\u2208G SIZE \u2206inc(g) 6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "\u2190 + 1 7:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "D \u2190 D + \u2206inc(g) 8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "return GSIZE binary indicator if the gap is at a th sound, and logarithmic difference of alternative solutions capturing the degree of ambiguity with varying gap size. With a final set of only six features, our new models are able to approximate the relative difficulty change very well deviating from the original system's prediction only by 0.06 RMSE for \u2206 inc and 0.13 RMSE for \u2206 dec . The predictions of both models highly correlate with the predictions achieving a Pearson's \u03c1 of over 0.8. Besides achieving a much faster average runtime of 0.056 seconds for the relative model vs. 
11 seconds for the full prediction of a single change, we can invoke the relative model iteratively to estimate d(T ) for multiple changes of the gap size more efficiently.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "The final manipulation strategy then requires just a single call of the full prediction system. If d(T ) < \u03c4 , we incrementally increase the gap sizes to make T more difficult and, vice-versa, decrease the gap sizes if d(T ) > \u03c4 . In each iteration, we modify the gap with the highest relative difficulty change in order to approach the given target difficulty \u03c4 as quickly as possible. Algorithm 2 shows pseudocode for creating G size with increased difficulty (i.e., d(T ) < \u03c4 ) based on the default gap scheme DEF. The procedure for d(T ) > \u03c4 works analogously, but using \u2206 dec and decreasing the gap size. Figure 1 (c) shows a much easier version of the example C-test, in which a learner often only has to complete the last one or two letters.", "cite_spans": [], "ref_spans": [ { "start": 610, "end": 622, "text": "Figure 1 (c)", "ref_id": null } ], "eq_spans": [], "section": "Gap Size Strategy", "sec_num": "5.2" }, { "text": "To evaluate our C-test manipulation strategies, we first test their ability to cover a higher range of target difficulties than the default generation scheme and then measure how well they meet the desired target difficulty for texts from different domains. We conduct our experiments on 1,000 randomly chosen paragraphs for each of the Gutenberg (Lahiri, 2014) , Reuters (Lewis et al., 2004) , and Brown (Francis, 1965) corpora. We conduct our experiments on English, but our strategies can be adapted to many related languages. To assess the maximal difficulty range our strategies can achieve, we generate C-tests with maximal (\u03c4 = 1) and minimal target difficulty (\u03c4 = 0) for both strategies S \u2208 {SEL, SIZE}, which are also shown in figure 3 as (S, \u03c4 ). Both strategies are able to clearly increase and decrease the test difficulty in the correct direction and they succeed in substantially increasing the total difficulty range beyond DEF. While SEL is able to reach lower difficulty ranges, it has bigger issues with generating very difficult tests. This is due to its limitation to the fixed gap sizes, whereas SIZE can in some cases create large gaps that are ambiguous or even unsolvable. Since SIZE is, however, limited to the 20 predefined gaps, it shows a higher variance. Especially short gaps such as is and it cannot be made more difficult. Combining the two strategies is thus a logical next step for future work, building upon our findings for both strategies. We make similar observations on the Reuters and Gutenberg corpora and provide the respective figures in the appendix.", "cite_spans": [ { "start": 347, "end": 361, "text": "(Lahiri, 2014)", "ref_id": "BIBREF13" }, { "start": 372, "end": 392, "text": "(Lewis et al., 2004)", "ref_id": "BIBREF16" }, { "start": 405, "end": 420, "text": "(Francis, 1965)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the Manipulation System", "sec_num": "6" }, { "text": "Manipulation quality. We finally evaluate how well each strategy S reaches a given target difficulty. That is, we sample a random corpus text and \u03c4 , create the C-test using strategy S, predict the test difficulty d(T ) and measure its difference to \u03c4 using RMSE. 
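A small sketch of this measurement, where the corpus paragraphs, apply_strategy, and predict_test_difficulty are hypothetical stand-ins for the components described above:

import random
import numpy as np

def manipulation_rmse(corpus, trials, apply_strategy, predict_test_difficulty):
    # Sample a paragraph and a target difficulty, manipulate the C-test, and
    # compare the predicted test difficulty d(T) against the requested tau.
    errors = []
    for _ in range(trials):
        text = random.choice(corpus)
        tau = random.random()              # target difficulty in [0, 1]
        ctest = apply_strategy(text, tau)  # SEL or SIZE manipulation
        errors.append(predict_test_difficulty(ctest) - tau)
    return float(np.sqrt(np.mean(np.square(errors))))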
Table 2 shows the results for our three corpora. Throughout all three corpora, both manipulation strategies perform well. SEL consistently outperforms SIZE, which matches our observations from the previous experiment. Mind that these results depend on the quality of the au- Table 2 : RMSE for both strategies on each corpora with randomly sampled target difficulties \u03c4 tomatic difficulty predictions, which is why we conduct a user-based evaluation in the next section.", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 271, "text": "Table 2", "ref_id": null }, { "start": 539, "end": 546, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the Manipulation System", "sec_num": "6" }, { "text": "Hypothesis. To evaluate the effectiveness of our manipulation strategies in a real setting, we conduct a user study and analyze the difficulty of the manipulated and unmanipulated C-tests. We investigate the following hypothesis: When increasing a test's difficulty using strategy S, the participants will make more errors and judge the test harder than a default C-test and, vice versa, when decreasing a test's difficulty using S, the participants will make less errors and judge the test easier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "Experimental design. We select four different English texts from the Brown corpus and shorten them to about 100 words with keeping their paragraph structure intact. None of the four texts is particularly easy to read with an average grade level above 12 and a Flesh reading ease score ranging between 25 (very difficult) to 56 (fairly difficult).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "In the supplementary material, we provide results of an automated readability analysis using standard metrics. From the four texts, we then generate the C-tests T i , 1 \u2264 i \u2264 4 using the default generation scheme DEF. All tests contain exactly n = 20 gaps and their predicted difficulties d(T i ) are in a mid range between 0.24 and 0.28. T 1 remains unchanged in all test conditions and is used to allow the participants to familiarize with the task. For the remaining three texts, we generate an easier variant T S,dec i with target difficulty \u03c4 = 0.1 and a harder variant T S,inc i with \u03c4 = 0.5 for both strategies S \u2208 {SEL, SIZE}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "From these tests, we create 12 sequences of four C-tests that we give to the participants. Each participant receives T 1 first to familiarize with the task. Then, they receive one easy T S,dec i , one default T i , and one hard T S,inc i C-test for the same strategy S based on the texts i \u2208 {2, 3, 4} in random order without duplicates (e.g., the sequence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "T 1 T SEL,dec 2 T 3 T SEL,inc 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "). Having finished a C-test, we ask them to judge the difficulty of this test on a five-point Likert scale ranging from too easy to too hard. 
After solving the last test, we additionally collect a ranking of all four tests by their difficulty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "Data collection. We collect the data from our participants with a self-implemented web interface for solving C-tests. We create randomized credentials linked to a unique ID for each participant and obfuscate their order, such that we can distinguish them but cannot trace back their identity and thus avoid collecting any personal information. Additionally, we ask each participant for their consent on publishing the collected data. For experiments with a similar setup and task, we obtained the approval of the university's ethics commission. After login, the participants receive instructions and provide a self-assessment of their English proficiency and their time spent on language learning. The participants then solve the four successive C-tests without knowing the test difficulty or the manipulation strategy applied. They are instructed to spend a maximum of five minutes per C-test to avoid timebased effects and to prevent them from consulting external resources, which would bias the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "Participants. A total of 60 participants completed the study. We uniformly distributed the 12 test sequences (six per strategy), such that we have 30 easy, 30 default, and 30 hard C-test results for each manipulation strategy. No participant is native in English, 17 are taking language courses, and 57 have higher education or are currently university students. The frequency of their use of English varies, as we found a similar number of participants using English daily, weekly, monthly, and (almost) never in practice. An analysis of the questionnaire is provided in the paper's appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "Hypothesis testing. We evaluate our hypothesis along three dimensions: (1) the actual error rate of the participants, (2) the perceived difficulty after each individual C-test (Likert feedback), and (3) the participants' final difficulty ranking. While the latter forces the participants to provide an explicit ranking, the former allows them to rate C-tests equally difficult. We conduct significance testing at the Bonferroni-corrected \u03b1 = 0.05 2 = 0.025 for each dimension using one-tailed t-tests for the continuous error rates and one-tailed Mann-Whitney U tests for the ordinal-scaled perceived difficulties and rankings. Figure 4 shows notched boxplots of our results.", "cite_spans": [], "ref_spans": [ { "start": 628, "end": 636, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "To test our hypothesis, we first formulate a null .49, p < 10 \u22125 ) and the T S,inc i tests are significantly harder with an average error rate of 0.49 (t = \u22127.83, p < 10 \u22125 ), so we can safely reject the null hypothesis for error rates. Table 3 shows the error rates per C-test and strategy. Both SEL and SIZE are overall able to significantly (p < 0.025) increase and decrease the test's difficulty over DEF, and with the exception of T SEL, dec 4 , the effect is also statistically significant for all individual text and strategy pairs. 
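The reported statistics can be reproduced with standard tools; the sketch below (SciPy >= 1.6 for the alternative argument; variable names are illustrative) covers the easier-than-default direction, and the harder direction works analogously with alternative="greater":

from scipy.stats import ttest_ind, mannwhitneyu

ALPHA = 0.05 / 2  # Bonferroni correction over the two tested directions

def easier_than_default(errors_dec, errors_def, likert_dec, likert_def):
    # One-tailed t-test on the continuous error rates and one-tailed
    # Mann-Whitney U test on the ordinal Likert judgements: the manipulated
    # (easier) tests should score lower than the default C-tests.
    _, p_err = ttest_ind(errors_dec, errors_def, alternative="less")
    _, p_likert = mannwhitneyu(likert_dec, likert_def, alternative="less")
    return p_err < ALPHA, p_likert < ALPHA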
Figure 5 shows the 30 participants per strategy on the x-axis and their error rates in their second to fourth C-test on the y-axis. C-tests, for which we increased the difficulty (S, inc), yield more errors than C-tests with decreased difficulty (S, dec) in all cases. The easier tests also yield less errors than the test with the default scheme DEF in most cases. While hard tests often have a much higher error rate than DEF, we find some exceptions, in which the participant's error rate is close or even below the DEF error rate.", "cite_spans": [ { "start": 438, "end": 442, "text": "SEL,", "ref_id": null }, { "start": 443, "end": 448, "text": "dec 4", "ref_id": null } ], "ref_spans": [ { "start": 237, "end": 244, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 540, "end": 548, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "Regarding the perceived difficulty, we find that the participants judge the manipulated C-tests with lower d(T ) as easier on both the Likert scale (z = 6.16, p < 10 \u22125 ) and in the rankings (z = 6.59, p < 10 \u22125 ) based on the Mann-Whitney-U test. The same is true for C-tests that have been manipulated to a higher difficulty level, which the participant judge harder (z = \u22124.57, p < 10 \u22125 ) and rank higher (z = \u22123.86, p < 6 \u2022 10 \u22125 ). We therefore reject the null hypotheses for the Likert feedback and the rankings and conclude that both strategies can effectively manipulate a C-test's difficulty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "Manipulation quality. We further investigate if the strategies yield different difficulty levels. There- fore, we use two-tailed significance testing between SEL and SIZE for all three dimensions. We find that SIZE yields significantly easier C-tests than SEL in terms of error rates (p = 0.0014) and Likert feedback (p = 6 \u2022 10 \u22125 ), and observe p = 0.0394 for the rankings. For increasing the difficulty, we, however, do not find significant differences between the two strategies. Since both strategies successfully modify the difficulty individually, this motivates research on combined strategies in the future. We furthermore investigate how well our strategies perform in creating C-tests with the given target difficulty \u03c4 . Table 4 shows the RMSE for e(T ) and d(T ) as well as for e(T ) and \u03c4 for both strategies. As expected, our difficulty prediction system works best for C-tests generated with DEF as they use the same scheme as C-tests in the training data. Though slightly worse than for DEF, we still find very low RMSE scores for manipulated Ctests. This is especially good when considering that the system's performance on our newly acquired dataset yields and RMSE of 0.21 (cf. section 6). Computing the RMSE with respect to our chosen target difficulties \u03c4 yields equally good results for SEL and exceptionally good results for SIZE. 
, all predictions are close to the optimum (i.e., the diagonal) and also close to the desired target difficulty \u03c4 .", "cite_spans": [], "ref_spans": [ { "start": 733, "end": 740, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "In a more detailed analysis, we find two main sources of problems demanding further investigation: First, the difficulty prediction quality when deviating from DEF and second, the increasing ambiguity in harder C-tests. However, it underestimates the d(T ) = 0.11 for T SEL, dec 4 (the same text used in figure 1), for which we found an actual error rate of 0.28. This is due to chains of four successive gaps, such as:", "cite_spans": [ { "start": 270, "end": 274, "text": "SEL,", "ref_id": null }, { "start": 275, "end": 280, "text": "dec 4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "gap g i wh w a solution is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "what we are d(g) 0.17 0.22 0.23 0.19 e(g) 0.70 0.40 0.10 0.20 As the prediction system has been trained only on DEF-generated C-tests, it underestimates d(g) for cases with limited context. It will be interesting for future work to focus on modeling gap interdependencies in C-tests deviating from DEF. Another issue we observe is that the gap size strategy might increase the ambiguity of the C-test. In the standard scheme, there is in most cases only a single correct answer per gap. In T SIZE,inc 2 , how-ever, the SIZE strategy increased the gap of the word professional to its maximal length yielding p . One participant answered popularising for this gap, which also fits the given context. We carefully checked our datasetfor other ambiguity, but only found one additional case: In T 4 , instead of the word close, 13 participants out of 30 used clear as a modifier of correspondence, which both produce meaningful contexts. Given that this case is already ambiguous in the DEF scheme yielding the gap cl , we conclude that the issue is not severe, but that the difficulty prediction system should be improved to better capture ambiguous cases; for example, by introducing collocational features weighted by their distribution within a corpus into \u2206 inc and \u2206 dec .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User-based Evaluation", "sec_num": "7" }, { "text": "In this work, we proposed two novel strategies for automatically manipulating the difficulty of C-test exercises. Our first strategy selects which words should be turned into a gap, and the second strategy learns to increase or decrease the size of the gaps. Both strategies automatically predict the difficulty of a test to make informed decisions. To this end, we reproduced previous results, compared them to neural architectures, and tested them on a newly acquired dataset. We evaluate our difficulty manipulation pipeline in a corpus-based study and with real users. We show that both strategies can effectively manipulate the C-test difficulty, as both the participants' error rates and their perceived difficulty yield statistically significant effects. 
Both strategies reach close to the desired difficulty level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Our error analysis points out important directions for future work on detecting ambiguous gaps and modeling gap interdependencies for C-tests deviating from the default generation scheme. An important observation is that manipulating the gaps' size and position does not only influence the C-test difficulty, but also addresses different competencies (e.g., requires more vocabulary knowledge or more grammatical knowledge). Future manipulation strategies that take the competencies into account have the potential to train particular skills and to better control the competencies required for a placement test. Another strand of research will be combining both strategies and deploying the manipulation strategies in a large scale testing platform that allows the system to adapt to an individual learner over time. A core advantage of our ma-nipulation strategies is that we can work with any given text and thus provide C-tests that do not only have the desired difficulty, but also integrate the learner's interest or the current topic of a language course.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" } ], "back_matter": [ { "text": "This work has been supported by the Hessian research excellence program \"Landes-Offensive zur Entwicklung Wissenschaftlich-\u00f6konomischer Exzellenz\" (LOEWE) as part of the a! -automated language instruction project under grant No. 521/17-03 and by the German Research Foundation as part of the Research Training Group \"Adaptive Preparation of Information from Heterogeneous Sources\" (AIPHES) under grant No. GRK 1994/1. We thank the anonymous reviewers for their detailed and helpful comments. We furthermore thank the language center of the Technische Universit\u00e4t Darmstadt for their cooperation and Dr. Lisa Beinborn for providing us with the code for our reproduction study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Predicting the Difficulty of Language Proficiency Tests", "authors": [ { "first": "Lisa", "middle": [], "last": "Beinborn", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "517--529", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2014. Predicting the Difficulty of Language Pro- ficiency Tests. Transactions of the Association for Computational Linguistics, 2:517-529.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Predicting and manipulating the difficulty of text-completion exercises for language learning", "authors": [ { "first": "Lisa", "middle": [ "Marina" ], "last": "Beinborn", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Marina Beinborn. 2016. Predicting and manipu- lating the difficulty of text-completion exercises for language learning. Ph.D. 
thesis, Technische Univer- sit\u00e4t Darmstadt.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Motivations and methods for text simplification", "authors": [ { "first": "Raman", "middle": [], "last": "Chandrasekar", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Doran", "suffix": "" }, { "first": "Bangalore", "middle": [], "last": "Srinivas", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 16th International Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "1041--1044", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th Inter- national Conference on Computational Linguistics (COLING): Volume 2, pages 1041-1044, Copen- hagen, Denmark.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Are C-tests valid measures for L2 vocabulary research? Second Language Research", "authors": [ { "first": "C", "middle": [ "A" ], "last": "Chapelle", "suffix": "" } ], "year": 1994, "venue": "", "volume": "10", "issue": "", "pages": "157--187", "other_ids": { "DOI": [ "10.1177/026765839401000203" ] }, "num": null, "urls": [], "raw_text": "C. A. Chapelle. 1994. Are C-tests valid measures for L2 vocabulary research? Second Language Re- search, 10(2):157-187.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Cloze method: what difference does it make?", "authors": [ { "first": "Carol", "middle": [ "A" ], "last": "Chapelle", "suffix": "" }, { "first": "Roberta", "middle": [ "G" ], "last": "Abraham", "suffix": "" } ], "year": 1990, "venue": "", "volume": "7", "issue": "", "pages": "121--146", "other_ids": { "DOI": [ "10.1177/026553229000700201" ] }, "num": null, "urls": [], "raw_text": "Carol A. Chapelle and Roberta G. Abraham. 1990. Cloze method: what difference does it make? Lan- guage Testing, 7(2):121-146.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Computational assessment of text readability: A survey of current and future research", "authors": [ { "first": "Kevyn", "middle": [], "last": "Collins-Thompson", "suffix": "" } ], "year": 2014, "venue": "International Journal of Applied Linguistics -Special Issue on Recent Advances in Automatic Readability Assessment and Text Simplification", "volume": "165", "issue": "2", "pages": "97--135", "other_ids": { "DOI": [ "10.1075/itl.165.2.01col" ] }, "num": null, "urls": [], "raw_text": "Kevyn Collins-Thompson. 2014. Computational as- sessment of text readability: A survey of current and future research. International Journal of Applied Linguistics -Special Issue on Recent Advances in Automatic Readability Assessment and Text Simplifi- cation, 165(2):97-135.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Presidency Conclusions. Barcelona European Council 15 and 16", "authors": [ { "first": "", "middle": [], "last": "Ec", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "EC. 2002. Presidency Conclusions. Barcelona Euro- pean Council 15 and 16 March 2002. 
Report SN 100/1/02 REV 1, Council of the European Union.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A standard corpus of edited present-day american english", "authors": [ { "first": "W", "middle": [], "last": "", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Francis", "suffix": "" } ], "year": 1965, "venue": "College English", "volume": "26", "issue": "4", "pages": "267--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Nelson Francis. 1965. A standard corpus of edited present-day american english. College English, 26(4):267-273.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Readability classification for german using lexical, syntactic, and morphological features", "authors": [ { "first": "Julia", "middle": [], "last": "Hancke", "suffix": "" }, { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "1063--1080", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hancke, Sowmya Vajjala, and Detmar Meurers. 2012. Readability classification for german using lexical, syntactic, and morphological features. In Proceedings of the 24th International Conference on Computational Linguistics (COLING), pages 1063- 1080, Mumbai, India.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic generation of context-based fill-in-the-blank exercises using co-occurrence likelihoods and google n-grams", "authors": [ { "first": "Jennifer", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Simha", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (BEA)", "volume": "", "issue": "", "pages": "23--30", "other_ids": { "DOI": [ "10.18653/v1/W16-0503" ] }, "num": null, "urls": [], "raw_text": "Jennifer Hill and Rahul Simha. 2016. Automatic gener- ation of context-based fill-in-the-blank exercises us- ing co-occurrence likelihoods and google n-grams. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 23-30, San Diego, CA, USA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Tailoring the Test to Fit the Students: Improvement of the C-Test through Classical Item Analysis. Language Laboratory", "authors": [ { "first": "Tadamitsu", "middle": [], "last": "Kamimoto", "suffix": "" } ], "year": 1993, "venue": "", "volume": "30", "issue": "", "pages": "47--61", "other_ids": { "DOI": [ "10.24539/llaj.30.0_47" ] }, "num": null, "urls": [], "raw_text": "Tadamitsu Kamimoto. 1993. Tailoring the Test to Fit the Students: Improvement of the C-Test through Classical Item Analysis. Language Laboratory, 30:47-61.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Corpus-based vocabulary lists for language learners for nine languages. 
Language Resources and Evaluation", "authors": [ { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "Frieda", "middle": [], "last": "Charalabopoulou", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Gavrilidou", "suffix": "" }, { "first": "Janne", "middle": [], "last": "Bondi Johannessen", "suffix": "" }, { "first": "Saussan", "middle": [], "last": "Khalil", "suffix": "" }, { "first": "Sofie", "middle": [ "Johansson" ], "last": "Kokkinakis", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Lew", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Sharoff", "suffix": "" }, { "first": "Ravikiran", "middle": [], "last": "Vadlapudi", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Volodina", "suffix": "" } ], "year": 2014, "venue": "", "volume": "48", "issue": "", "pages": "121--163", "other_ids": { "DOI": [ "10.1007/s10579-013-9251-2" ] }, "num": null, "urls": [], "raw_text": "Adam Kilgarriff, Frieda Charalabopoulou, Maria Gavrilidou, Janne Bondi Johannessen, Saussan Khalil, Sofie Johansson Kokkinakis, Robert Lew, Serge Sharoff, Ravikiran Vadlapudi, and Elena Volo- dina. 2014. Corpus-based vocabulary lists for lan- guage learners for nine languages. Language Re- sources and Evaluation, 48(1):121-163.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Complexity of Word Collocation Networks: A Preliminary Structural Analysis", "authors": [ { "first": "Shibamouli", "middle": [], "last": "Lahiri", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "96--105", "other_ids": { "DOI": [ "10.3115/v1/E14-3011" ] }, "num": null, "urls": [], "raw_text": "Shibamouli Lahiri. 2014. Complexity of Word Collo- cation Networks: A Preliminary Structural Analy- sis. In Proceedings of the Student Research Work- shop at the 14th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 96-105, Gothenburg, Sweden.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A vocabulary-size test of controlled productive ability", "authors": [ { "first": "Batia", "middle": [], "last": "Laufer", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Nation", "suffix": "" } ], "year": 1999, "venue": "Language Testing", "volume": "16", "issue": "1", "pages": "33--51", "other_ids": { "DOI": [ "10.1177/026553229901600103" ] }, "num": null, "urls": [], "raw_text": "Batia Laufer and Paul Nation. 1999. A vocabulary-size test of controlled productive ability. Language Test- ing, 16(1):33-51.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Personalized exercises for preposition learning", "authors": [ { "first": "John", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Mengqi", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL): System Demonstrations", "volume": "", "issue": "", "pages": "115--120", "other_ids": { "DOI": [ "10.18653/v1/P16-4020" ] }, "num": null, "urls": [], "raw_text": "John Lee and Mengqi Luo. 2016. Personalized exer- cises for preposition learning. 
In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (ACL): System Demonstrations, pages 115-120, Berlin, Germany.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "RCV1: A New Benchmark Collection for Text Categorization Research", "authors": [ { "first": "David", "middle": [ "D" ], "last": "Lewis", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tony", "middle": [ "G" ], "last": "Rose", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Li", "suffix": "" } ], "year": 2004, "venue": "Journal of Machine Learning Research", "volume": "5", "issue": "", "pages": "361--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A New Benchmark Collection for Text Categorization Research. Journal of Machine Learning Research, 5(Apr):361-397.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Exploring neural text simplification models", "authors": [ { "first": "Sergiu", "middle": [], "last": "Nisioi", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Sanja\u0161tajner", "suffix": "" }, { "first": "Liviu", "middle": [ "P" ], "last": "Ponzetto", "suffix": "" }, { "first": "", "middle": [], "last": "Dinu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL): Short Papers", "volume": "2", "issue": "", "pages": "85--91", "other_ids": { "DOI": [ "10.18653/v1/P17-2014" ] }, "num": null, "urls": [], "raw_text": "Sergiu Nisioi, Sanja\u0160tajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text sim- plification models. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (ACL): Short Papers, volume 2, pages 85-91, Vancouver, Canada.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Multilingual call framework for automatic language exercise generation from free text", "authors": [ { "first": "Naiara", "middle": [], "last": "Perez", "suffix": "" }, { "first": "Montse", "middle": [], "last": "Cuadros", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL): Software Demonstrations", "volume": "", "issue": "", "pages": "49--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naiara Perez and Montse Cuadros. 2017. Multilin- gual call framework for automatic language exer- cise generation from free text. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL): Software Demonstrations, pages 49-52, Valencia, Spain.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Rule-based and machine learning approaches for second language sentence-level readability", "authors": [ { "first": "Elena", "middle": [], "last": "Ildik\u00f3 Pil\u00e1n", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Volodina", "suffix": "" }, { "first": "", "middle": [], "last": "Johansson", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications (BEA)", "volume": "", "issue": "", "pages": "174--184", "other_ids": { "DOI": [ "10.3115/v1/W14-1821" ] }, "num": null, "urls": [], "raw_text": "Ildik\u00f3 Pil\u00e1n, Elena Volodina, and Richard Johansson. 2014. 
Rule-based and machine learning approaches for second language sentence-level readability. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 174-184, Baltimore, MD, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "338--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting Score Distributions Makes a Difference: Perfor- mance Study of LSTM-networks for Sequence Tag- ging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 338-348, Copenhagen, Denmark.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "AAA: Arbeiten aus Anglistik und Amerikanistik", "authors": [ { "first": "G\u00fcnther", "middle": [], "last": "Sigott", "suffix": "" } ], "year": 1995, "venue": "", "volume": "20", "issue": "", "pages": "43--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnther Sigott. 1995. The C-Test: Some Factors of Difficulty. AAA: Arbeiten aus Anglistik und Amerikanistik, 20(1):43-53.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "How fluid is the c-test construct?", "authors": [ { "first": "G\u00fcnther", "middle": [], "last": "Sigott", "suffix": "" } ], "year": 2006, "venue": "Der C-Test: Theorie, Empirie, Anwendungen -The C-Test: Theory, Empirical Research, Applications, Language Testing and Evaluation", "volume": "", "issue": "", "pages": "139--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnther Sigott. 2006. How fluid is the c-test construct? In Der C-Test: Theorie, Empirie, Anwendungen - The C-Test: Theory, Empirical Research, Applica- tions, Language Testing and Evaluation, pages 139- 146. Frankfurt am Main: Peter Lang.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Reduced Redundancy as a Language Testing Tool", "authors": [ { "first": "Bernard", "middle": [], "last": "Spolsky", "suffix": "" } ], "year": 1969, "venue": "Applications of linguistics", "volume": "", "issue": "", "pages": "383--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bernard Spolsky. 1969. Reduced Redundancy as a Lan- guage Testing Tool. In G.E. Perren and J.L.M. Trim, editors, Applications of linguistics, pages 383-390. Cambridge: Cambridge University Press.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Cloze Procedure\": A New Tool for Measuring Readability", "authors": [ { "first": "Wilson", "middle": [ "L" ], "last": "Taylor", "suffix": "" } ], "year": 1953, "venue": "", "volume": "30", "issue": "", "pages": "415--433", "other_ids": { "DOI": [ "10.1177/107769905303000401" ] }, "num": null, "urls": [], "raw_text": "Wilson L. Taylor. 1953. \"Cloze Procedure\": A New Tool for Measuring Readability. 
Journalism Bul- letin, 30(4):415-433.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Assessing the relative reading level of sentence pairs for text simplification", "authors": [ { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", "volume": "", "issue": "", "pages": "288--297", "other_ids": { "DOI": [ "10.3115/v1/E14-1031" ] }, "num": null, "urls": [], "raw_text": "Sowmya Vajjala and Detmar Meurers. 2014. Assessing the relative reading level of sentence pairs for text simplification. In Proceedings of the 14th Confer- ence of the European Chapter of the Association for Computational Linguistics (EACL), pages 288-297, Gothenburg, Sweden.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Statistical Learning Theory", "authors": [ { "first": "N", "middle": [], "last": "Vladimir", "suffix": "" }, { "first": "", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir N. Vapnik. 1998. Statistical Learning Theory. New York: Wiley.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Mind in society: The development of higher psychological processes", "authors": [ { "first": "Lev", "middle": [], "last": "Vygotsky", "suffix": "" } ], "year": 1978, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Vygotsky. 1978. Mind in society: The develop- ment of higher psychological processes. Cambridge: Harvard University Press.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Bundled gap filling: A new paradigm for unambiguous cloze exercises", "authors": [ { "first": "Michael", "middle": [], "last": "Wojatzki", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (BEA)", "volume": "", "issue": "", "pages": "172--181", "other_ids": { "DOI": [ "10.18653/v1/W16-0519" ] }, "num": null, "urls": [], "raw_text": "Michael Wojatzki, Oren Melamud, and Torsten Zesch. 2016. Bundled gap filling: A new paradigm for un- ambiguous cloze exercises. In Proceedings of the 11th Workshop on Innovative Use of NLP for Build- ing Educational Applications (BEA), pages 172- 181, San Diego, CA, USA.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Automatic generation of challenging distractors using contextsensitive inference rules", "authors": [ { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications (BEA)", "volume": "", "issue": "", "pages": "143--148", "other_ids": { "DOI": [ "10.3115/v1/W14-1817" ] }, "num": null, "urls": [], "raw_text": "Torsten Zesch and Oren Melamud. 2014. Automatic generation of challenging distractors using context- sensitive inference rules. 
In Proceedings of the Ninth Workshop on Innovative Use of NLP for Build- ing Educational Applications (BEA), pages 143- 148, Baltimore, MD, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Proposed system architecture", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Difficulty distribution of exercises generated with DEF, SEL, and SIZE for extreme \u03c4 values Difficulty range. The black -marked line of figure 3 shows the distribution of d(T ) based on our difficulty prediction system when creating a C-test with the default generation scheme DEF for all our samples of the Brown corpus. The vast majority of C-tests range between 0.15 and 0.30 with a predominant peak at 0.22.", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Notched boxplots for the (a) observed error rates, (b) Likert feedback, and (c) the participants'", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "Error rates per participant and strategy", "num": null }, "FIGREF5": { "type_str": "figure", "uris": null, "text": "Predicted difficulties d(T ) vs the actual error rates e(T ).", "num": null }, "FIGREF6": { "type_str": "figure", "uris": null, "text": "Figure 6 displays d(T ) in comparison to e(T ) for each individual text and strategy. With the exception of T SEL,inc 2 and T SEL,dec 4", "num": null }, "TABREF1": { "content": "
Model            | Original data: \u03c1  RMSE  qw\u03ba | New data: \u03c1  RMSE  qw\u03ba
SVM (original)   | .50  .23  .44 | -    -    -
SVM (reproduced) | .49  .24  .47 | .50  .21  .39
MLP              | .42  .25  .31 | .41  .22  .25
BiLSTM           | .49  .24  .35 | .39  .24  .27
", "html": null, "text": "Performance of the SVM and the two neural models (\u03c1, RMSE, qw\u03ba) on the original data and the new data.", "type_str": "table", "num": null }, "TABREF2": { "content": "
Experiments
", "html": null, "text": "table, the results of the neural architectures are, however, consistently worse than the SVM results. We analyze the RMSE on the train and development sets and observe a low bias, but a high variance. Thus, we conclude that although neural architectures are able to perform well for this task, they lack a sufficient amount of data to generalize. on new data. To validate the results and assess the robustness of the difficulty prediction system, we have acquired a new C-test dataset from our university's language center. 803 participants of placement tests for English courses solved five C-tests (from a pool of 53 different Ctests) with 20 gaps each. Similar to the data used by Beinborn (2016), we use the error rates e(g) for each gap as the d(g) the methods should predict.The right-hand side of table 1 shows the performance of our SVM and the two neural methods. The results indicate that the SVM setup is wellsuited for the difficulty prediction task and that it successfully generalizes to new data.", "type_str": "table", "num": null }, "TABREF5": { "content": "
hypothesis that (a) the mean error rate, (b) the median perceived difficulty (Likert feedback), and (c) the median rank of the manipulated tests equal the default tests. While the participants have an average error rate of 0.3 on default C-tests, the T^{S,dec}_i tests are significantly easier with an average error rate of 0.15 (t = 7
", "html": null, "text": "Mean error rates e(T ) per text and strategy. Results marked with * deviate significantly from DEF", "type_str": "table", "num": null }, "TABREF8": { "content": "", "html": null, "text": "RMSE between the actual difficulty e(T ) and predicted difficulty d(T ) as well as target difficulty \u03c4 .", "type_str": "table", "num": null } } } }