{ "paper_id": "D12-1043", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:24:07.411565Z" }, "title": "An \"AI readability\" formula for French as a foreign language", "authors": [ { "first": "Thomas", "middle": [], "last": "Fran\u00e7ois", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "3401 Walnut Street Suite 400A Room 423 Philadelphia", "postCode": "19104", "region": "PA", "country": "USA" } }, "email": "" }, { "first": "C\u00e9drick", "middle": [], "last": "Fairon", "suffix": "", "affiliation": { "laboratory": "", "institution": "CENTAL", "location": { "addrLine": "UCLouvain Place Blaise Pascal, 1", "postCode": "1348", "settlement": "Louvain-la-Neuve", "country": "Belgium" } }, "email": "cedrick.fairon@uclouvain.be" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper present a new readability formula for French as a foreign language (FFL), which relies on 46 textual features representative of the lexical, syntactic, and semantic levels as well as some of the specificities of the FFL context. We report comparisons between several techniques for feature selection and various learning algorithms. Our best model, based on support vector machines (SVM), significantly outperforms previous FFL formulas. We also found that semantic features behave poorly in our case, in contrast with some previous readability studies on English as a first language.", "pdf_parse": { "paper_id": "D12-1043", "_pdf_hash": "", "abstract": [ { "text": "This paper present a new readability formula for French as a foreign language (FFL), which relies on 46 textual features representative of the lexical, syntactic, and semantic levels as well as some of the specificities of the FFL context. We report comparisons between several techniques for feature selection and various learning algorithms. Our best model, based on support vector machines (SVM), significantly outperforms previous FFL formulas. We also found that semantic features behave poorly in our case, in contrast with some previous readability studies on English as a first language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Whether in a first language (L1) or a second and foreign language (L2), learning to read has been and remains one of the major concerns of education. When a teacher wants to improve his/her students' reading skills, he/she uses reading exercises, whether there are guided or independent. For this practice to be efficient, it is necessary that the texts suit the level of students (O'Connor et al., 2002) . This condition is sometimes difficult to meet for teachers wishing to get off the beaten tracks by not using texts from levelled textbooks or readers.", "cite_spans": [ { "start": 381, "end": 404, "text": "(O'Connor et al., 2002)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this context, readability formulas have long been used to help teachers faster select texts for their students. These formulas are reproducible methods that aim at matching readers and texts relative to their reading difficulty level. The Flesch (1948) and Dale and Chall (1948) formulas are probably the best-known examples of those. They are typical of classic formulas, the first major methodological paradigm developed in the field during the 40's and 50's. 
They were kept as parsimonious as possible, using linear regression to combine two or, sometimes, three surface features, such as mean word length, mean sentence length, or the proportion of out-of-simple-vocabulary words.", "cite_spans": [ { "start": 242, "end": 255, "text": "Flesch (1948)", "ref_id": "BIBREF23" }, { "start": 260, "end": 281, "text": "Dale and Chall (1948)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Later, some scholars (Kintsch and Vipond, 1979; Redish and Selzer, 1985) argued that the classic formulas suffer from several shortcomings. These formulas only take into account superficial features, ignoring other important aspects contributing to text difficulty, such as coherence, content density, or inference load. They also omit the interactive aspect of the reading process. In the 80's, a second paradigm, inspired by structuro-cognitivist theories, intended to overcome these issues. It focused on higher textual dimensions, such as inference load (Kintsch and Vipond, 1979; Kemper, 1983), density of concepts (Kintsch and Vipond, 1979), or macrostructure (Meyer, 1982). However, these attempts did not achieve better results than the classic approach, even though they used more principled and more complex features.", "cite_spans": [ { "start": 21, "end": 47, "text": "(Kintsch and Vipond, 1979;", "ref_id": "BIBREF37" }, { "start": 48, "end": 72, "text": "Redish and Selzer, 1985)", "ref_id": "BIBREF49" }, { "start": 560, "end": 586, "text": "(Kintsch and Vipond, 1979;", "ref_id": "BIBREF37" }, { "start": 587, "end": 600, "text": "Kemper, 1983)", "ref_id": "BIBREF36" }, { "start": 623, "end": 649, "text": "(Kintsch and Vipond, 1979)", "ref_id": "BIBREF37" }, { "start": 670, "end": 683, "text": "(Meyer, 1982)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, a third paradigm, referred to as \"AI readability\" by Fran\u00e7ois (2011a), has emerged in the field. Studies within this current share three key features: the use of a large number of texts assessed by experts (coming from textbooks, simplified newspapers or web resources) as training data; the use of NLP-enabled features able to capture a wider range of readability factors; and the combination of those features through a machine learning algorithm. Since the work of Si and Callan (2001), this paradigm has spawned several studies for English (Collins-Thompson and Callan, 2005; Heilman et al., 2008; Schwarm and Ostendorf, 2005; Feng et al., 2010).", "cite_spans": [ { "start": 67, "end": 83, "text": "Fran\u00e7ois (2011a)", "ref_id": "BIBREF27" }, { "start": 493, "end": 513, "text": "Si and Callan (2001)", "ref_id": "BIBREF53" }, { "start": 569, "end": 604, "text": "(Collins-Thompson and Callan, 2005;", "ref_id": "BIBREF11" }, { "start": 605, "end": 626, "text": "Heilman et al., 2008;", "ref_id": "BIBREF33" }, { "start": 627, "end": 655, "text": "Schwarm and Ostendorf, 2005;", "ref_id": "BIBREF52" }, { "start": 656, "end": 674, "text": "Feng et al., 2010)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, for French, the field is far from being so thriving. To our knowledge, only two \"AI readability\" formulas have been designed so far for French L1 and only one for French as a foreign language (FFL) (see Section 2).
This paper reports some experiments aimed at designing a more efficient readability model for FFL. In Section 2, we further argue why a new formula was necessary for FFL. Section 3 covers the various methodological steps required to devise the model, whose results are reported in Section 4. Finally, Section 5 discusses some interesting insights gained from this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Readability of French has never enjoyed great success: while readability studies on English date back to the 20's, it was only in 1957 that the French-speaking world discovered the field through the work of Conquet (1957). Since then, only a few studies have focused on the topic.", "cite_spans": [ { "start": 197, "end": 211, "text": "Conquet (1957)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Readability models for French", "sec_num": "2" }, { "text": "The first two French L1 formulas were adaptations of the Flesch formula (Kandel and Moles, 1958; de Landsheere, 1963). It was only with Henry (1975) that French got a model fitting the particularities of the language. Henry used cloze tests to assess the level of 60 texts from primary and secondary school textbooks and trained three formulas on this corpus. It is worth mentioning that Henry's formulas have been applied to FFL by Cornaire (1988). Around the same time, Richaudeau explored a different path, as a representative of the structuro-cognitivist paradigm. He used the number of words recalled by a subject right after reading a sentence as a device to measure understanding and provided an \"efficiency formula\" for texts (Richaudeau, 1979). Although more modern in its conception, Richaudeau's hard-to-implement formula did not achieve the same recognition in the French-speaking world as Henry's.", "cite_spans": [ { "start": 72, "end": 96, "text": "(Kandel and Moles, 1958;", "ref_id": "BIBREF35" }, { "start": 97, "end": 117, "text": "de Landsheere, 1963)", "ref_id": "BIBREF19" }, { "start": 136, "end": 148, "text": "Henry (1975)", "ref_id": "BIBREF34" }, { "start": 433, "end": 448, "text": "Cornaire (1988)", "ref_id": "BIBREF14" }, { "start": 741, "end": 759, "text": "(Richaudeau, 1979)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Readability models for French", "sec_num": "2" }, { "text": "After those two major efforts, few works followed. Two more authors are worth mentioning: Mesnager (1989), who designed a classic formula for children that drew inspiration from the Dale and Chall (1948) formula, and Daoust et al. (1996), who developed SATO-CALIBRAGE, a program assessing text difficulty from the first to the eleventh grade. The latter can be considered the first \"AI formula\" for French L1, since it made use of NLP-enabled features. It is also the last formula published for French L1, apart from the adaptation of the Collins-Thompson and Callan (2004) model to French.", "cite_spans": [ { "start": 92, "end": 107, "text": "Mesnager (1989)", "ref_id": "BIBREF41" }, { "start": 185, "end": 206, "text": "Dale and Chall (1948)", "ref_id": "BIBREF16" }, { "start": 219, "end": 239, "text": "Daoust et al. (1996)", "ref_id": "BIBREF18" }, { "start": 557, "end": 583, "text": "Thompson and Callan (2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Readability models for French", "sec_num": "2" }, { "text": "As regards French L2, the literature is even sparser.
Tharp (1939) published a first formula taking into account one particularity of the L2 context: cognates, that is, words sharing a similar form and meaning across two languages, which have a facilitating effect on reading. This idea was recently revived by Uitdenbogerd (2005), who combined a syntactic feature, the mean number of words per sentence, with the number of cognates per 100 words in her formula. Although taking into account this effect of the L1 on L2 reading is very interesting, these two studies are confined to a limited audience: English speakers learning French. As regards a more generic approach, Fran\u00e7ois (2009) recently published an \"AI formula\" for FFL, based on logistic regression and ten features. Among those, he stressed the use of verbal tense information as a way to improve performance. However, the set of features he experimented with remains limited (about 20).", "cite_spans": [ { "start": 57, "end": 69, "text": "Tharp (1939)", "ref_id": "BIBREF56" }, { "start": 321, "end": 340, "text": "Uitdenbogerd (2005)", "ref_id": "BIBREF58" }, { "start": 684, "end": 699, "text": "Fran\u00e7ois (2009)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Readability models for French", "sec_num": "2" }, { "text": "From all this, it seems clear that FFL readability needs to be addressed more thoroughly, especially if we want a generic model, able to make predictions for L2 readers with any L1 background. The rest of this paper describes one such attempt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Readability models for French", "sec_num": "2" }, { "text": "The design of an \"AI readability\" formula involves the same three steps as any classification problem. First, one needs to gather a gold-standard corpus large enough to reliably train the parameters of a learning algorithm, as described in Section 3.1. The next step, covered in Section 3.2, consists in defining a set of predictors, that is to say, linguistic characteristics of the texts that will be used to predict the difficulty level of new texts. Finally, the best subset of these predictors is combined within a learning algorithm to obtain the best possible model. Experiments at this level are reported in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design of the formula", "sec_num": "3" }, { "text": "A gold standard for readability consists of texts labelled according to their difficulty. For this, it is first necessary to choose the difficulty scale used for the labels (for English L1, it is usually the 12-grade-level scale), which also constrains the output of the formula. Then, each text has to be assessed with a method able to measure the reading comprehension level of the target population.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The corpus", "sec_num": "3.1" }, { "text": "Regarding the scale, an obvious choice for the foreign language context was the beginner/intermediate/advanced continuum, recently redefined in the Common European Framework of Reference for Languages (CEFR) (Council of Europe, 2001) as the six following levels: A1 (Breakthrough); A2 (Waystage); B1 (Threshold); B2 (Vantage); C1 (Effective Operational Proficiency) and C2 (Mastery).
This scale has now become the reference for foreign language education, at least in Europe.", "cite_spans": [ { "start": 220, "end": 232, "text": "Europe, 2001", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The corpus", "sec_num": "3.1" }, { "text": "Assessing the reading difficulty of texts with respect to a target population of readers was a more challenging issue. Several techniques have been used in the literature, the most important of which are comprehension tests, cloze tests and expert judgements. They all postulate a given population of readers, although relying on expert judgements saves the need for a sample of subjects to take a test. In that case, texts come from textbooks whose difficulty has been assessed by the publishers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The corpus", "sec_num": "3.1" }, { "text": "This last criterion is now mainstream in \"AI readability\", since it is very practical and facilitates the creation of a large corpus, but it has its own shortcomings. Studies such as van Oosten et al. (2011) found that expert agreement on a same corpus of texts might be insufficient for a classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The corpus", "sec_num": "3.1" }, { "text": "For this study, we nevertheless relied on expert judgements, since we needed a large amount of labelled texts to ensure robust statistical learning. We selected 28 FFL textbooks, published after 2001 and designed for adults or adolescents learning FFL for general purposes. From those, we extracted 2,160 texts related to a reading comprehension task and assigned to each of them the same level as the textbook it came from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The corpus", "sec_num": "3.1" }, { "text": "As expected from van Oosten et al. (2011)'s study, differences in the publishers' conception of difficulty led to heterogeneous labelling across textbooks. This heterogeneity was detected in three of the six levels (A1, A2, and B1) using an ANOVA based on two classic readability features as independent variables: the mean number of words per sentence and the mean number of letters per word (a minimal sketch of such a per-level check is given below). A subsequent qualitative analysis revealed that most of the heterogeneity came from textbooks following the new didactic approach recommended by the CEFR: the task-oriented approach, which focuses more on the task than on the text when labelling the overall reading activity. Therefore, we decided to remove this type of textbook from our corpus, which amounted to 5 books and 249 texts. The remaining 1,852 excerpts were kept for our experiments. Their distribution is displayed in Table 1 with regard to the number of texts and tokens.", "cite_spans": [], "ref_spans": [ { "start": 890, "end": 897, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "The corpus", "sec_num": "3.1" },
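{ "text": "As an illustration (this sketch is ours, not part of the original study), such a heterogeneity check can be run with a one-way ANOVA per CEFR level, comparing textbooks on the mean number of words per sentence; the naive sentence splitter and the data layout below are simplifying assumptions.
from scipy.stats import f_oneway

def mean_sentence_length(text):
    # naive split on final punctuation; the study used proper NLP preprocessing
    sentences = [s for s in text.replace('!', '.').replace('?', '.').split('.') if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

def level_is_heterogeneous(texts_by_textbook, alpha=0.05):
    # texts_by_textbook: {textbook_name: [text, ...]} for one CEFR level (hypothetical layout)
    groups = [[mean_sentence_length(t) for t in texts] for texts in texts_by_textbook.values()]
    stat, p = f_oneway(*groups)
    return p < alpha  # significant between-textbook differences -> heterogeneous labelling", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The corpus", "sec_num": null },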
{ "text": "In a second step, every text of the corpus was represented as a numeric vector of 406 features, each of them capturing one linguistic dimension of the text as a single number. Their implementation drew on two different sources of inspiration: the existing predictors in the English and French literature and the psycholinguistic literature on the reading process. The complete set was classified into four families, depending on the kind of information each one is supposed to represent. These families were: \"lexical\", \"syntactic\", \"semantic\", and \"specific to the FFL context\". Each of them was further divided into subfamilies, described in the rest of the section 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The predictors", "sec_num": "3.2" }, { "text": "Lexical features have been shown to be the most important level of information in many readability studies (Chall and Dale, 1995; Lorge, 1944). It is then not surprising that a wide range of lexical predictors have been developed in the literature. Our own set comprised the following subfamilies:", "cite_spans": [ { "start": 107, "end": 129, "text": "(Chall and Dale, 1995;", "ref_id": "BIBREF8" }, { "start": 130, "end": 142, "text": "Lorge, 1944)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Features", "sec_num": "3.2.1" }, { "text": "Statistics of lexical frequencies: the frequencies of the words in a text are a good indicator of the text's overall difficulty (Stenner, 1996). They are usually summarized via the mean, but we also tested the median, the interquartile range, as well as the 75th and 90th percentiles. We used Lexique3 (New et al., 2007) as our frequency database. It is a lexicon including about 50,000 lemmas and 125,000 inflected forms whose frequencies were obtained from movie subtitles. Since French has a rich morphology, we considered the probabilities of both lemmas and inflected forms. Moreover, following an idea from Elley (1969), we also computed the above-mentioned statistics for words of a given POS, such as content words, nouns, verbs, etc.", "cite_spans": [ { "start": 120, "end": 135, "text": "(Stenner, 1996)", "ref_id": "BIBREF55" }, { "start": 297, "end": 315, "text": "(New et al., 2007)", "ref_id": "BIBREF45" }, { "start": 607, "end": 619, "text": "Elley (1969)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Features", "sec_num": "3.2.1" }, { "text": "Percentage of words not in a reference list: part of Dale and Chall (1948)'s formula, this feature is one of the most famous in readability. For our experiments, two word lists for FFL were used: the well-known, but already dated, Gougenheim et al. (1964) list and a second one found at the end of an FFL textbook: Alter Ego (Berthet et al., 2006). Different sizes were also tested for both lists.", "cite_spans": [ { "start": 57, "end": 78, "text": "Dale and Chall (1948)", "ref_id": "BIBREF16" }, { "start": 237, "end": 261, "text": "Gougenheim et al. (1964)", "ref_id": null }, { "start": 344, "end": 366, "text": "(Berthet et al., 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Features", "sec_num": "3.2.1" }, { "text": "Word length: mean word length is another classic feature in readability (Flesch, 1948; Smith, 1961). We used various statistics based on the number of letters per word (mean, median, percentiles, etc.). N-grams models: Si and Callan (2001) showed that n-gram models can be successfully applied to readability. We thus used both a simple unigram approach based on the frequencies from Lexique3, and a more complex bigram model trained on two different corpora: the Google n-grams (Michel et al., 2011) and a corpus of newspaper articles from Le Soir amounting to 5,000,000 words 2 . Both were normalized according to the length n of the text as follows:", "cite_spans": [ { "start": 72, "end": 86, "text": "(Flesch, 1948;", "ref_id": "BIBREF23" }, { "start": 87, "end": 99, "text": "Smith, 1961)", "ref_id": "BIBREF54" }, { "start": 204, "end": 224, "text": "Si and Callan (2001)", "ref_id": "BIBREF53" }, { "start": 464, "end": 484, "text": "(Michel et al., 2011", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical Features", "sec_num": "3.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(\\text{text}) = \\frac{1}{n} \\sum_{i=1}^{n} \\log P(w_i|h)", "eq_num": "(1)" } ], "section": "N-grams models:", "sec_num": null }, { "text": "where w_i is the i-th word and h a limited history of length 0 (unigram) or 1 (bigram).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-grams models:", "sec_num": null }, { "text": "2 The smoothing algorithms used were, respectively, the simple Good-Turing algorithm (Gale and Sampson, 1995) for the unigrams and linear interpolation (Chen and Goodman, 1999) for the bigrams.", "cite_spans": [ { "start": 79, "end": 103, "text": "(Gale and Sampson, 1995)", "ref_id": "BIBREF29" }, { "start": 142, "end": 166, "text": "(Chen and Goodman, 1999)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "N-grams models:", "sec_num": null },
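{ "text": "To make equation (1) concrete, here is a minimal Python sketch (ours, not the authors' implementation); 'logprob' stands in for a smoothed estimate of log P(w|h), e.g. simple Good-Turing for unigrams or linear interpolation for bigrams.
def text_logprob(words, logprob, order=1):
    # logprob(word, history) -> log P(w | h); history is an empty tuple for unigrams
    total = 0.0
    for i, w in enumerate(words):
        h = tuple(words[max(0, i - order + 1):i])  # history of length 0 or 1
        total += logprob(w, h)
    return total / len(words)  # normalization by the text length n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-grams models:", "sec_num": null },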
{ "text": "Lexical diversity: the repetition effect is another factor known to affect the reading process (Bowers, 2000). It has mainly been implemented through the classic type-token ratio (TTR), which suffers from being dependent on text length. This is why we defined a normalized TTR, which is the mean of several TTRs computed on text fragments of equal length. This way, long texts were made comparable with short ones.", "cite_spans": [ { "start": 95, "end": 109, "text": "(Bowers, 2000)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "N-grams models:", "sec_num": null },
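{ "text": "A minimal sketch of this normalized TTR (our illustration; the fragment size of 100 tokens is an assumption, not a value reported in the paper):
def normalized_ttr(tokens, fragment_size=100):
    # average the TTR over consecutive fragments of equal length,
    # so that long and short texts become comparable
    ttrs = []
    for start in range(0, len(tokens) - fragment_size + 1, fragment_size):
        fragment = tokens[start:start + fragment_size]
        ttrs.append(len(set(fragment)) / float(fragment_size))
    if not ttrs:  # text shorter than one fragment: fall back to the plain TTR
        return len(set(tokens)) / float(len(tokens))
    return sum(ttrs) / len(ttrs)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-grams models:", "sec_num": null },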
{ "text": "Orthographic neighborhood: finally, we suggested a new lexical variable, based on the fact that some characteristics of the orthographic neighbors 3 of a word are known to impact the reading of this word (Andrews, 1997). Thirteen predictors were implemented to account for the number or the frequency of the orthographic neighbors of all the words in a text.", "cite_spans": [ { "start": 203, "end": 218, "text": "(Andrews, 1997)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "N-grams models:", "sec_num": null },
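{ "text": "Following Coltheart (1978)'s definition recalled in footnote 3, orthographic neighbors can be enumerated as follows (illustrative sketch; 'lexicon' is a hypothetical word list, e.g. the entries of Lexique3):
def orthographic_neighbors(word, lexicon):
    # neighbors = words of the same length differing by exactly one letter (e.g. FIST/GIST)
    return [w for w in lexicon
            if len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "N-grams models:", "sec_num": null },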
(1975)", "ref_id": "BIBREF38" }, { "start": 227, "end": 245, "text": "(Lee et al., 2010)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic features", "sec_num": "3.2.3" }, { "text": "in its corpus, noticed that features based on parse trees were less efficient than classic ones, such as sentence length or part of speech ratios. Therefore, it seemed unlikely that the information collected by means of syntactic parsers, which are still committing a significant number of errors, at least for French, would belie these findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic features", "sec_num": "3.2.3" }, { "text": "Lexical cohesion : the level of cohesion in a text was measured as the average cosine of all pair of adjacent sentences in the text. Each sentence was represented by a numeric weighted vector (based on words) and projected in a vector space. As suggested by Foltz and al. (1998) , two methods were used to define the vector space and weight every word: the tf-idf (term frequency-inverse document frequency) and the latent semantic analysis (LSA). The first approach, called \"word overlap\", corresponds to the \"noun overlap\" defined by Graesser et al. (2004, 199) , except that all type of POS are taken into account. For LSA, we applied a singular value decomposition (SVD), and after comparing various sizes with a cross-validation procedure, we retained a small 15-dimensional space.", "cite_spans": [ { "start": 258, "end": 278, "text": "Foltz and al. (1998)", "ref_id": "BIBREF24" }, { "start": 536, "end": 563, "text": "Graesser et al. (2004, 199)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantic features", "sec_num": "3.2.3" }, { "text": "Apart from the effect of cognates (Uitdenbogerd, 2005; Tharp, 1939) , few features specific to the L2 context were previously investigated. It is probably because such an approach requires to train a model for each pair of language of interest and gather suitable data for evaluation. Since our study intended to design a generic model, we focused on specific predictors affecting L2 reading, whatever the learner's mother tongue is:", "cite_spans": [ { "start": 34, "end": 54, "text": "(Uitdenbogerd, 2005;", "ref_id": "BIBREF58" }, { "start": 55, "end": 67, "text": "Tharp, 1939)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Features specific to FFL", "sec_num": "3.2.4" }, { "text": "Multi-word expressions (MWE): MWEs are acknowledged to cause problems to L2 learners for production (Bahns and Eldaw, 1993) . However, the effect of MWE on the reception side remains unclear, especially for beginners. Ozasa et al. (2007) tested the mean of the absolute frequency of all MWEs in a text as an indication of its difficulty, but it appeared non significant. In a latter experiment involving a larger set of MWE-based predictors, Fran\u00e7ois and Watrin (2011) detected a significant, but limited effect. We therefore replicated this set, which includes variables based on the frequencies of MWE, their syntactic structure, their number or their length. Frequencies were estimated on the same corpora as the bigram model described above (Google and Le Soir).", "cite_spans": [ { "start": 100, "end": 123, "text": "(Bahns and Eldaw, 1993)", "ref_id": "BIBREF2" }, { "start": 218, "end": 237, "text": "Ozasa et al. 
(2007)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Features specific to FFL", "sec_num": "3.2.4" }, { "text": "Type of text: Finally, we defined five simple variables aiming at identifying dialogues, such as presence of commas, ratio of punctuation, etc. as suggested by Henry (1975) . This focus on dialogue was Table 2 : Spearman correlation for some predictors in our set with difficulty. A positive correlation means that the difficulty of texts increases with the value of the predictor. Signification levels are the following 1 : < 0.05; 2 : < 0.01; and 3 : < 0.001.", "cite_spans": [ { "start": 160, "end": 172, "text": "Henry (1975)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 202, "end": 209, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Features specific to FFL", "sec_num": "3.2.4" }, { "text": "explained by their extensive use in foreign language teaching, especially in the first levels. Furthermore, even for L1, various scholars stressed the fact that dialogues are often written in a simpler style and have a more mundane content (Dolch, 1948; Flesch, 1948) .", "cite_spans": [ { "start": 240, "end": 253, "text": "(Dolch, 1948;", "ref_id": "BIBREF20" }, { "start": 254, "end": 267, "text": "Flesch, 1948)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Features specific to FFL", "sec_num": "3.2.4" }, { "text": "The last step in the development of our formula was to select the most informative subset of features and combine them in a state-of-the-art machine learning algorithm. The algorithms originally considered were six: multinomial and ordinal logistic regression (respectively MLR and OLR), classification trees, bagging, boosting (both based on decision trees) and support vector machine (SVM). However, since the logistic models and the SVM clearly outperformed the others three, we will reported only about those in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The algorithms", "sec_num": "3.3" }, { "text": "The experiments based on this methodology were twofold. First, we assessed the predictive power of each of the 406 features, considered in a bivariate relationship with difficulty. Second, we selected various subsets of features for training models and compared their performance. The two next sections summarize the main findings obtained during these two steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Spearman correlation was used to assess the efficiency of each predictor, to better account for nonlinear relationships with the criterion. Values for some variables among the four families are reported in Table 2 . In accordance with the literature, it appeared that the best family of predictors were the lexical one, followed by the syntactic one. On the contrary, semantic and specific to FFL features did not perform so well, with the exception of the LSAbased feature (avLocalLsa-Lem).", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "The efficiency of predictors", "sec_num": "4.1" }, { "text": "Of all predictors, the best was surprisingly PA-Alterego, a list-based variable inspired by Dale and Chall (1948) , but adapted to the FFL context, since the list of easy words used came from a FFL textbook (Alter Ego 1). 
{ "text": "Of all the predictors, the best was surprisingly PA-Alterego, a list-based variable inspired by Dale and Chall (1948), but adapted to the FFL context, since the list of easy words used came from an FFL textbook (Alter Ego 1). This suggests that, although the predictive power of the \"specific to FFL\" features was low, specialization to the FFL context was beneficial at other levels.", "cite_spans": [ { "start": 92, "end": 113, "text": "Dale and Chall (1948)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "The efficiency of predictors", "sec_num": "4.1" }, { "text": "Once the best single predictors were identified, it was possible to combine several of them in a readability model for comparison. This required some corpus preparation. Since preliminary experiments showed that equal prior probabilities are required to ensure unbiased training, the whole corpus was resampled to get the same number of texts per level (108), which amounted to a total of 648 texts. We then split this smaller corpus into two sets. 240 texts were kept for development purposes, mainly feature selection and the estimation of the meta-parameters \u03b3 and C for the SVM. The remaining 408 texts were used for evaluating the performance of our readability models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The models", "sec_num": "4.2" }, { "text": "Several ways of selecting the smallest \"best\" subset of features were compared, given that some variables are partly redundant when combined together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of the features", "sec_num": "4.2.1" }, { "text": "The first method was based on the structuro-cognitivist assumption that readability formulas should include features other than just lexico-syntactic ones, in order to maximize the variety of information. Therefore, we tried an \"expert\" selection, keeping either the best feature from each of the four families (set Exp1), or the two best features (set Exp2) 5 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of the features", "sec_num": "4.2.1" }, { "text": "These \"expert\" approaches were compared to an automatic selection, using either a stepwise procedure 6 for logistic regression (OLR and MLR) or built-in regularization (Bishop, 2006, 10) for the SVM, based on the 46 best predictors inside each subfamily.", "cite_spans": [ { "start": 170, "end": 188, "text": "(Bishop, 2006, 10)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Selection of the features", "sec_num": "4.2.1" }, { "text": "For the sake of comparison, we also defined two other sets: one that corresponds to a random classification (the empty subset), and a baseline, based on two classic predictors (the number of letters per word and the number of words per sentence), which aimed to mimic classic formulas such as those of Flesch (1948) or Dale and Chall (1948). A summary of the features included in each subset is available in Table 3.", "cite_spans": [ { "start": 235, "end": 248, "text": "Flesch (1948)", "ref_id": "BIBREF23" }, { "start": 252, "end": 273, "text": "Dale and Chall (1948)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 342, "end": 349, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Selection of the features", "sec_num": "4.2.1" }, { "text": "5 For the syntactic level, since the two best variables belonged to the same subfamily (see Section 3.2) and were too highly intercorrelated, the 90th percentile of the sentence length (NWS90) was replaced by the best feature from another subfamily: the presence of at least one present participle (PPres).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of the features", "sec_num": null }, { "text": "6 In order to suppress as many random effects as possible, the selection process was repeated 100 times via a bootstrapping .632 procedure (Tuff\u00e9ry, 2007, 396-371) and only the features selected at least 50 times out of 100 were kept.", "cite_spans": [ { "start": 139, "end": 163, "text": "(Tuff\u00e9ry, 2007, 396-371)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Selection of the features", "sec_num": null },
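{ "text": "The repeated selection described in footnote 6 can be approximated as follows (illustrative sketch, assuming scikit-learn and numpy arrays; SequentialFeatureSelector is our stand-in for the stepwise procedure actually used with the logistic models, and all names are ours):
import numpy as np
from collections import Counter
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

def stable_features(X, y, names, runs=100, keep_at=50, n_select=10):
    counts = Counter()
    rng = np.random.default_rng(0)
    for _ in range(runs):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        selector = SequentialFeatureSelector(
            LogisticRegression(max_iter=1000), n_features_to_select=n_select)
        selector.fit(X[idx], y[idx])
        counts.update(np.array(names)[selector.get_support()])
    # keep only the features selected in at least half of the runs
    return [f for f, c in counts.items() if c >= keep_at]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selection of the features", "sec_num": null },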
{ "text": "The next step consisted in training logistic and SVM models for each of the above subsets. Their performances, reported in Table 4, were assessed using five measures: the multiple correlation ratio (R), the accuracy (acc), the adjacent accuracy 7 (adjacc), the root mean square error (rmse) and the mean absolute error (mae). It should be noted that each of these measures was estimated through a tenfold cross-validation procedure, which allowed us to compare the performances of different models with a T-test.", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the models", "sec_num": "4.2.2" }, { "text": "The comparison between the models was performed in two steps. First, we computed T-tests based on adjacc to compare the models based on a same set of features (either Exp1, Exp2, or Auto). This allowed us to pick the best classifier for each set. In a second step, these three best models were compared in the same way, which resulted in the selection of the very best classifier. The decision to adopt the adjacent accuracy as a criterion instead of the accuracy was motivated by our conviction that our system should avoid serious errors (i.e. larger than one level) rather than be slightly more accurate while sometimes generating terrible mistakes. However, it appeared that both metrics were mostly consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the models", "sec_num": "4.2.2" },
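{ "text": "For clarity, here are minimal implementations of four of these measures (our sketch; levels are assumed to be coded as integers 1-6, and adjacent accuracy follows the definition of Heilman et al. (2008) recalled in footnote 7):
def evaluate(gold, pred):
    n = len(gold)
    acc = sum(g == p for g, p in zip(gold, pred)) / n
    adjacc = sum(abs(g - p) <= 1 for g, p in zip(gold, pred)) / n  # within one level
    rmse = (sum((g - p) ** 2 for g, p in zip(gold, pred)) / n) ** 0.5
    mae = sum(abs(g - p) for g, p in zip(gold, pred)) / n
    return {'acc': acc, 'adjacc': adjacc, 'rmse': rmse, 'mae': mae}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the models", "sec_num": null },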
{ "text": "The performances of the different models are displayed in Table 4. It is first interesting to note that the baseline (based on an SVM) already gives interesting results. It reaches a classification accuracy of 34%, which is about twice the random level. As regards the first model (Exp1), based on MLR and including four predictors, it outperforms the baseline by 5%, a difference close to significance (t(9) = 1.77; p = 0.055). Therefore, combining variables from several families seems to improve performance over the \"classic\" baseline, limited to lexico-syntactic features.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the models", "sec_num": "4.2.2" }, { "text": "This finding is reinforced by the SVM model from Exp2, which includes eight features. It performs significantly better than the baseline (t(9) = 2.36; p = 0.02), with an accuracy gain of 7%. However, at that point, it was not clear whether this superiority was indeed a consequence of maximizing the kind of information brought to the model or merely the result of the increased number of predictors. We thus performed another experiment to address this issue. The model Exp1 was compared with Auto-OLR, the best ordinal logistic model obtained through the stepwise selection (see Tables 4 and 3), which had previously been discarded as a result of the T-test comparisons. Like Exp1, it also contains four predictors, but they are all lexical or syntactic features. Therefore, this model does not maximize the variety of information. Surprisingly, we observed that Auto-OLR obtained similar and even slightly better performance than Exp1 (+2% for both acc and adjacc). Thus, the claim that maximizing the sources of information should yield better models did not stand on our data.", "cite_spans": [], "ref_spans": [ { "start": 435, "end": 451, "text": "Tables 4 and 3)", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the models", "sec_num": "4.2.2" }, { "text": "Table 3: Results from the two selection processes, expert and automatic (the features are described in Table 2). Model name | Classifier | Set of features: Exp1 | OLR, MLR and SVM | PA-Alterego + NMP + avLocalLsa-Lem + BINGUI. Exp2 | OLR, MLR and SVM | PA-Alterego + X90FFFC + NMP + PPres + avLocalLsa-Lem + PP1P2 + BINGUI + NAColl. Auto-OLR | OLR | PA-Alterego + NMP + PPres + ML3. Auto-MLR | MLR | PA-Alterego + Cond + Imperatif + Impf + PPasse + PPres + Subi + Subp + BINGUI + TTR + NWS90 + LSDaoust + MedNeigh+Freq. Auto-SVM | SVM | all the 46 variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the models", "sec_num": null }, { "text": "Table 4: Evaluation measures for the best difficulty model from each feature set (Exp1, Exp2 and Auto), along with values for a random classification and the \"classic\" baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the models", "sec_num": null }, { "text": "Finally, our best performing model was based on the Auto feature set and SVM. Its accuracy was increased by 8% in comparison with the Exp2 model, which is clearly a significant improvement (t(9) = 2.61; p = 0.01), and outperformed the baseline by 15%. As mentioned previously, this model includes 46 features coming from our four families. It is worth mentioning that the quality of the predictions is not the same across the levels, as shown in Table 5. Predictions are more accurate for classes situated at both ends of the difficulty scale, namely A1, C1 and C2. For A1, this is explained by the fact that texts for beginners are more typical, having very short sentences and simple words. However, the case of the C1 and C2 classes is more surprising and might be due to some specificities of the learning algorithm.", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 453, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the models", "sec_num": "4.2.2" }, { "text": "Table 5: Adjacent accuracy per level, computed on one of the 10 folds. A1: 100%; A2: 71%; B1: 67%; B2: 71%; C1: 86%; C2: 83%.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Evaluation of the models", "sec_num": null },
{ "text": "Its adjacent accuracy was 79%, which is very similar to the average value of the model. We also assessed the specific contribution of each family of features in two ways: on the one hand, we trained a model including only the features from a given family; on the other hand, we trained a model including all the features except those from that family. Results for the four families are displayed in Table 6.", "cite_spans": [], "ref_spans": [ { "start": 510, "end": 517, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation of the models", "sec_num": "4.2.2" }, { "text": "It appeared that the lexical family was the most accurate set of predictors (40.5%) and yielded the highest loss in performance when set aside, especially for adjacent accuracy. In fact, this was the only set whose absence significantly impacted adjacent accuracy, suggesting that the other types of predictors can improve the accuracy of the predictions but are not able to reduce the number of critical mistakes. The second best family was, expectedly, the syntactic one. Its accuracy closely matches that of the lexical set, although more severe mistakes were made, as shown by the drop in adjacent accuracy. Finally, our two other families were clearly inferior, but they still slightly improved the accuracy of our model, although not the adjacent accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the models", "sec_num": "4.2.2" }, { "text": "Table 6: Accuracy and adjacent accuracy of models trained on the features of one family only (\"Family only\") and on all the features except those of that family (\"All except family\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the models", "sec_num": null },
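{ "text": "The two-way ablation can be sketched as follows (our illustration, assuming scikit-learn and numpy arrays; 'families' maps a family name to column indices, and the plain SVC stands in for the tuned SVM):
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def family_contributions(X, y, families):
    all_cols = sorted(c for cols in families.values() for c in cols)
    results = {}
    for name, cols in families.items():
        rest = [c for c in all_cols if c not in cols]
        only = cross_val_score(SVC(), X[:, cols], y, cv=10).mean()      # family only
        without = cross_val_score(SVC(), X[:, rest], y, cv=10).mean()   # all except family
        results[name] = (only, without)
    return results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the models", "sec_num": null },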
{ "text": "Comparisons with other FFL models are difficult to provide: not only are there few formulas available for FFL, but some of them focus on a different audience, making comparability low. This is why we were able to compare our results with only two previous models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with previous work", "sec_num": "4.2.3" }, { "text": "The first of them is a classic readability formula by Kandel and Moles (1958), which is an adaptation of the Flesch (1948) formula for French:", "cite_spans": [ { "start": 54, "end": 77, "text": "Kandel and Moles (1958)", "ref_id": "BIBREF35" }, { "start": 110, "end": 123, "text": "Flesch (1948)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with previous work", "sec_num": "4.2.3" }, { "text": "Y = 207 \u2212 1.015 lp \u2212 0.736 lm (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with previous work", "sec_num": "4.2.3" }, { "text": "where Y is a readability score ranging from 100 (easiest) to 0 (hardest); lp is the average number of words per sentence and lm is the average number of syllables per 100 words. Although it was not designed for FFL, we considered it, since it is one of the best-known formulas for French and the two features it combines are very general. Their predictive power should not vary much between the two contexts, as shown by Greenfield (2004) for English. We evaluated it on the same test corpus as our SVM model and obtained much lower values: an R of 0.55 and an accuracy of 33%.", "cite_spans": [ { "start": 413, "end": 430, "text": "Greenfield (2004)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with previous work", "sec_num": "4.2.3" },
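{ "text": "Formula (2) transcribes directly into code (our sketch; the syllable count is roughly approximated by counting vowel groups, with accented vowels omitted for brevity):
import re

def kandel_moles(text):
    sentences = [s for s in re.split('[.!?]', text) if s.strip()]
    words = text.split()
    lp = len(words) / len(sentences)                 # mean words per sentence
    syllables = sum(len(re.findall('[aeiouy]+', w.lower())) for w in words)
    lm = 100.0 * syllables / len(words)              # syllables per 100 words
    return 207 - 1.015 * lp - 0.736 * lm             # 100 = easiest, 0 = hardest", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with previous work", "sec_num": null },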
{ "text": "The second model is that of Fran\u00e7ois (2009), which is based on a multinomial logistic regression including ten features: a unigram model similar to ML3, the number of letters per word, the number of words per sentence, and binary variables indicating the presence of past participle, present participle, imperfect, infinitive, conditional, future and present subjunctive tenses in the text. To our knowledge, this model is the best generic model currently available for FFL. On our data, it yielded an accuracy of 41% and an adjacent accuracy of 72.7%, both estimated through a 10-fold cross-validation procedure. Therefore, our new approach achieved an accuracy gain of 8% over this state-of-the-art model, a significant difference (t(9) = 3.72; p = 0.002).", "cite_spans": [ { "start": 29, "end": 44, "text": "Fran\u00e7ois (2009)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with previous work", "sec_num": "4.2.3" }, { "text": "Apart from those two studies, Uitdenbogerd (2005) also recently developed an FFL formula. However, as explained previously, this work focused on a specific category of L2 readers, English speakers learning French, which resulted in a different problem. She reported a higher R than ours (0.87 against 0.73). However, this value might be the training one, and it was estimated on a small number of novel beginnings. It is therefore likely that our model generalizes better, especially across genres and L2 readers with different L1 backgrounds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with previous work", "sec_num": "4.2.3" }, { "text": "In this paper, we introduced a new \"AI readability\" formula for FFL, able to predict the level of texts according to the widely used CEFR scale. Our model is based on an SVM classifier and combines 46 features corresponding to several levels of linguistic information. Among those, we suggested some new features: the normalized TTR and the set of variables based on several characteristics of words' orthographic neighbors. Comparing our approach with two previously published formulas, our model significantly outperformed both. It therefore represents a robust generic solution for FFL readers willing to find various kinds of texts that suit their linguistic abilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and conclusion", "sec_num": "5" }, { "text": "Besides the creation of a new FFL readability formula, this study produced two valuable insights. First, we showed that maximizing the variety of linguistic information might not be the best path to follow, since a model based on four lexico-syntactic features yielded predictions as accurate as those of a model relying on our Exp1 set of variables. However, this finding might be partly accounted for by the lower predictive power of the features from the semantic and specific-to-FFL families, with the notable exception of the LSA-based predictor (avLocalLsa-Lem), which is the third best predictor when considered alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and conclusion", "sec_num": "5" }, { "text": "This leads us to our second finding, relative to the set of semantic features. Although their importance was largely praised in the structuro-cognitivist paradigm and in most of the recent work, our experiments cast serious doubts on their efficiency, at least in an L2 context. Not only did the expert models, in which we imposed the presence of one or two semantic predictors, not perform best, but none of the features from our semantic set was retained during the automatic selection of the variables for the logistic models. On the contrary, in some subsets, the LSA-based feature was even considered collinear with the other variables. Finally, and most importantly, we showed that dropping the semantic features did not significantly impact the performance of our best model. Whatever reservations one may have because of the limited number of semantic predictors in our set, these results raise some concerns about whether the information coming from semantic variables is really different from that carried by lexico-syntactic features. Our results clearly suggest that this may not be the case. This conclusion contradicts the assumptions of the structuro-cognitivist paradigm, but corroborates Chall and Dale (1995)'s view that the information carried by semantic predictors is largely correlated with that of lexico-syntactic ones.", "cite_spans": [ { "start": 1206, "end": 1227, "text": "Chall and Dale (1995)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and conclusion", "sec_num": "5" }, { "text": "Further investigation of this issue would definitely be worthwhile, since several facts could explain these contradictory findings. First, it might be that semantic and lexical predictors are correlated because the methods used for the parameterization of the semantic factors heavily rely on lexical information. This is the case for LSA, as well as for the propositional approach to content density.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and conclusion", "sec_num": "5" }, { "text": "Alternatively, this difference with other work in L1 could be due to the L2 context. Chall and Dale (1995) explained that the lexicon and the syntax are more important for children learning to read than for more advanced readers, who then become more sensitive to organisational aspects. From the threshold hypothesis (Alderson, 1984), we know that before reaching a sufficient level of proficiency, L2 learners struggle mostly with the lexicon and the syntactic structures. This might explain why lexico-syntactic predictors were so predominant in our experiments.", "cite_spans": [ { "start": 85, "end": 106, "text": "Chall and Dale (1995)", "ref_id": "BIBREF8" }, { "start": 319, "end": 335, "text": "(Alderson, 1984)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and conclusion", "sec_num": "5" },
{ "text": "Some further experiments are thus needed to investigate which of these facts better accounts for our findings on the semantic features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and conclusion", "sec_num": "5" }, { "text": "A last avenue of research worth mentioning would be to develop the family of specific-to-FFL predictors, to determine whether taking into account the impact of a given L1 on the readability of L2 texts would increase performance over a generic model enough for the tuning efforts to be worthwhile.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and conclusion", "sec_num": "5" }, { "text": "Space restrictions did not enable us to formally define each variable used in this study. The reader may consult Fran\u00e7ois (2011b) for a more comprehensive description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The orthographic neighbors of a word X have been defined by Coltheart (1978) as all the words of the same length as X varying from it by only one letter (e.g. FIST and GIST). 4 This choice was motivated as follows. Bormuth (1966), who performed a manual annotation of the syntactic structures in his corpus, noticed that features based on parse trees were less efficient than classic ones, such as sentence length or part-of-speech ratios. It therefore seemed unlikely that the information collected by means of syntactic parsers, which still commit a significant number of errors, at least for French, would belie these findings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Heilman et al. (2008) defined it as \"the proportion of predictions that were within one level of the human assigned label for the given text\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Thomas Fran\u00e7ois was an Aspirant F.N.R.S. when this work was performed. The writing of this paper was done while he was a recipient of a Fellowship of the Belgian American Educational Foundation. We thank both for their support. We would also like to acknowledge the invaluable help of Bernadette Dehottay for the collection of the corpus used in this study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Reading in a foreign language: a reading problem or a language problem?", "authors": [ { "first": "J", "middle": [ "C" ], "last": "Alderson", "suffix": "" } ], "year": 1984, "venue": "Reading in a Foreign Language", "volume": "", "issue": "", "pages": "1--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.C. Alderson. 1984. Reading in a foreign language: a reading problem or a language problem? In J.C. Alderson and A.H. Urquhart, editors, Reading in a Foreign Language, pages 1-24. Longman, New York.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The effect of orthographic similarity on lexical retrieval: Resolving neighborhood conflicts", "authors": [ { "first": "S", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 1997, "venue": "Psychonomic Bulletin & Review", "volume": "4", "issue": "4", "pages": "439--461", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Andrews. 1997. The effect of orthographic similarity on lexical retrieval: Resolving neighborhood conflicts. Psychonomic Bulletin & Review, 4(4):439-461.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Should We Teach EFL Students Collocations?",
System", "authors": [ { "first": "J", "middle": [], "last": "Bahns", "suffix": "" }, { "first": "M", "middle": [], "last": "Eldaw", "suffix": "" } ], "year": 1993, "venue": "", "volume": "21", "issue": "", "pages": "101--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Bahns and M. Eldaw. 1993. Should We Teach EFL Students Collocations? System, 21(1):101-14.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Pattern recognition and machine learning", "authors": [ { "first": "M", "middle": [], "last": "Bishop", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Bishop. 2006. Pattern recognition and machine learning. Springer, New York.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Readability: A new approach", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Bormuth", "suffix": "" } ], "year": 1966, "venue": "Reading research quarterly", "volume": "1", "issue": "3", "pages": "79--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Bormuth. 1966. Readability: A new approach. Reading research quarterly, 1(3):79-132.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "defense of abstractionist theories of repetition priming and word identification. Psychonomic bulletin and review", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Bowers", "suffix": "" } ], "year": 2000, "venue": "", "volume": "7", "issue": "", "pages": "83--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.S. Bowers. 2000. In defense of abstractionist theories of repetition priming and word identification. Psycho- nomic bulletin and review, 7(1):83-99.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The role of verb tense and verb aspect in the foregrounding of information during reading", "authors": [ { "first": "M", "middle": [], "last": "Carreiras", "suffix": "" }, { "first": "N", "middle": [], "last": "Carriedo", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Alonso", "suffix": "" }, { "first": "A", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" } ], "year": 1997, "venue": "Memory & Cognition", "volume": "25", "issue": "4", "pages": "438--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Carreiras, N. Carriedo, M.A. Alonso, and A. Fern\u00e1ndez. 1997. The role of verb tense and verb aspect in the foregrounding of information during reading. Memory & Cognition, 25(4):438-446.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Readability Revisited: The New Dale-Chall Readability Formula", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chall", "suffix": "" }, { "first": "E", "middle": [], "last": "Dale", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.S. Chall and E. Dale. 1995. Readability Revisited: The New Dale-Chall Readability Formula. Brookline Books, Cambridge.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "S", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1999, "venue": "Computer Speech and Language", "volume": "13", "issue": "4", "pages": "359--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Chen and J. Goodman. 1999. An empirical study of smoothing techniques for language modeling. 
"BIBREF10": { "ref_id": "b10", "title": "A language modeling approach to predicting reading difficulty", "authors": [ { "first": "K", "middle": [], "last": "Collins-Thompson", "suffix": "" }, { "first": "J", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2004, "venue": "Proceedings of HLT/NAACL 2004", "volume": "", "issue": "", "pages": "193--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Collins-Thompson and J. Callan. 2004. A language modeling approach to predicting reading difficulty. In Proceedings of HLT/NAACL 2004, pages 193-200, Boston, USA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Predicting reading difficulty with statistical language models", "authors": [ { "first": "K", "middle": [], "last": "Collins-Thompson", "suffix": "" }, { "first": "J", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2005, "venue": "Journal of the American Society for Information Science and Technology", "volume": "56", "issue": "13", "pages": "1448--1462", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Collins-Thompson and J. Callan. 2005. Predicting reading difficulty with statistical language models. Journal of the American Society for Information Science and Technology, 56(13):1448-1462.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lexical access in simple reading tasks", "authors": [ { "first": "M", "middle": [], "last": "Coltheart", "suffix": "" } ], "year": 1978, "venue": "Strategies of information processing", "volume": "", "issue": "", "pages": "151--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Coltheart. 1978. Lexical access in simple reading tasks. In G. Underwood, editor, Strategies of information processing, pages 151-216. Academic Press, London.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "La lisibilit\u00e9", "authors": [ { "first": "A", "middle": [], "last": "Conquet", "suffix": "" } ], "year": 1957, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Conquet. 1957. La lisibilit\u00e9. Assembl\u00e9e Permanente des CCI de Paris, Paris.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "La lisibilit\u00e9 : essai d'application de la formule courte d'Henry au fran\u00e7ais langue \u00e9trang\u00e8re", "authors": [ { "first": "M", "middle": [], "last": "Cornaire", "suffix": "" } ], "year": 1988, "venue": "Canadian Modern Language Review", "volume": "44", "issue": "2", "pages": "261--273", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Cornaire. 1988. La lisibilit\u00e9 : essai d'application de la formule courte d'Henry au fran\u00e7ais langue \u00e9trang\u00e8re. Canadian Modern Language Review, 44(2):261-273.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Common European Framework of Reference for Languages: Learning, Teaching, Assessment", "authors": [], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Press Syndicate of the University of Cambridge.", "links": null },
"BIBREF16": { "ref_id": "b16", "title": "A formula for predicting readability", "authors": [ { "first": "E", "middle": [], "last": "Dale", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chall", "suffix": "" } ], "year": 1948, "venue": "Educational Research Bulletin", "volume": "27", "issue": "1", "pages": "11--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Dale and J.S. Chall. 1948. A formula for predicting readability. Educational Research Bulletin, 27(1):11-28.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A study of the factors influencing the difficulty of reading materials for adults of limited reading ability", "authors": [ { "first": "E", "middle": [], "last": "Dale", "suffix": "" }, { "first": "R", "middle": [ "W" ], "last": "Tyler", "suffix": "" } ], "year": 1934, "venue": "The Library Quarterly", "volume": "4", "issue": "", "pages": "384--412", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Dale and R.W. Tyler. 1934. A study of the factors influencing the difficulty of reading materials for adults of limited reading ability. The Library Quarterly, 4:384-412.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "SATO-CALIBRAGE: Pr\u00e9sentation d'un outil d'assistance au choix et \u00e0 la r\u00e9daction de textes pour l'enseignement", "authors": [ { "first": "F", "middle": [], "last": "Daoust", "suffix": "" }, { "first": "L", "middle": [], "last": "Laroche", "suffix": "" }, { "first": "L", "middle": [], "last": "Ouellet", "suffix": "" } ], "year": 1996, "venue": "Revue qu\u00e9b\u00e9coise de linguistique", "volume": "25", "issue": "1", "pages": "205--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Daoust, L. Laroche, and L. Ouellet. 1996. SATO-CALIBRAGE: Pr\u00e9sentation d'un outil d'assistance au choix et \u00e0 la r\u00e9daction de textes pour l'enseignement. Revue qu\u00e9b\u00e9coise de linguistique, 25(1):205-234.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Pour une application des tests de lisibilit\u00e9 de Flesch \u00e0 la langue fran\u00e7aise", "authors": [ { "first": "G", "middle": [], "last": "De Landsheere", "suffix": "" } ], "year": 1963, "venue": "Le Travail Humain", "volume": "26", "issue": "", "pages": "141--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. de Landsheere. 1963. Pour une application des tests de lisibilit\u00e9 de Flesch \u00e0 la langue fran\u00e7aise. Le Travail Humain, 26:141-154.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Problems in reading", "authors": [ { "first": "E", "middle": [ "W" ], "last": "Dolch", "suffix": "" } ], "year": 1948, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E.W. Dolch. 1948. Problems in reading. The Garrard Press, Champaign, Illinois.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The assessment of readability by noun frequency counts", "authors": [ { "first": "W", "middle": [ "B" ], "last": "Elley", "suffix": "" } ], "year": 1969, "venue": "Reading Research Quarterly", "volume": "4", "issue": "3", "pages": "411--427", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.B. Elley. 1969. The assessment of readability by noun frequency counts. Reading Research Quarterly, 4(3):411-427.", "links": null },
"BIBREF22": { "ref_id": "b22", "title": "A Comparison of Features for Automatic Readability Assessment", "authors": [ { "first": "L", "middle": [], "last": "Feng", "suffix": "" }, { "first": "M", "middle": [], "last": "Jansche", "suffix": "" }, { "first": "M", "middle": [], "last": "Huenerfauth", "suffix": "" }, { "first": "N", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2010, "venue": "COLING 2010: Poster Volume", "volume": "", "issue": "", "pages": "276--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Feng, M. Jansche, M. Huenerfauth, and N. Elhadad. 2010. A Comparison of Features for Automatic Readability Assessment. In COLING 2010: Poster Volume, pages 276-284.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A new readability yardstick", "authors": [ { "first": "R", "middle": [], "last": "Flesch", "suffix": "" } ], "year": 1948, "venue": "Journal of Applied Psychology", "volume": "32", "issue": "3", "pages": "221--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Flesch. 1948. A new readability yardstick. Journal of Applied Psychology, 32(3):221-233.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The measurement of textual coherence with latent semantic analysis", "authors": [ { "first": "P", "middle": [ "W" ], "last": "Foltz", "suffix": "" }, { "first": "W", "middle": [], "last": "Kintsch", "suffix": "" }, { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" } ], "year": 1998, "venue": "Discourse Processes", "volume": "25", "issue": "2", "pages": "285--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "P.W. Foltz, W. Kintsch, and T.K. Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Processes, 25(2):285-307.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "On the contribution of MWE-based features to a readability formula for French as a foreign language", "authors": [ { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "P", "middle": [], "last": "Watrin", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the International Conference RANLP 2011", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Fran\u00e7ois and P. Watrin. 2011. On the contribution of MWE-based features to a readability formula for French as a foreign language. In Proceedings of the International Conference RANLP 2011.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Combining a statistical language model with logistic regression to predict the lexical and syntactic difficulty of texts for FFL", "authors": [ { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the EACL: Student Research Workshop", "volume": "", "issue": "", "pages": "19--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Fran\u00e7ois. 2009. Combining a statistical language model with logistic regression to predict the lexical and syntactic difficulty of texts for FFL. In Proceedings of the 12th Conference of the EACL: Student Research Workshop, pages 19-27.", "links": null },
"BIBREF27": { "ref_id": "b27", "title": "La lisibilit\u00e9 computationnelle : un renouveau pour la lisibilit\u00e9 du fran\u00e7ais langue premi\u00e8re et seconde ?", "authors": [ { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" } ], "year": 2011, "venue": "International Journal of Applied Linguistics (ITL)", "volume": "160", "issue": "", "pages": "75--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Fran\u00e7ois. 2011a. La lisibilit\u00e9 computationnelle : un renouveau pour la lisibilit\u00e9 du fran\u00e7ais langue premi\u00e8re et seconde ? International Journal of Applied Linguistics (ITL), 160:75-99.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Les apports du traitement automatique du langage \u00e0 la lisibilit\u00e9 du fran\u00e7ais langue \u00e9trang\u00e8re", "authors": [ { "first": "T", "middle": [], "last": "Fran\u00e7ois", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Fran\u00e7ois. 2011b. Les apports du traitement automatique du langage \u00e0 la lisibilit\u00e9 du fran\u00e7ais langue \u00e9trang\u00e8re. Ph.D. thesis, Universit\u00e9 Catholique de Louvain. Thesis supervisors: C\u00e9drick Fairon and Anne Catherine Simon.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Good-Turing frequency estimation without tears", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "G", "middle": [], "last": "Sampson", "suffix": "" } ], "year": 1995, "venue": "Journal of Quantitative Linguistics", "volume": "2", "issue": "3", "pages": "217--237", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.A. Gale and G. Sampson. 1995. Good-Turing frequency estimation without tears. Journal of Quantitative Linguistics, 2(3):217-237.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Coh-Metrix: Analysis of text on cohesion and language", "authors": [ { "first": "A", "middle": [ "C" ], "last": "Graesser", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Mcnamara", "suffix": "" }, { "first": "M", "middle": [ "M" ], "last": "Louwerse", "suffix": "" }, { "first": "Z", "middle": [], "last": "Cai", "suffix": "" } ], "year": 2004, "venue": "Behavior Research Methods, Instruments, & Computers", "volume": "36", "issue": "2", "pages": "193--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.C. Graesser, D.S. McNamara, M.M. Louwerse, and Z. Cai. 2004. Coh-Metrix: Analysis of text on cohesion and language. Behavior Research Methods, Instruments, & Computers, 36(2):193-202.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Readability formulas for EFL", "authors": [ { "first": "J", "middle": [], "last": "Greenfield", "suffix": "" } ], "year": 2004, "venue": "Japan Association for Language Teaching", "volume": "26", "issue": "1", "pages": "5--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Greenfield. 2004. Readability formulas for EFL. Japan Association for Language Teaching, 26(1):5-24.", "links": null },
"BIBREF33": { "ref_id": "b33", "title": "An analysis of statistical models and features for reading difficulty prediction", "authors": [ { "first": "M", "middle": [], "last": "Heilman", "suffix": "" }, { "first": "K", "middle": [], "last": "Collins-Thompson", "suffix": "" }, { "first": "M", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Heilman, K. Collins-Thompson, and M. Eskenazi. 2008. An analysis of statistical models and features for reading difficulty prediction. In Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-8.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Comment mesurer la lisibilit\u00e9", "authors": [ { "first": "G", "middle": [], "last": "Henry", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Henry. 1975. Comment mesurer la lisibilit\u00e9. Labor, Bruxelles.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Application de l'indice de Flesch \u00e0 la langue fran\u00e7aise", "authors": [ { "first": "L", "middle": [], "last": "Kandel", "suffix": "" }, { "first": "A", "middle": [], "last": "Moles", "suffix": "" } ], "year": 1958, "venue": "Cahiers \u00c9tudes de Radio-T\u00e9l\u00e9vision", "volume": "19", "issue": "", "pages": "253--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Kandel and A. Moles. 1958. Application de l'indice de Flesch \u00e0 la langue fran\u00e7aise. Cahiers \u00c9tudes de Radio-T\u00e9l\u00e9vision, 19:253-274.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Measuring the inference load of a text", "authors": [ { "first": "S", "middle": [], "last": "Kemper", "suffix": "" } ], "year": 1983, "venue": "Journal of Educational Psychology", "volume": "75", "issue": "3", "pages": "391--401", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Kemper. 1983. Measuring the inference load of a text. Journal of Educational Psychology, 75(3):391-401.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Reading comprehension and readability in educational practice and psychological theory", "authors": [ { "first": "W", "middle": [], "last": "Kintsch", "suffix": "" }, { "first": "D", "middle": [], "last": "Vipond", "suffix": "" } ], "year": 1979, "venue": "Perspectives on Memory Research", "volume": "", "issue": "", "pages": "329--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Kintsch and D. Vipond. 1979. Reading comprehension and readability in educational practice and psychological theory. In L.G. Nilsson, editor, Perspectives on Memory Research, pages 329-365. Lawrence Erlbaum, Hillsdale, NJ.", "links": null },
"BIBREF38": { "ref_id": "b38", "title": "Comprehension and recall of text as a function of content variables", "authors": [ { "first": "W", "middle": [], "last": "Kintsch", "suffix": "" }, { "first": "E", "middle": [], "last": "Kozminsky", "suffix": "" }, { "first": "W", "middle": [ "J" ], "last": "Streby", "suffix": "" }, { "first": "G", "middle": [], "last": "Mckoon", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Keenan", "suffix": "" } ], "year": 1975, "venue": "Journal of Verbal Learning and Verbal Behavior", "volume": "14", "issue": "2", "pages": "196--214", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Kintsch, E. Kozminsky, W.J. Streby, G. McKoon, and J.M. Keenan. 1975. Comprehension and recall of text as a function of content variables. Journal of Verbal Learning and Verbal Behavior, 14(2):196-214.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Densid\u00e9es: calcul automatique de la densit\u00e9 des id\u00e9es dans un corpus oral", "authors": [ { "first": "H", "middle": [], "last": "Lee", "suffix": "" }, { "first": "P", "middle": [], "last": "Gambette", "suffix": "" }, { "first": "E", "middle": [], "last": "Maill\u00e9", "suffix": "" }, { "first": "C", "middle": [], "last": "Thuillier", "suffix": "" } ], "year": 2010, "venue": "Actes de la douzi\u00e8me Rencontre des \u00c9tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Lee, P. Gambette, E. Maill\u00e9, and C. Thuillier. 2010. Densid\u00e9es: calcul automatique de la densit\u00e9 des id\u00e9es dans un corpus oral. In Actes de la douzi\u00e8me Rencontre des \u00c9tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL).", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Predicting readability", "authors": [ { "first": "I", "middle": [], "last": "Lorge", "suffix": "" } ], "year": 1944, "venue": "The Teachers College Record", "volume": "45", "issue": "6", "pages": "404--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Lorge. 1944. Predicting readability. The Teachers College Record, 45(6):404-419.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Lisibilit\u00e9 des textes pour enfants: un nouvel outil?", "authors": [ { "first": "J", "middle": [], "last": "Mesnager", "suffix": "" } ], "year": 1989, "venue": "Communication et Langages", "volume": "79", "issue": "", "pages": "18--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Mesnager. 1989. Lisibilit\u00e9 des textes pour enfants: un nouvel outil? Communication et Langages, 79:18-38.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Reading research and the composition teacher: The importance of plans", "authors": [ { "first": "B", "middle": [ "J F" ], "last": "Meyer", "suffix": "" } ], "year": 1982, "venue": "College Composition and Communication", "volume": "33", "issue": "1", "pages": "37--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "B.J.F. Meyer. 1982. Reading research and the composition teacher: The importance of plans. College Composition and Communication, 33(1):37-49.", "links": null },
"BIBREF43": { "ref_id": "b43", "title": "Quantitative analysis of culture using millions of digitized books", "authors": [ { "first": "J", "middle": [ "B" ], "last": "Michel", "suffix": "" }, { "first": "Y", "middle": [ "K" ], "last": "Shen", "suffix": "" }, { "first": "A", "middle": [ "P" ], "last": "Aiden", "suffix": "" }, { "first": "A", "middle": [], "last": "Veres", "suffix": "" }, { "first": "M", "middle": [ "K" ], "last": "Gray", "suffix": "" }, { "first": "J", "middle": [ "P" ], "last": "Pickett", "suffix": "" }, { "first": "D", "middle": [], "last": "Hoiberg", "suffix": "" }, { "first": "D", "middle": [], "last": "Clancy", "suffix": "" }, { "first": "P", "middle": [], "last": "Norvig", "suffix": "" }, { "first": "J", "middle": [], "last": "Orwant", "suffix": "" }, { "first": "S", "middle": [], "last": "Pinker", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Nowak", "suffix": "" }, { "first": "E", "middle": [ "L" ], "last": "Aiden", "suffix": "" } ], "year": 2011, "venue": "Science", "volume": "331", "issue": "6014", "pages": "176--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.B. Michel, Y.K. Shen, A.P. Aiden, A. Veres, M.K. Gray, The Google Books Team, J.P. Pickett, D. Hoiberg, D. Clancy, P. Norvig, J. Orwant, S. Pinker, M.A. Nowak, and E.L. Aiden. 2011. Quantitative analysis of culture using millions of digitized books. Science, 331(6014):176-182.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Readability and recall of short prose passages: A theoretical analysis", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Miller", "suffix": "" }, { "first": "W", "middle": [], "last": "Kintsch", "suffix": "" } ], "year": 1980, "venue": "Journal of Experimental Psychology: Human Learning and Memory", "volume": "6", "issue": "4", "pages": "335--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Miller and W. Kintsch. 1980. Readability and recall of short prose passages: A theoretical analysis. Journal of Experimental Psychology: Human Learning and Memory, 6(4):335-354.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "The use of film subtitles to estimate word frequencies", "authors": [ { "first": "B", "middle": [], "last": "New", "suffix": "" }, { "first": "M", "middle": [], "last": "Brysbaert", "suffix": "" }, { "first": "J", "middle": [], "last": "Veronis", "suffix": "" }, { "first": "C", "middle": [], "last": "Pallier", "suffix": "" } ], "year": 2007, "venue": "Applied Psycholinguistics", "volume": "28", "issue": "4", "pages": "661--677", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. New, M. Brysbaert, J. Veronis, and C. Pallier. 2007. The use of film subtitles to estimate word frequencies. Applied Psycholinguistics, 28(4):661-677.", "links": null },
"BIBREF46": { "ref_id": "b46", "title": "Teaching reading to poor readers in the intermediate grades: A comparison of text difficulty", "authors": [ { "first": "R", "middle": [ "E" ], "last": "O'connor", "suffix": "" }, { "first": "K", "middle": [ "M" ], "last": "Bell", "suffix": "" }, { "first": "K", "middle": [ "R" ], "last": "Harty", "suffix": "" }, { "first": "L", "middle": [ "K" ], "last": "Larkin", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Sackor", "suffix": "" }, { "first": "N", "middle": [], "last": "Zigmond", "suffix": "" } ], "year": 2002, "venue": "Journal of Educational Psychology", "volume": "94", "issue": "3", "pages": "474--485", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.E. O'Connor, K.M. Bell, K.R. Harty, L.K. Larkin, S.M. Sackor, and N. Zigmond. 2002. Teaching reading to poor readers in the intermediate grades: A comparison of text difficulty. Journal of Educational Psychology, 94(3):474-485.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Measuring readability for Japanese learners of English", "authors": [ { "first": "T", "middle": [], "last": "Ozasa", "suffix": "" }, { "first": "G", "middle": [], "last": "Weir", "suffix": "" }, { "first": "M", "middle": [], "last": "Fukui", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 12th Conference of Pan-Pacific Association of Applied Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Ozasa, G. Weir, and M. Fukui. 2007. Measuring readability for Japanese learners of English. In Proceedings of the 12th Conference of Pan-Pacific Association of Applied Linguistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Revisiting readability: A unified framework for predicting text quality", "authors": [ { "first": "E", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "A", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "186--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Pitler and A. Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 186-195.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "The place of readability formulas in technical communication", "authors": [ { "first": "J", "middle": [ "C" ], "last": "Redish", "suffix": "" }, { "first": "J", "middle": [], "last": "Selzer", "suffix": "" } ], "year": 1985, "venue": "Technical Communication", "volume": "32", "issue": "4", "pages": "46--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.C. Redish and J. Selzer. 1985. The place of readability formulas in technical communication. Technical Communication, 32(4):46-52.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Une nouvelle formule de lisibilit\u00e9", "authors": [ { "first": "F", "middle": [], "last": "Richaudeau", "suffix": "" } ], "year": 1979, "venue": "Communication et Langages", "volume": "44", "issue": "", "pages": "5--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Richaudeau. 1979. Une nouvelle formule de lisibilit\u00e9. Communication et Langages, 44:5-26.", "links": null },
"BIBREF51": { "ref_id": "b51", "title": "Probabilistic part-of-speech tagging using decision trees", "authors": [ { "first": "H", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "Proceedings of International Conference on New Methods in Language Processing", "volume": "12", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing, volume 12. Manchester, UK.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Reading level assessment using support vector machines and statistical language models", "authors": [ { "first": "E", "middle": [], "last": "Schwarm", "suffix": "" }, { "first": "M", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Schwarm and M. Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 523-530.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "A statistical model for scientific readability", "authors": [ { "first": "L", "middle": [], "last": "Si", "suffix": "" }, { "first": "J", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Tenth International Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "574--576", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Si and J. Callan. 2001. A statistical model for scientific readability. In Proceedings of the Tenth International Conference on Information and Knowledge Management, pages 574-576. ACM, New York, NY, USA.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Devereaux readability index", "authors": [ { "first": "A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 1961, "venue": "The Journal of Educational Research", "volume": "54", "issue": "8", "pages": "289--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Smith. 1961. Devereaux readability index. The Journal of Educational Research, 54(8):289-303.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Measuring reading comprehension with the Lexile framework", "authors": [ { "first": "A", "middle": [ "J" ], "last": "Stenner", "suffix": "" } ], "year": 1996, "venue": "Fourth North American Conference on Adolescent/Adult Literacy", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.J. Stenner. 1996. Measuring reading comprehension with the Lexile framework. In Fourth North American Conference on Adolescent/Adult Literacy.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "The Measurement of Vocabulary Difficulty", "authors": [ { "first": "J", "middle": [ "B" ], "last": "Tharp", "suffix": "" } ], "year": 1939, "venue": "Modern Language Journal", "volume": "", "issue": "", "pages": "169--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.B. Tharp. 1939. The Measurement of Vocabulary Difficulty. Modern Language Journal, pages 169-178.", "links": null },
"BIBREF57": { "ref_id": "b57", "title": "Data mining et statistique d\u00e9cisionnelle : l'intelligence des donn\u00e9es", "authors": [ { "first": "S", "middle": [], "last": "Tuff\u00e9ry", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Tuff\u00e9ry. 2007. Data mining et statistique d\u00e9cisionnelle : l'intelligence des donn\u00e9es. \u00c9d. Technip, Paris.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Readability of French as a foreign language and its uses", "authors": [ { "first": "S", "middle": [], "last": "Uitdenbogerd", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Australian Document Computing Symposium", "volume": "", "issue": "", "pages": "19--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Uitdenbogerd. 2005. Readability of French as a foreign language and its uses. In Proceedings of the Australian Document Computing Symposium, pages 19-25.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "A posteriori agreement as a quality measure for readability prediction systems", "authors": [ { "first": "P", "middle": [], "last": "Van Oosten", "suffix": "" }, { "first": "V", "middle": [], "last": "Hoste", "suffix": "" }, { "first": "D", "middle": [], "last": "Tanghe", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "6609", "issue": "", "pages": "424--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. van Oosten, V. Hoste, and D. Tanghe. 2011. A posteriori agreement as a quality measure for readability prediction systems. In A. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, volume 6609 of Lecture Notes in Computer Science, pages 424-435. Springer, Berlin / Heidelberg.", "links": null } }, "ref_entries": { "TABREF0": { "text": "", "content": "
Level    A1        A2        B1         B2        C1        C2        Total
Texts    430       380       552        198       184       108       1,852
Tokens   58,561    75,779    176,973    71,701    92,327    35,202    510,543
", "type_str": "table", "html": null, "num": null }, "TABREF1": { "text": "Distribution of the number of texts and tokens per level in our corpus.", "content": "", "type_str": "table", "html": null, "num": null }, "TABREF5": { "text": "", "content": "
Accuracy and adjacent accuracy (in percentage) for models either using only one family of predictors, or including all 46 features except those of one family.
", "type_str": "table", "html": null, "num": null } } } }