{ "paper_id": "C14-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:23:45.743776Z" }, "title": "Simple or Complex? Assessing the readability of Basque Texts", "authors": [ { "first": "Itziar", "middle": [], "last": "Gonzalez-Dios", "suffix": "", "affiliation": { "laboratory": "IXA NLP Group University of the Basque Country (UPV/EHU", "institution": "", "location": {} }, "email": "itziar.gonzalezd@ehu.es" }, { "first": "Jes\u00fas", "middle": [], "last": "Aranzabe", "suffix": "", "affiliation": { "laboratory": "IXA NLP Group University of the Basque Country (UPV/EHU", "institution": "", "location": {} }, "email": "" }, { "first": "Arantza", "middle": [], "last": "D\u00edaz De Ilarraza", "suffix": "", "affiliation": { "laboratory": "IXA NLP Group University of the Basque Country (UPV/EHU", "institution": "", "location": {} }, "email": "" }, { "first": "Haritz", "middle": [], "last": "Salaberri", "suffix": "", "affiliation": { "laboratory": "IXA NLP Group University of the Basque Country (UPV/EHU", "institution": "", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present a readability assessment system for Basque, ErreXail, which is going to be the preprocessing module of a Text Simplification system. To that end we compile two corpora, one of simple texts and another one of complex texts. To analyse those texts, we implement global, lexical, morphological, morpho-syntactic, syntactic and pragmatic features based on other languages and specially considered for Basque. We combine these feature types and we train our classifiers. After testing the classifiers, we detect the features that perform best and the most predictive ones.", "pdf_parse": { "paper_id": "C14-1033", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present a readability assessment system for Basque, ErreXail, which is going to be the preprocessing module of a Text Simplification system. To that end we compile two corpora, one of simple texts and another one of complex texts. To analyse those texts, we implement global, lexical, morphological, morpho-syntactic, syntactic and pragmatic features based on other languages and specially considered for Basque. We combine these feature types and we train our classifiers. After testing the classifiers, we detect the features that perform best and the most predictive ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Readability assessment is a research line that aims to grade the difficulty or the ease of the texts. It has been a remarkable question in the educational domain during the last century and is of great importance in Natural Language Processing (NLP) during the last decade. Classical readability formulae like Flesh formula (Flesch, 1948) , Dale-Chall formula (Chall and Dale, 1995) and The Gunning FOG index (Gunning, 1968 ) take into account raw and lexical features and frequency counts. 
NLP techniques, on the other hand, make possible the consideration of more complex features.", "cite_spans": [ { "start": 324, "end": 338, "text": "(Flesch, 1948)", "ref_id": "BIBREF23" }, { "start": 360, "end": 382, "text": "(Chall and Dale, 1995)", "ref_id": "BIBREF14" }, { "start": 391, "end": 423, "text": "Gunning FOG index (Gunning, 1968", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent research in NLP (Si and Callan, 2001; Petersen and Ostendorf, 2009; Feng, 2009) has demonstrated that classical readability formulae are unreliable. Moreover, those metrics are language specific.", "cite_spans": [ { "start": 23, "end": 44, "text": "(Si and Callan, 2001;", "ref_id": "BIBREF41" }, { "start": 45, "end": 74, "text": "Petersen and Ostendorf, 2009;", "ref_id": "BIBREF34" }, { "start": 75, "end": 86, "text": "Feng, 2009)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Readability assessment is also used as a preprocess or evaluation in Text Simplification (TS) systems e.g. for English (Feng et al., 2010) , Portuguese , Italian (Dell'Orletta et al., 2011) , German (Hancke et al., 2012) and Spanish (\u0160tajner and Saggion, 2013) . Given a text the aim of these systems is to decide whether a text is complex or not. So, in case of being difficult, the given text should be simplified.", "cite_spans": [ { "start": 119, "end": 138, "text": "(Feng et al., 2010)", "ref_id": "BIBREF21" }, { "start": 162, "end": 189, "text": "(Dell'Orletta et al., 2011)", "ref_id": "BIBREF18" }, { "start": 199, "end": 220, "text": "(Hancke et al., 2012)", "ref_id": "BIBREF31" }, { "start": 225, "end": 260, "text": "Spanish (\u0160tajner and Saggion, 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As far as we know no specific metric has been used to calculate the complexity of Basque texts. The only exception we find is a system for the auto-evaluation of essays Idazlanen Autoebaluaziorako Sistema (IAS) (Aldabe et al., 2012) which includes metrics similar to those used in readability assessment. IAS analyses Basque texts after several criteria focused on educational correction such as the clause number in a sentence, types of sentences, word types and lemma number among others. It was foreseen to use this tool in the Basque TS system (Aranzabe et al., 2012) . The present work means to add to IAS the capacity of evaluating the complexity of texts by means of new linguistic features and criteria.", "cite_spans": [ { "start": 531, "end": 571, "text": "Basque TS system (Aranzabe et al., 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we present ErreXail, a readability assessment system for Basque, a Pre-Indo-European agglutinative head-final pro-drop language, which displays a rich inflectional morphology and whose orthography is phonemic. ErreXail classifies the texts and decides if they should be simplified or not. This work has two objectives: to build a classifier which will be the preprocess of the TS system and to know which are the most predictive features that differ in complex and simple texts. 
The study of the most predictive features will help in the linguistic analysis of the complex structures of Basque as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organised as follows: in section 2 we offer an overview of related work. We present the corpora we gathered and their processing in section 3. In section 4 we summarise the linguistic features we implemented and we present the experiments and their results in section 5. The present system, ErreXail, is described in section 6 and in section 7 we compare our work with other studies. Finally, we conclude and outline the future work (section 8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years new methods have been proposed in NLP to assess readability. For English, Si and Callan (2001) use statistical models, specifically unigram language models, combined with traditional readability features like sentence length and number of syllables per word. Coh-Metrix (Graesser et al., 2004) is a tool that analyses multiple characteristics and levels of language-discourse such as narrativity, word concreteness or noun overlap. In its 3.0 version 1 , 108 indices are available. Pitler and Nenkova (2008) use lexical, syntactic, and discourse features, emphasising the importance of discourse features as well. Schwarm and Ostendorf (2005) combine features from statistical language models, parse features, and other traditional features using support vector machines.", "cite_spans": [ { "start": 96, "end": 116, "text": "Si and Callan (2001)", "ref_id": "BIBREF41" }, { "start": 287, "end": 310, "text": "(Graesser et al., 2004)", "ref_id": "BIBREF28" }, { "start": 497, "end": 522, "text": "Pitler and Nenkova (2008)", "ref_id": "BIBREF35" }, { "start": 628, "end": 656, "text": "Schwarm and Ostendorf (2005)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "It is also interesting to take a look at readability systems for other languages. Some readability metrics take into account special characteristics of particular languages. For example, in Chinese the number of strokes is considered (Pang, 2006), in Japanese the different characters (Sato et al., 2008), in German word formation (vor der Br\u00fcck et al., 2008), in French the pass\u00e9 simple (Fran\u00e7ois and Fairon, 2012) and the orthographic neighbourhood (Gala et al., 2013), and in Swedish vocabulary resources (Sj\u00f6holm, 2012; Falkenjack et al., 2013), among many other features. For Portuguese, Coh-Metrix has been adapted, and for Arabic language-specific formulae have been used (Al-Ajlan et al., 2008; Daud et al., 2013). Looking at free-word-order, head-final and morphologically rich languages, Sinha et al. (2012) propose two new measures for Hindi and Bangla based on English formulae. Other systems use only machine learning techniques, e.g.
for Chinese (Chen et al., 2011) .", "cite_spans": [ { "start": 243, "end": 255, "text": "(Pang, 2006)", "ref_id": "BIBREF33" }, { "start": 295, "end": 314, "text": "(Sato et al., 2008)", "ref_id": "BIBREF38" }, { "start": 346, "end": 374, "text": "(vor der Br\u00fcck et al., 2008)", "ref_id": "BIBREF44" }, { "start": 404, "end": 431, "text": "(Fran\u00e7ois and Fairon, 2012)", "ref_id": "BIBREF24" }, { "start": 467, "end": 486, "text": "(Gala et al., 2013)", "ref_id": "BIBREF25" }, { "start": 523, "end": 538, "text": "(Sj\u00f6holm, 2012;", "ref_id": "BIBREF43" }, { "start": 539, "end": 563, "text": "Falkenjack et al., 2013)", "ref_id": "BIBREF20" }, { "start": 691, "end": 714, "text": "(Al-Ajlan et al., 2008;", "ref_id": "BIBREF4" }, { "start": 715, "end": 733, "text": "Daud et al., 2013)", "ref_id": "BIBREF17" }, { "start": 806, "end": 825, "text": "Sinha et al. (2012)", "ref_id": "BIBREF42" }, { "start": 972, "end": 991, "text": "(Chen et al., 2011)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The systems whose motivation is Text Simplification analyse linguistic features of the text and then they use machine learning techniques to build the classifiers. These systems have been created for English (Feng et al., 2010) , Portuguese , Italian (Dell'Orletta et al., 2011) and German (Hancke et al., 2012) . We follow the similar methodology for Basque since we share the same aim.", "cite_spans": [ { "start": 208, "end": 227, "text": "(Feng et al., 2010)", "ref_id": "BIBREF21" }, { "start": 251, "end": 278, "text": "(Dell'Orletta et al., 2011)", "ref_id": "BIBREF18" }, { "start": 290, "end": 311, "text": "(Hancke et al., 2012)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Readability assessment can be focused on different domains such as legal, medical, education and so on. Interesting points about readability are presented in DuBay (2004) and an analysis of the methods and a review of the systems is presented in Benjamin (2012) and Zamanian and Heydari (2012) .", "cite_spans": [ { "start": 158, "end": 170, "text": "DuBay (2004)", "ref_id": null }, { "start": 246, "end": 261, "text": "Benjamin (2012)", "ref_id": "BIBREF12" }, { "start": 266, "end": 293, "text": "Zamanian and Heydari (2012)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Being our aim to build a model to distinguish simple and complex texts and to know which are the most predictive features based on NLP techniques, we needed to collect the corpora. We gathered texts from the web and compiled two corpora. The first corpus, henceforth T-comp, is composed by 200 texts (100 articles and 100 analysis) from the Elhuyar aldizkaria 2 , a monthly journal about science and technology in Basque. T-comp is meant to be the complex corpus. The second corpus, henceforth T-simp, is composed by 200 texts from ZerNola 3 , a website to popularise science among children up to 12 years and the texts we collected are articles. To find texts specially written for children was really challenging. Main statistics about both corpora are presented in Table 1 1. Morpho-syntactic analysis by Morpheus (Alegria et al., 2002) 2. Lemmatisation and syntactic function identification by Eustagger (Aduriz et al., 2003) 3. Multi-words item identification (Alegria et al., 2004a) 4. 
Named entities recognition and classification by Eihera (Alegria et al., 2004b) 5. Shallow parsing by Ixati (Aduriz et al., 2004) 6. Sentence and clause boundaries determination by MuGak (Aranzabe et al., 2013) 7. Apposition identification (Gonzalez-Dios et al., 2013) This preprocess is necessary to perform the analysis of the features presented in section 4.", "cite_spans": [ { "start": 808, "end": 839, "text": "Morpheus (Alegria et al., 2002)", "ref_id": null }, { "start": 908, "end": 929, "text": "(Aduriz et al., 2003)", "ref_id": "BIBREF1" }, { "start": 965, "end": 988, "text": "(Alegria et al., 2004a)", "ref_id": "BIBREF7" }, { "start": 1041, "end": 1071, "text": "Eihera (Alegria et al., 2004b)", "ref_id": null }, { "start": 1100, "end": 1121, "text": "(Aduriz et al., 2004)", "ref_id": "BIBREF2" }, { "start": 1173, "end": 1202, "text": "MuGak (Aranzabe et al., 2013)", "ref_id": null }, { "start": 1232, "end": 1260, "text": "(Gonzalez-Dios et al., 2013)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 768, "end": 775, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpora", "sec_num": "3" }, { "text": "In this section we summarise the linguistic features implemented to analyse the complexity of the texts. We distinguish different groups of features: global, lexical, morphological, morpho-syntactic, syntactic and pragmatic features. There are in total 94 features. Most of the features we present have already been included in systems for other languages but others have been specially considered for Basque.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic features", "sec_num": "4" }, { "text": "Global features take into account the document as whole and serve to give an overview of the texts. They are presented in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Global features", "sec_num": "4.1" }, { "text": "Average of letters per word These features are based on classical readability formulae and in the criteria taken on the simplification study (Gonzalez-Dios, 2011), namely the sentence length and the clause number per sentence. They are also included in IAS (Aldabe et al., 2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Average of words per sentence Average of clauses per sentence", "sec_num": null }, { "text": "Lexical features are based on lemmas. We calculate the ratios of all the POS tags and different kinds of abbreviations and symbols. We concentrate on particular types of substantives and verbs as well. Part of theses ratios are shown in Table 3 . In total there are 39 ratios in this group.", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 244, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Lexical features", "sec_num": "4.2" }, { "text": "Unique lemmas / all the lemmas Each POS / all the words Proper Nouns / all the nouns Named entities / all the nouns Verbal nouns / all the verbs Modal verbs / all the verbs Causative verbs / all the verbs Intransitive verbs with one arg. (Nor verbs) / all the verbs Intransitive verbs with two arg. (Nor-Nori verbs) / all the verbs Transitive verbs with two arg. (Nor-Nork verbs) / all the verbs Transitive verbs with three arg. 
(Nor-Nori-Nork) verbs / all the verbs Acronyms / all the words Abbreviations / all the words Symbols / all the words Among those features, we want to point out the causative verbs and the intransitive or transitive verbs with one, two or three arguments (arg.) as features related to Basque. Causative verbs are verbs with the suffix -arazi and they are usually translated as \"to make someone + verb\", e.g. edanarazi, that stands for \"to make someone drink\". Other factitive verbs are translated without using that paraphrase like jakinarazi that means \"to notify\", lit. \"to make know\". The transitivity classification is due to the fact that Basque verb agrees with three grammatical cases (ergative Nork, absolutive Nor and dative Nori) and therefore verbs are grouped according to the arguments they take in Basque grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratios", "sec_num": null }, { "text": "Morphological features analyse the different ways lemmas can be realised. These features are summarised in Table 4 and there are 24 ratios in total.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 114, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Morphological features", "sec_num": "4.3" }, { "text": "Each verb aspect / all the verbs Each verb tense / all the verbs Each verb mood / all the verbs Words with ellipsis / all the words Each type of words with ellipsis / all the words with ellipsis Basque has 18 case endings (absolutive, ergative, inessive, allative, genitive...), that is, 18 different endings can be attached to the end of the noun phrases. For example, if we attach the inessive -n to the noun phrase etxea \"the house\", we get etxean \"at home\". The verb features considered the forms obtained with the inflection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Each case ending / all the case endings", "sec_num": null }, { "text": "Verb morphology is very rich in Basque as well. The aspect is attached to the part of the verb which contains the lexical information. There are 4 aspects: puntual (aoristic), perfective, imperfective and future aspect. Verb tenses are usually marked in the auxiliary verb and there are four tenses: present, past, irreal and archaic future 4 . The verbal moods are indicative, subjunctive, imperative and potential. The latter is used to express permissibility or possible circumstances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Each case ending / all the case endings", "sec_num": null }, { "text": "Due to the typology of Basque, ellipsis 5 is a normal phenomenon and ellipsis can be even found within a word (verbs, nouns, adjective...); for instance, dioguna which means \"what we say\". This kind of ellipsis occurs e.g. in English, Spanish, French and German as well but in these languages it is realised as a sentence; but it is expressed only by a word in Basque.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Each case ending / all the case endings", "sec_num": null }, { "text": "Morpho-syntactic features are based on the shallow parsing (chunks 6 ) and in the apposition detection (appositions). 
These features are presented in Table 5.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Morpho-syntactic features", "sec_num": "4.4" }, { "text": "Noun phrases (chunks) / all the phrases; Noun phrases (chunks) / all the sentences; Verb phrases / all the phrases; Appositions / all the phrases; Appositions / all the noun phrases (chunks). Contrary to the features presented so far, the morpho-syntactic features mostly take into account units larger than a single word. Regarding apposition, there are two types in Basque (Gonzalez-Dios et al., 2013), but we consider all instances together in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ratios", "sec_num": null }, { "text": "Syntactic features consider the average number of subordinate clauses and the types of subordinate clauses. They are outlined in Table 6 and there are 10 ratios in total. The types of adverbial clauses are temporal, causal, conditional, modal, concessive, consecutive and modal-temporal. The latter is a clause type which expresses manner and simultaneity of the action with respect to the main clause. In this first approach we decided not to use dependency-based features like dependency depth or distance from dependent to head, because dependency parsing is time-consuming and slows down the preprocessing. Moreover, the importance of syntax is under discussion: Petersen and Ostendorf (2009) find that syntax does not have much influence, while Sj\u00f6holm (2012) shows that dependencies are not necessary. Pitler and Nenkova (2008) point out the importance of syntax, but Dell'Orletta et al. (2011) demonstrate that for document classification reliable results can be obtained without syntax. In any case, syntax is necessary for sentence classification.", "cite_spans": [ { "start": 655, "end": 684, "text": "Petersen and Ostendorf (2009)", "ref_id": "BIBREF34" }, { "start": 741, "end": 755, "text": "Sj\u00f6holm (2012)", "ref_id": "BIBREF43" }, { "start": 799, "end": 824, "text": "Pitler and Nenkova (2008)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Syntactic features", "sec_num": "4.5" }, { "text": "In our case, the pragmatic features we examine are cohesive devices. These features are summed up in Table 7. There are 12 ratios in total. ", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 113, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Pragmatic features", "sec_num": "4.6" }, { "text": "We performed two experiments, the first to build a classifier and the second to identify the most predictive features. For both tasks we used the WEKA tool (Hall et al., 2009).", "cite_spans": [ { "start": 170, "end": 189, "text": "(Hall et al., 2009)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "In the first experiment we ran five classifiers and evaluated their performance: Random Forest (Breiman, 2001), the J48 decision tree (Quinlan, 1993), the K-Nearest Neighbour classifier IBk (Aha et al., 1991), Na\u00efve Bayes (John and Langley, 1995) and a Support Vector Machine with the SMO algorithm (Platt, 1998).
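As a purely illustrative sketch of this kind of experimental setup (the authors used WEKA; the code below substitutes scikit-learn with rough counterparts of those classifiers and a hypothetical feature matrix, so it is an assumption-laden approximation rather than the paper's implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per document, one column per ratio feature (94 in the paper),
# label 0 = simple (T-simp), 1 = complex (T-comp).
rng = np.random.default_rng(0)
X = rng.random((400, 94))
y = np.repeat([0, 1], 200)

# Rough scikit-learn counterparts of the WEKA classifiers named above.
classifiers = {
    "RandomForest": RandomForestClassifier(),
    "J48-like tree": DecisionTreeClassifier(),
    "IBk-like kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "SMO-like SVM": SVC(kernel="linear"),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

In such a setup each document is represented by its vector of ratio features, and the two corpora provide the simple/complex labels.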
We used 10-fold cross-validation, as has been done in other studies.", "cite_spans": [ { "start": 115, "end": 130, "text": "(Breiman, 2001)", "ref_id": "BIBREF13" }, { "start": 155, "end": 170, "text": "(Quinlan, 1993)", "ref_id": "BIBREF37" }, { "start": 198, "end": 216, "text": "(Aha et al., 1991)", "ref_id": "BIBREF3" }, { "start": 231, "end": 255, "text": "(John and Langley, 1995)", "ref_id": "BIBREF32" }, { "start": 302, "end": 315, "text": "(Platt, 1998)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Taking into account all the features presented in section 4, the best results were obtained using SMO: 89.50 % of the instances were correctly classified. The F-measure was 0.899 for complex texts and 0.891 for simple texts, and the MAE was 0.105. The results using all the features are shown in Table 8. We also classified using each feature type on its own, and the best results were obtained using only lexical features (90.75 %). The classification results for each feature group are presented in Table 9. We only present the classifiers with the best results, which are marked in bold. Table 9: Classification results of each feature type", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 319, "text": "Table 8", "ref_id": "TABREF10" }, { "start": 525, "end": 532, "text": "Table 9", "ref_id": null }, { "start": 621, "end": 628, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We also tried different combinations of feature types, which improved the accuracy. The best combination was the one formed by lexical, morphological, morpho-syntactic and syntactic features, which obtains 93.50 % with SMO. The best results are shown in Table 10. Combining the feature types, SMO is the best classifier in most of the cases, but Random Forest outperforms it when there are no lexical features.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 264, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "In the second experiment, we analysed which were the most predictive linguistic features in each group. We used Weka's Information Gain (InfoGain AttributeEval) to create the ranking and we ran it for each feature group. In Table 11 we present the 10 most predictive features taking all the feature groups into account.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 232, "text": "Table 11", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The results of this experiment are interesting for linguistic studies on Text Simplification, since they show which phenomena we should work on next. In this experiment we also notice the relevance of the lexical features and that syntactic features are not so decisive in document classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "The features with relevance 0 have been analysed as well. Some of them are, for example,
the ratio of the inessive among all the case endings, the ratio of the indicative mood among all the verbal moods, the ratio of the adjectives among all the words and the ratio of the present tense among all the verbal tenses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We also performed a classification experiment using only the top 10 features; here J48 is the best classifier (and achieves its best performance). These results are presented in Table 12.", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 173, "text": "Table 12", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "To sum up, our best results are obtained using a combination of feature types (Lex+Morph+Morph-sint+Sintax). We also want to highlight the importance of lexical features, since on their own they outperform the full feature set and 5 of them are among the top ten features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "ErreXail, the readability assessment system for Basque, has a three-stage architecture (Figure 1).", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 84, "text": "(Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "System overview", "sec_num": "6" }, { "text": "So, given a text written in Basque, we follow the next steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System overview", "sec_num": "6" }, { "text": "1. The linguistic analysis will be carried out, that is, morpho-syntactic tagging, lemmatisation, syntactic function identification, named entity recognition, shallow parsing, sentence and clause boundary determination and apposition identification will be performed. We will use the tools presented in section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System overview", "sec_num": "6" }, { "text": "Feature and group, with relevance: Proper nouns / common nouns ratio (Lex.) 0.2744; Appositions / noun phrases ratio (Morpho-synt.) 0.2529; Appositions / all phrases ratio (Morpho-synt.) 0.2529; Named entities / common nouns ratio (Lex.) 0.2436; Unique lemmas / all the lemmas ratio (Lex.) 0.2394; Acronyms / all the words ratio (Lex.) 0.2376; Causative verbs / all the verbs ratio (Lex.) 0.2099; Modal-temporal clauses / subordinate clauses ratio (Synt.) 0.2056; Destinative case endings / all the case endings ratio (Morph.) 0.1968; Connectors of clarification / all the connectors ratio (Prag.) 0.1957. Table 12: Classification results using the top 10 features. Figure 1: The architecture of the system. 2. Texts will be analysed according to the features and measures presented in section 4.", "cite_spans": [ { "start": 62, "end": 68, "text": "(Lex.)", "ref_id": null }, { "start": 109, "end": 123, "text": "(Morpho-synt.)", "ref_id": null }, { "start": 163, "end": 177, "text": "(Morpho-synt.)", "ref_id": null }, { "start": 221, "end": 227, "text": "(Lex.)", "ref_id": null }, { "start": 434, "end": 441, "text": "(Synt.)", "ref_id": null }, { "start": 503, "end": 511, "text": "(Morph.)", "ref_id": null } ], "ref_spans": [ { "start": 589, "end": 597, "text": "Table 12", "ref_id": "TABREF1" }, { "start": 649, "end": 657, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System overview", "sec_num": "6" }, { "text": "3. We will use the SMO Support Vector Machine as the classification model, since it was the best classifier in the experiments reported in section 5.
To speed up the process for Text Simplification, we will analyse only the combination of lexical, morphological, morpho-syntactic and syntactic (Lex+Morph+Morph-sint+Sintax) features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System overview", "sec_num": "6" }, { "text": "Although the first application of this system will be the preprocessing of texts for the Basque TS system, the system we present in this paper is independent and can be used for any other application. We want to remark that this study, as it is based on other languages, could be applied to any other language as well provided that the text could be analysed similar to us.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System overview", "sec_num": "6" }, { "text": "The task of text classification has been carried out by several studies before. Due to our small corpus we were only able to discriminate between complex and simple texts like Dell'Orletta et al. (2011) and Hancke et al. (2012) , other studies have classified more complexity levels (Schwarm and Ostendorf, 2005; Fran\u00e7ois and Fairon, 2012) . In this section we are going to compare our system with other systems that share our same goal, namely to know which texts should be simplified.", "cite_spans": [ { "start": 176, "end": 202, "text": "Dell'Orletta et al. (2011)", "ref_id": "BIBREF18" }, { "start": 207, "end": 227, "text": "Hancke et al. (2012)", "ref_id": "BIBREF31" }, { "start": 283, "end": 312, "text": "(Schwarm and Ostendorf, 2005;", "ref_id": "BIBREF40" }, { "start": 313, "end": 339, "text": "Fran\u00e7ois and Fairon, 2012)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Comparing our experiment with studies that classify two grades and use SMO, Hancke et al. (2012) obtain an accuracy of 89.7 % with a 10 fold cross-validation. These results are very close to ours, although their data compiles 4603 documents and ours 400. According to the feature type, their best type is the morphological, obtaining 85.4 % of accuracy. Combining lexical, language model and morphological features they obtain 89.4 % of accuracy. To analyse their 10 most predictive features, they use Information Gain as well but we do not share any feature in common.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Dell'Orletta et al. (2011) perform three different experiments but only their first experiment is similar to our work. For that classification experiment they use 638 documents and follow a 5 fold crossvalidation process of the Euclidian distance between vectors. Taking into account all the features the accuracy of their system is 97.02 %. However, their best performance is 98.12 % when they only use the combination of raw, lexical and morpho-syntactic features. assess the readability of the texts according to three levels: rudimentary, basic and advanced. In total they compile 592 texts. Using SMO, 10 fold cross-validation and standard classification, they obtain 0.276 MAE taking into account all the features. The F -measure for original texts is 0.913, for natural simplification 0.483 and for strong simplification 0.732. They experiment with feature types as well but they obtain their best results using all the features. Among their highly correlated features they present the incidence of apposition in second place as we do here. 
We do not have any other feature in common.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Among other readability assessment systems whose motivation is TS, Feng et al. (2010) use LIBSVM (Chang and Lin, 2001) and Logistic Regression from WEKA with 10-fold cross-validation. They assess the readability of graded texts and obtain as best results 59.63 % with LIBSVM and 57.59 % with Logistic Regression. Since they assess different grades and use other classifiers, a direct comparison with our results is not possible, but we find that we share predictive features: they also found that named entity density and nouns have predictive power.", "cite_spans": [ { "start": 59, "end": 77, "text": "Feng et al. (2010)", "ref_id": "BIBREF21" }, { "start": 89, "end": 110, "text": "(Chang and Lin, 2001)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "In this paper we have presented the first readability assessment system for the Basque language. We have implemented 94 ratios based on linguistic features, some similar to those used for other languages and others specially defined for Basque, and we have built a classifier which is able to discriminate between difficult and easy texts. We have also determined which are the most predictive features. From our experiments we conclude that using only lexical features or a combination of feature types we obtain better results than using all the features. Moreover, we deduce that we do not need to use time-consuming resources like dependency parsing or big corpora to obtain good results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and perspectives", "sec_num": "8" }, { "text": "In the future, we could implement new features like word formation or word order, based both on other languages and on neurolinguistic studies that are being carried out for Basque. Other machine learning techniques can also be used, e.g. language models, and if we obtain a bigger corpus or a graded one, we could even try to differentiate more reading levels. We also envisage readability assessment at sentence level in the near future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and perspectives", "sec_num": "8" }, { "text": "http://cohmetrix.memphis.edu/cohmetrixpr/cohmetrix3.html (accessed January, 2014) 2 http://aldizkaria.elhuyar.org/ (accessed January, 2014) 3 http://www.zernola.net/ (accessed January, 2014)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The archaic future, which we also take into account, is not used anymore, but it can be found in old texts. Nowadays, the aspect is used to express actions in the future. 5 Basque is a pro-drop language and it is very normal to omit the subject, the object and the indirect object because they are marked in the verb. We do not treat this kind of ellipsis in the present work. 6 Chunks are a continuum of elements with a head and syntactic sense that do not overlap (Abney, 1991).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Itziar Gonzalez-Dios's work is funded by a PhD grant from the Basque Government. We thank Lorea Arakistain and I\u00f1aki San Vicente from Elhuyar Fundazioa for providing the corpora. We also want to thank Olatz Arregi for her comments. 
This research was supported by the the Basque Government (IT344-10), and the Spanish Ministry of Science and Innovation, Hibrido Sint project (MICINN, TIN2010-202181).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Parsing by Chunks", "authors": [ { "first": "P", "middle": [], "last": "Steven", "suffix": "" }, { "first": "", "middle": [], "last": "Abney", "suffix": "" } ], "year": 1991, "venue": "Principle-Based Parsing: Computation and Psycholinguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven P. Abney. 1991. Parsing by Chunks. In Robert C. Berwick, Steven P. Abney, and Carol Tenny, editors, Principle-Based Parsing: Computation and Psycholinguistics. Kluwer Academic.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Finite State Applications for Basque", "authors": [ { "first": "Itziar", "middle": [], "last": "Aduriz", "suffix": "" }, { "first": "Izaskun", "middle": [], "last": "Aldezabal", "suffix": "" }, { "first": "I\u00f1aki", "middle": [], "last": "Alegria", "suffix": "" }, { "first": "Jose", "middle": [ "Mari" ], "last": "Arriola", "suffix": "" }, { "first": "Arantza", "middle": [], "last": "D\u00edaz De Ilarraza", "suffix": "" }, { "first": "Nerea", "middle": [], "last": "Ezeiza", "suffix": "" }, { "first": "Koldo", "middle": [], "last": "Gojenola", "suffix": "" } ], "year": 2003, "venue": "EACL'2003 Workshop on Finite-State Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itziar Aduriz, Izaskun Aldezabal, I\u00f1aki Alegria, Jose Mari Arriola, Arantza D\u00edaz de Ilarraza, Nerea Ezeiza, and Koldo Gojenola. 2003. Finite State Applications for Basque. In EACL'2003 Workshop on Finite-State Methods in Natural Language Processing., pages 3-11.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A cascaded syntactic analyser for Basque", "authors": [ { "first": "Itziar", "middle": [], "last": "Aduriz", "suffix": "" }, { "first": "Jose", "middle": [ "Mari" ], "last": "Mar\u00eda Jes\u00fas Aranzabe", "suffix": "" }, { "first": "Arantza", "middle": [], "last": "Arriola", "suffix": "" }, { "first": "Koldo", "middle": [], "last": "D\u00edaz De Ilarraza", "suffix": "" }, { "first": "Maite", "middle": [], "last": "Gojenola", "suffix": "" }, { "first": "Larraitz", "middle": [], "last": "Oronoz", "suffix": "" }, { "first": "", "middle": [], "last": "Uria", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "", "issue": "", "pages": "124--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itziar Aduriz, Mar\u00eda Jes\u00fas Aranzabe, Jose Mari Arriola, Arantza D\u00edaz de Ilarraza, Koldo Gojenola, Maite Oronoz, and Larraitz Uria. 2004. A cascaded syntactic analyser for Basque. Computational Linguistics and Intelligent Text Processing, pages 124-134.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Instance-based learning algorithms", "authors": [ { "first": "David", "middle": [ "W" ], "last": "Aha", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Kibler", "suffix": "" }, { "first": "Marc", "middle": [ "C" ], "last": "Albert", "suffix": "" } ], "year": 1991, "venue": "Machine Learning", "volume": "6", "issue": "", "pages": "37--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "David W. 
Aha, Dennis Kibler, and Marc C. Albert. 1991. Instance-based learning algorithms. Machine Learning, 6:37-66.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Towards the development of an automatic readability measurements for Arabic language", "authors": [ { "first": "", "middle": [], "last": "Amani A Al-Ajlan", "suffix": "" }, { "first": "S", "middle": [], "last": "Hend", "suffix": "" }, { "first": "A", "middle": [], "last": "Al-Khalifa", "suffix": "" }, { "first": "", "middle": [], "last": "Al-Salman", "suffix": "" } ], "year": 2008, "venue": "Third International Conference on", "volume": "", "issue": "", "pages": "506--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amani A Al-Ajlan, Hend S Al-Khalifa, and A Al-Salman. 2008. Towards the development of an automatic readability measurements for Arabic language. In Digital Information Management, 2008. ICDIM 2008. Third International Conference on, pages 506-511. IEEE.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic Exercise Generation in an Essay Scoring System", "authors": [ { "first": "Montse", "middle": [], "last": "Itziar Aldabe", "suffix": "" }, { "first": "", "middle": [], "last": "Maritxalar", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 20th International Conference on Computers in Education", "volume": "", "issue": "", "pages": "671--673", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itziar Aldabe, Montse Maritxalar, Olatz Perez de Viaspre, and Uria Larraitz. 2012. Automatic Exercise Generation in an Essay Scoring System. In Proceedings of the 20th International Conference on Computers in Education, pages 671-673.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Robustness and customisation in an analyser/lemmatiser for Basque", "authors": [ { "first": "", "middle": [], "last": "I\u00f1aki Alegria", "suffix": "" }, { "first": "Aitzol", "middle": [], "last": "Mar\u00eda Jes\u00fas Aranzabe", "suffix": "" }, { "first": "Nerea", "middle": [], "last": "Ezeiza", "suffix": "" }, { "first": "Ruben", "middle": [], "last": "Ezeiza", "suffix": "" }, { "first": "", "middle": [], "last": "Urizar", "suffix": "" } ], "year": 2002, "venue": "LREC-2002 Customizing knowledge in NLP applications workshop", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "I\u00f1aki Alegria, Mar\u00eda Jes\u00fas Aranzabe, Aitzol Ezeiza, Nerea Ezeiza, and Ruben Urizar. 2002. Robustness and cus- tomisation in an analyser/lemmatiser for Basque. In LREC-2002 Customizing knowledge in NLP applications workshop, pages 1-6, Las Palmas de Gran Canaria, May.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Representation and treatment of multiword expressions in Basque", "authors": [ { "first": "Olatz", "middle": [], "last": "I\u00f1aki Alegria", "suffix": "" }, { "first": "Xabier", "middle": [], "last": "Ansa", "suffix": "" }, { "first": "Nerea", "middle": [], "last": "Artola", "suffix": "" }, { "first": "Koldo", "middle": [], "last": "Ezeiza", "suffix": "" }, { "first": "Ruben", "middle": [], "last": "Gojenola", "suffix": "" }, { "first": "", "middle": [], "last": "Urizar", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop on Multiword Expressions: Integrating Processing", "volume": "", "issue": "", "pages": "48--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "I\u00f1aki Alegria, Olatz Ansa, Xabier Artola, Nerea Ezeiza, Koldo Gojenola, and Ruben Urizar. 2004a. 
Repre- sentation and treatment of multiword expressions in Basque. In Proceedings of the Workshop on Multiword Expressions: Integrating Processing, pages 48-55. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Design and development of a named entity recognizer for an agglutinative language", "authors": [ { "first": "Olatz", "middle": [], "last": "I\u00f1aki Alegria", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Arregi", "suffix": "" }, { "first": "Nerea", "middle": [], "last": "Balza", "suffix": "" }, { "first": "Izaskun", "middle": [], "last": "Ezeiza", "suffix": "" }, { "first": "Ruben", "middle": [], "last": "Fernandez", "suffix": "" }, { "first": "", "middle": [], "last": "Urizar", "suffix": "" } ], "year": 2004, "venue": "First International Joint Conference on NLP (IJCNLP-04). Workshop on Named Entity Recognition", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I\u00f1aki Alegria, Olatz Arregi, Irene Balza, Nerea Ezeiza, Izaskun Fernandez, and Ruben Urizar. 2004b. Design and development of a named entity recognizer for an agglutinative language. In First International Joint Conference on NLP (IJCNLP-04). Workshop on Named Entity Recognition.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Readability assessment for text simplification", "authors": [ { "first": "Sandra", "middle": [], "last": "Alu\u00edsio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Gasperin", "suffix": "" }, { "first": "Carolina", "middle": [], "last": "Scarton", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandra Alu\u00edsio, Lucia Specia, Caroline Gasperin, and Carolina Scarton. 2010. Readability assessment for text simplification. In Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-9. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "First Approach to Automatic Text Simplification in Basque", "authors": [ { "first": "Arantza", "middle": [], "last": "Mar\u00eda Jes\u00fas Aranzabe", "suffix": "" }, { "first": "Itziar", "middle": [], "last": "D\u00edaz De Ilarraza", "suffix": "" }, { "first": "", "middle": [], "last": "Gonzalez-Dios", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Natural Language Processing for Improving Textual Accessibility (NLP4ITA) workshop (LREC 2012)", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mar\u00eda Jes\u00fas Aranzabe, Arantza D\u00edaz de Ilarraza, and Itziar Gonzalez-Dios. 2012. First Approach to Automatic Text Simplification in Basque. 
In Luz Rello and Horacio Saggion, editors, Proceedings of the Natural Language Processing for Improving Textual Accessibility (NLP4ITA) workshop (LREC 2012), pages 1-8.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Transforming Complex Sentences using Dependency Trees for Automatic Text Simplification in Basque", "authors": [ { "first": "Aranzabe", "middle": [], "last": "Mar\u00eda Jes\u00fas", "suffix": "" } ], "year": 2013, "venue": "Procesamiento de Lenguaje Natural", "volume": "50", "issue": "", "pages": "61--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mar\u00eda Jes\u00fas Aranzabe, Arantza D\u00edaz de Ilarraza, and Itziar Gonzalez-Dios. 2013. Transforming Complex Sen- tences using Dependency Trees for Automatic Text Simplification in Basque. Procesamiento de Lenguaje Natural, 50:61-68.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Reconstructing readability: Recent developments and recommendations in the analysis of text difficulty", "authors": [ { "first": "Rebekah", "middle": [ "George" ], "last": "Benjamin", "suffix": "" } ], "year": 2012, "venue": "Educational Psychology Review", "volume": "24", "issue": "1", "pages": "63--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebekah George Benjamin. 2012. Reconstructing readability: Recent developments and recommendations in the analysis of text difficulty. Educational Psychology Review, 24(1):63-88.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Random Forests", "authors": [ { "first": "Leo", "middle": [], "last": "Breiman", "suffix": "" } ], "year": 2001, "venue": "Machine Learning", "volume": "45", "issue": "", "pages": "5--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leo Breiman. 2001. Random Forests. Machine Learning, 45(1):5-32.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Readability Revisited: The New DaleChall Readability Formula", "authors": [ { "first": "Jeanne", "middle": [], "last": "Sternlicht", "suffix": "" }, { "first": "Chall", "middle": [], "last": "", "suffix": "" }, { "first": "Edgar", "middle": [], "last": "Dale", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeanne Sternlicht Chall and Edgar Dale. 1995. Readability Revisited: The New DaleChall Readability Formula. Brookline Books, Cambridge, MA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Libsvm -a library for support vector machines. The Weka classifier works with version 2.82 of LIBSVM", "authors": [ { "first": "Chih-Chung", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2001. Libsvm -a library for support vector machines. 
The Weka classifier works with version 2.82 of LIBSVM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Chinese readability assessment using TF-IDF and SVM", "authors": [ { "first": "Yaw-Huei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yi-Han", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Yu-Ta", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2011, "venue": "Machine Learning and Cybernetics (ICMLC), 2011 International Conference on", "volume": "2", "issue": "", "pages": "705--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaw-Huei Chen, Yi-Han Tsai, and Yu-Ta Chen. 2011. Chinese readability assessment using TF-IDF and SVM. In Machine Learning and Cybernetics (ICMLC), 2011 International Conference on, volume 2, pages 705-710. IEEE.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A Corpus-Based Readability Formula for Estimate of Arabic Texts Reading Difficulty", "authors": [ { "first": "Haslina", "middle": [], "last": "Nuraihan Mat Daud", "suffix": "" }, { "first": "Normaziah Abdul", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "", "middle": [], "last": "Aziz", "suffix": "" } ], "year": 2013, "venue": "World Applied Sciences Journal", "volume": "21", "issue": "", "pages": "168--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nuraihan Mat Daud, Haslina Hassan, and Normaziah Abdul Aziz. 2013. A Corpus-Based Readability Formula for Estimate of Arabic Texts Reading Difficulty. World Applied Sciences Journal, 21:168-173.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "READ-IT: assessing readability of Italian texts with a view to text simplification", "authors": [ { "first": "Felice", "middle": [], "last": "Dell'orletta", "suffix": "" }, { "first": "Simonetta", "middle": [], "last": "Montemagni", "suffix": "" }, { "first": "Giulia", "middle": [], "last": "Venturi", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies, SLPAT '11", "volume": "", "issue": "", "pages": "73--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felice Dell'Orletta, Simonetta Montemagni, and Giulia Venturi. 2011. READ-IT: assessing readability of Ital- ian texts with a view to text simplification. In Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies, SLPAT '11, pages 73-83, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Features indicating readability in Swedish text", "authors": [ { "first": "Johan", "middle": [], "last": "Falkenjack", "suffix": "" }, { "first": "Arne", "middle": [], "last": "Katarina Heimann M\u00fchlenbock", "suffix": "" }, { "first": "", "middle": [], "last": "J\u00f6nsson", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013)", "volume": "", "issue": "", "pages": "27--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Falkenjack, Katarina Heimann M\u00fchlenbock, and Arne J\u00f6nsson. 2013. Features indicating readability in Swedish text. 
In Proceedings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013), pages 27-40.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A comparison of features for automatic readability assessment", "authors": [ { "first": "Lijun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Jansche", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Huenerfauth", "suffix": "" }, { "first": "No\u00e9mie", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", "volume": "", "issue": "", "pages": "276--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lijun Feng, Martin Jansche, Matt Huenerfauth, and No\u00e9mie Elhadad. 2010. A comparison of features for auto- matic readability assessment. In Proceedings of the 23rd International Conference on Computational Linguis- tics: Posters, pages 276-284. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Automatic Readability Assessment for People with Intellectual Disabilities", "authors": [ { "first": "Lijun", "middle": [], "last": "Feng", "suffix": "" } ], "year": 2009, "venue": "SIGACCESS Access. Comput", "volume": "", "issue": "93", "pages": "84--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lijun Feng. 2009. Automatic Readability Assessment for People with Intellectual Disabilities. SIGACCESS Access. Comput., (93):84-91, January.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A new readability yardstick", "authors": [ { "first": "Rudolph", "middle": [], "last": "Flesch", "suffix": "" } ], "year": 1948, "venue": "Journal of applied psychology", "volume": "32", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "An AI readability formula for French as a foreign language", "authors": [ { "first": "Thomas", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "C\u00e9drick", "middle": [], "last": "Fairon", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "466--477", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Fran\u00e7ois and C\u00e9drick Fairon. 2012. An AI readability formula for French as a foreign language. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Compu- tational Natural Language Learning, pages 466-477. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Towards a French lexicon with difficulty measures: NLP helpnig to bridge the gap between traditional dictionaries and specialized lexicons", "authors": [ { "first": "N\u00faria", "middle": [], "last": "Gala", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Fran\u00e7ois", "suffix": "" }, { "first": "C\u00e9drick", "middle": [], "last": "Fairon", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "132--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "N\u00faria Gala, Thomas Fran\u00e7ois, and C\u00e9drick Fairon. 2013. 
Towards a French lexicon with difficulty measures: NLP helpnig to bridge the gap between traditional dictionaries and specialized lexicons. In I. Kosem, J. Kallas, P. Gantar, S. Krek, M. Langemets, and M. Tuulik, editors, Electronic lexicography in the 21st century: thinking outside the paper. Proceedings of the eLex 2013 conference, 17-19 October 2013, Tallinn, Estonia., pages 132- 151, Ljubljana/Tallinn. Trojina, Institute for Applied Slovene Studies/Eesti Keele Instituut.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Detecting Apposition for Text Simplification in Basque", "authors": [ { "first": "Itziar", "middle": [], "last": "Gonzalez-Dios", "suffix": "" }, { "first": "Mar\u00eda", "middle": [], "last": "Jes\u00fas Aranzabe", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "", "issue": "", "pages": "513--524", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itziar Gonzalez-Dios, Mar\u00eda Jes\u00fas Aranzabe, Arantza D\u00edaz de Ilarraza, and Ander Soraluze. 2013. Detecting Apposition for Text Simplification in Basque. In Computational Linguistics and Intelligent Text Processing, pages 513-524. Springer.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Euskarazko egitura sintaktikoen azterketa testuen sinplifikazio automatikorako: Aposizioak, erlatibozko perpausak eta denborazko perpausak", "authors": [ { "first": "Itziar", "middle": [], "last": "Gonzalez-Dios", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Itziar Gonzalez-Dios. 2011. Euskarazko egitura sintaktikoen azterketa testuen sinplifikazio automatikorako: Apo- sizioak, erlatibozko perpausak eta denborazko perpausak. Master's thesis, University of the Basque Country (UPV/EHU).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Coh-Metrix: Analysis of text on cohesion and language. Behavior Research Methods", "authors": [ { "first": "Arthur", "middle": [ "C" ], "last": "Graesser", "suffix": "" }, { "first": "Danielle", "middle": [ "S" ], "last": "Mcnamara", "suffix": "" }, { "first": "Max", "middle": [ "M" ], "last": "Louwerse", "suffix": "" }, { "first": "Zhiqiang", "middle": [], "last": "Cai", "suffix": "" } ], "year": 2004, "venue": "", "volume": "36", "issue": "", "pages": "193--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur C. Graesser, Danielle S. McNamara, Max M. Louwerse, and Zhiqiang Cai. 2004. Coh-Metrix: Analysis of text on cohesion and language. Behavior Research Methods, 36(2):193-202.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The technique of clear writing", "authors": [ { "first": "Robert", "middle": [], "last": "Gunning", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Gunning. 1968. The technique of clear writing. 
McGraw-Hill New York.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The WEKA data mining software: an update", "authors": [ { "first": "Mark", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Holmes", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Pfahringer", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Reutemann", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2009, "venue": "ACM SIGKDD Explorations Newsletter", "volume": "11", "issue": "1", "pages": "10--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10-18.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Readability Classification for German using lexical, syntactic, and morphological features", "authors": [ { "first": "Julia", "middle": [], "last": "Hancke", "suffix": "" }, { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2012, "venue": "COLING 2012: Technical Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hancke, Sowmya Vajjala, and Detmar Meurers. 2012. Readability Classification for German using lexical, syntactic, and morphological features. In COLING 2012: Technical Papers, page 10631080.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Estimating Continuous Distributions in Bayesian Classifiers", "authors": [ { "first": "H", "middle": [], "last": "George", "suffix": "" }, { "first": "Pat", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Langley", "suffix": "" } ], "year": 1995, "venue": "Eleventh Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "338--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "George H. John and Pat Langley. 1995. Estimating Continuous Distributions in Bayesian Classifiers. In Eleventh Conference on Uncertainty in Artificial Intelligence, pages 338-345, San Mateo. Morgan Kaufmann.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Chinese Readability Analysis and its Applications on the Internet", "authors": [ { "first": "Pang", "middle": [], "last": "Lau Tak", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lau Tak Pang. 2006. Chinese Readability Analysis and its Applications on the Internet. Ph.D. thesis, The Chinese University of Hong Kong.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A machine learning approach to reading level assessment", "authors": [ { "first": "Sarah", "middle": [ "E" ], "last": "Petersen", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2009, "venue": "Computer Speech & Language", "volume": "23", "issue": "1", "pages": "89--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah E. Petersen and Mari Ostendorf. 2009. A machine learning approach to reading level assessment. 
Computer Speech & Language, 23(1):89-106.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Revisiting readability: A unified framework for predicting text quality", "authors": [ { "first": "Emily", "middle": [], "last": "Pitler", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "186--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 186-195. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Fast Training of Support Vector Machines using Sequential Minimal Optimization", "authors": [ { "first": "C", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Platt", "suffix": "" } ], "year": 1998, "venue": "Advances in Kernel Methods-Support Vector Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John C. Platt. 1998. Fast Training of Support Vector Machines using Sequential Minimal Optimization. In Bernhard Schlkopf, Christopher J. C Burges, and Alexander J. Smola, editors, Advances in Kernel Methods- Support Vector Learning. MIT Press.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "C4.5: Programs for Machine Learning", "authors": [ { "first": "Ross", "middle": [], "last": "Quinlan", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Automatic Assessment of Japanese Text Readability Based on a Textbook Corpus", "authors": [ { "first": "Satoshi", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Suguru", "middle": [], "last": "Matsuyoshi", "suffix": "" }, { "first": "Yohsuke", "middle": [], "last": "Kondoh", "suffix": "" }, { "first": ";", "middle": [], "last": "", "suffix": "" }, { "first": "Khalid", "middle": [], "last": "Choukri", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Maegaard", "suffix": "" }, { "first": "Benteand", "middle": [], "last": "Mariani", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satoshi Sato, Suguru Matsuyoshi, and Yohsuke Kondoh. 2008. Automatic Assessment of Japanese Text Readabil- ity Based on a Textbook Corpus. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Joseph Maegaard, Benteand Mariani, Jan Odijk, Stelios Piperidis, and Daniel Tapias, editors, Proceedings of the Sixth Interna- tional Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco, may. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "An\u00e1lise da Inteligibilidade de textos via ferramentas de Processamento de L\u00edngua Natural: adaptando as m\u00e9tricas do Coh-Metrix para o", "authors": [ { "first": "Carolina", "middle": [ "Evaristo" ], "last": "Scarton", "suffix": "" }, { "first": "Sandra", "middle": [ "Maria" ], "last": "Alu\u00edsio", "suffix": "" } ], "year": 2010, "venue": "Portugu\u00eas. Linguam\u00e1tica", "volume": "2", "issue": "1", "pages": "45--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carolina Evaristo Scarton and Sandra Maria Alu\u00edsio. 2010. An\u00e1lise da Inteligibilidade de textos via ferramentas de Processamento de L\u00edngua Natural: adaptando as m\u00e9tricas do Coh-Metrix para o Portugu\u00eas. Linguam\u00e1tica, 2(1):45-61.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Reading level assessment using support vector machines and statistical language models", "authors": [ { "first": "E", "middle": [], "last": "Sarah", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Schwarm", "suffix": "" }, { "first": "", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah E Schwarm and Mari Ostendorf. 2005. Reading level assessment using support vector machines and statistical language models. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 523-530. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A statistical model for scientific readability", "authors": [ { "first": "Luo", "middle": [], "last": "Si", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the tenth international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "574--576", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luo Si and Jamie Callan. 2001. A statistical model for scientific readability. In Proceedings of the tenth interna- tional conference on Information and knowledge management, pages 574-576. ACM.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "New Readability Measures for Bangla and Hindi Texts", "authors": [ { "first": "Manjira", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Sakshi", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Tirthankar", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Anupam", "middle": [], "last": "Basu", "suffix": "" } ], "year": 2012, "venue": "The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "1141--1150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manjira Sinha, Sakshi Sharma, Tirthankar Dasgupta, and Anupam Basu. 2012. New Readability Measures for Bangla and Hindi Texts. In Proceedings of COLING 2012: Posters, pages 1141-1150, Mumbai, India, Decem- ber. The COLING 2012 Organizing Committee.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Probability as readability: A new machine learning approach to readability assessment for written Swedish. 
Master's thesis", "authors": [ { "first": "Johan", "middle": [], "last": "Sj\u00f6holm", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Sj\u00f6holm. 2012. Probability as readability: A new machine learning approach to readability assessment for written Swedish. Master's thesis, Link\u00f6ping.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "A readability checker with supervised learning using deep indicators", "authors": [ { "first": "Tim", "middle": [], "last": "Vor Der Br\u00fcck", "suffix": "" }, { "first": "Sven", "middle": [], "last": "Hartrumpf", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Helbig", "suffix": "" } ], "year": 2008, "venue": "Informatica", "volume": "32", "issue": "4", "pages": "429--435", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim vor der Br\u00fcck, Sven Hartrumpf, and Hermann Helbig. 2008. A readability checker with supervised learning using deep indicators. Informatica, 32(4):429-435.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Readability Indices for Automatic Evaluation of Text Simplification Systems: A Feasibility Study for Spanish", "authors": [ { "first": "Horacio", "middle": [], "last": "Sanja\u0161tajner", "suffix": "" }, { "first": "", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "374--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanja\u0160tajner and Horacio Saggion. 2013. Readability Indices for Automatic Evaluation of Text Simplification Systems: A Feasibility Study for Spanish. In Proceedings of the Sixth International Joint Conference on Nat- ural Language Processing, pages 374-382, Nagoya, Japan, October. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Readability of texts: State of the art. Theory and Practice in Language Studies", "authors": [ { "first": "Mostafa", "middle": [], "last": "Zamanian", "suffix": "" }, { "first": "Pooneh", "middle": [], "last": "Heydari", "suffix": "" } ], "year": 2012, "venue": "", "volume": "2", "issue": "", "pages": "43--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mostafa Zamanian and Pooneh Heydari. 2012. Readability of texts: State of the art. Theory and Practice in Language Studies, 2(1):43-53.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Each type of conjunction / all the conjunctions Each type of sentence connector / all the sentence connectors" }, "TABREF0": { "num": null, "text": ".", "type_str": "table", "content": "
Corpus   Docs.   Sentences   Tokens   Verbs   Nouns
T-comp   200     8593        161161   52229   59510
T-simp   200     2363        39565    12203   13447
", "html": null }, "TABREF1": { "num": null, "text": "Corpora statisticsBoth corpora were analysed at various levels:", "type_str": "table", "content": "", "html": null }, "TABREF2": { "num": null, "text": "", "type_str": "table", "content": "
", "html": null }, "TABREF3": { "num": null, "text": "", "type_str": "table", "content": "
", "html": null }, "TABREF4": { "num": null, "text": "Morphological features", "type_str": "table", "content": "
", "html": null }, "TABREF5": { "num": null, "text": "Morpho-syntactic features", "type_str": "table", "content": "
", "html": null }, "TABREF7": { "num": null, "text": "Syntactic features", "type_str": "table", "content": "
", "html": null }, "TABREF8": { "num": null, "text": "Pragmatic features Conjunction types are additive, adversative and disjuntive. Sentence connector types are additive, adversative, disjuntive, clarificative, causal, consecutive, concessive and modal.", "type_str": "table", "content": "
", "html": null }, "TABREF10": { "num": null, "text": "Classification results using all the features", "type_str": "table", "content": "
", "html": null }, "TABREF13": { "num": null, "text": "Classification results using different feature combinations", "type_str": "table", "content": "
", "html": null }, "TABREF14": { "num": null, "text": "Most predictive features", "type_str": "table", "content": "
Random Forest   J48     IBk     Na\u00efve Bayes   SMO
87.75           88.25   72.00   83.25             87.00
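As a rough illustration of how figures like those above could be obtained, the sketch below cross-validates a comparable set of classifiers on a document-by-feature matrix. Note the substitution: the paper's experiments rely on WEKA classifiers (Random Forest, J48, IBk, Naive Bayes, SMO), whereas this example uses approximate scikit-learn analogues; the feature matrix X, the binary labels y and the 10-fold protocol are assumptions made for the example, not the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier        # rough analogue of J48
from sklearn.neighbors import KNeighborsClassifier     # rough analogue of IBk
from sklearn.naive_bayes import GaussianNB             # rough analogue of Naive Bayes
from sklearn.svm import SVC                            # rough analogue of SMO
from sklearn.model_selection import cross_val_score

# Toy data standing in for the real feature vectors: 400 documents with 50
# readability features each, and binary labels (1 = complex, 0 = simple).
rng = np.random.default_rng(0)
X = rng.random((400, 50))
y = rng.integers(0, 2, size=400)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "J48-like tree": DecisionTreeClassifier(random_state=0),
    "IBk-like kNN": KNeighborsClassifier(n_neighbors=1),
    "Naive Bayes": GaussianNB(),
    "SMO-like SVM": SVC(kernel="linear"),
}

# 10-fold cross-validated accuracy, printed per classifier.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean() * 100:.2f}% accuracy")
```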
", "html": null } } } }