{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:44.503532Z" }, "title": "Towards a Data Analytics Pipeline for the Visualisation of Complexity Metrics in L2 writings", "authors": [ { "first": "Thomas", "middle": [], "last": "Gaillat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University", "location": { "settlement": "Rennes" } }, "email": "thomas.gaillat@univ-rennes2.fr" }, { "first": "Anas", "middle": [], "last": "Knefati", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Antoine", "middle": [], "last": "Lafontaine", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present the design of a tool for the visualisation of linguistic complexity in second language (L2) learner writings. We show how metrics can be exploited to visualise complexity in L2 writings in relation to CEFR levels.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present the design of a tool for the visualisation of linguistic complexity in second language (L2) learner writings. We show how metrics can be exploited to visualise complexity in L2 writings in relation to CEFR levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The analysis of educational data has been a growing field in the last decade. Learning Content Management Software (LCMS) platforms in education have provided the opportunity to collect and process large quantities of educational data supporting both data mining and analytics (Baker et al., 2016) . As far as we know, the field of foreign language learning has not availed yet of projects with data analytics at their core. The proceedings from the Visualisation and Digital Humanities workshop series 1 and those of the Learning Analytics and Knowledge conference 2 fall short of studies focused on the automatic exploitation of linguistic data for learners of a language. This problem may be linked to the complexity of apprehending learner writings due to the multidimensional nature of this type of language (errors, usage, phraseology ...) One way to approach the problem is to use data analytics methods in order to bridge the gap between the collection of learner productions and their automatic analysis resulting in meaningful feedback. To this end, it is necessary to identify quantifiable features of learner writings. Data could be sourced in one of the three dimensions of language proficiency, i.e., Complexity, Accuracy and Fluency (CAF) . A data analytics framework could rely on measures that operationalise these three theoretical constructs. A selection of CAF measures could be the source of automatically generated linguistic profile reports of L2 writings.", "cite_spans": [ { "start": 277, "end": 297, "text": "(Baker et al., 2016)", "ref_id": "BIBREF1" }, { "start": 813, "end": 845, "text": "(errors, usage, phraseology ...)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As part of CAF, linguistic complexity is one of the constructs that lends itself well to computational methods. At theoretical level it informs on the elaborateness of the learner language. At operational level there are a number of statistical measures in the form of frequencies, ratios and indices (Bult\u00e9 and Housen, 2012) . 
The construct is already used in combination with corpora to achieve different tasks such as automatic proficiency level prediction. In these tasks, complexity metrics are exploited with supervised learning methods to predict levels (Ballier et al., 2020; Venant and D'Aquin, 2019; Pil\u00e1n and Volodina, 2018; Yannakoudakis et al., 2011). Among all the metrics that have been tested (see (Bult\u00e9 and Housen, 2012, p. 31-33) for a review), several have been reported as predictive of proficiency (Kyle, 2016; Lu, 2012; Vajjala and Meurers, 2012). Some readability metrics have also been used in L2 studies (Liss\u00f3n, 2017; Pil\u00e1n et al., 2014).", "cite_spans": [ { "start": 301, "end": 325, "text": "(Bult\u00e9 and Housen, 2012)", "ref_id": "BIBREF6" }, { "start": 561, "end": 583, "text": "(Ballier et al., 2020;", "ref_id": "BIBREF3" }, { "start": 584, "end": 609, "text": "Venant and D'Aquin, 2019;", "ref_id": "BIBREF27" }, { "start": 610, "end": 635, "text": "Pil\u00e1n and Volodina, 2018;", "ref_id": "BIBREF20" }, { "start": 636, "end": 663, "text": "Yannakoudakis et al., 2011)", "ref_id": "BIBREF28" }, { "start": 714, "end": 746, "text": "(Bult\u00e9 and Housen, 2012, p.31-33", "ref_id": null }, { "start": 820, "end": 832, "text": "(Kyle, 2016;", "ref_id": "BIBREF13" }, { "start": 833, "end": 842, "text": "Lu, 2012;", "ref_id": "BIBREF16" }, { "start": 843, "end": 869, "text": "Vajjala and Meurers, 2012)", "ref_id": "BIBREF26" }, { "start": 931, "end": 945, "text": "(Liss\u00f3n, 2017;", "ref_id": "BIBREF14" }, { "start": 946, "end": 965, "text": "Pil\u00e1n et al., 2014)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of text analysis tools exist in education but are not focused on L2 learning. They provide environments for reading or writing assessment and training (Dascalu et al., 2013; McNamara et al., 2007; Roscoe et al., 2014; Attali and Burstein, 2006; Napolitano et al., 2015). They focus on providing quantified results in relation to internal scales for first language (L1) learning. In addition, and to the best of our knowledge, these tools do not provide visualisations of the textual measurements. In the field of L2 learning, a tool called FeedBook (Rudzewitz et al., 2019) provides visualisations of linguistic features as part of the feedback given to students. One need that remains to be addressed is the ability for learners to position the linguistic properties of their productions with regard to proficiency levels. Our proposal is to exploit state-of-the-art linguistic complexity metrics in the automatic analysis of L2 writings. NLP tools are used to annotate learner productions, compute metrics and display visualisations that compare them with writings classified according to the CEFR 3 levels (European Council, 2001). Section 2 describes the method. Section 3 covers visualisation interpretation. Learner engagement is presented in Section 4. 
We discuss issues and perspectives in Section 5.", "cite_spans": [ { "start": 160, "end": 182, "text": "(Dascalu et al., 2013;", "ref_id": "BIBREF7" }, { "start": 183, "end": 206, "text": "McNamara et al., 2007;", "ref_id": null }, { "start": 207, "end": 227, "text": "Roscoe et al., 2014;", "ref_id": "BIBREF23" }, { "start": 228, "end": 254, "text": "Attali and Burstein, 2006;", "ref_id": "BIBREF0" }, { "start": 255, "end": 279, "text": "Napolitano et al., 2015)", "ref_id": "BIBREF19" }, { "start": 558, "end": 582, "text": "(Rudzewitz et al., 2019)", "ref_id": "BIBREF24" }, { "start": 1122, "end": 1136, "text": "Council, 2001)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To compare new texts with existing texts, we exploit a learner corpus of written productions. Texts from English for Specific Purposes (ESP) university students are used. This corpus includes 274 third-level education writings. Two language certification experts assessed the writings in terms of CEFR proficiency levels. The first production task for learners consisted in describing an experiment/discovery/invention/technology of their choice, and the second task was to give their opinion on the impact of the previously described item. Learners had 45 minutes to complete both tasks. Table 1 shows the breakdown of the texts according to the CEFR levels.", "cite_spans": [], "ref_spans": [ { "start": 588, "end": 595, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Methodology 2.1 A CEFR-based reference data set", "sec_num": "2" }, { "text": "CEFR annotation was evaluated with a measurement of inter-rater agreement (Cohen's weighted Kappa = 0.71). Complexity metrics were computed and six subsets or cohorts were created according to the six CEFR levels. A comparative data set of metrics and CEFR levels was thus created 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology 2.1 A CEFR-based reference data set", "sec_num": "2" }, { "text": "Three groups of metrics are computed at processing time. Syntactic complexity is operationalised with fourteen metrics. These metrics are grouped into five different types (Lu, 2014): length of production unit (e.g. sentence), sentence complexity, subordination, coordination and particular structures (e.g. complex nominals). Each metric is the ratio of the frequency of a constituent to the frequency of all constituents within a higher-level scope.", "cite_spans": [ { "start": 170, "end": 180, "text": "(Lu, 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "2.2" }, { "text": "Readability is operationalised with forty-eight metrics. They are based on morphological features of words, which are used to compute different indicator values. The assumption is that these indicators operationalise the level of maturity required for reading a specific text. They include indicators such as the Coleman-Liau index, the Dale-Chall readability score and the Flesch-Kincaid grade level. They all rely on word length in terms of characters and syllables as well as predetermined lists of words judged as difficult 5 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "2.2" }, { "text": "Lexical richness is operationalised with thirteen metrics which provide information on lexical diversity, i.e., the range of different words used in a text. Two types of lexical diversity are included. 
Diversity based on word type variation is accounted for with TTR-based formulae. Diversity based on type repetition is accounted for with Yule's K and similar formulae, in which the frequency of word types in a sample of size n is considered relative to the total number of words in a text 6 . We acknowledge that lexical sophistication and lexical density (content vs. grammar words) are not taken into account.", "cite_spans": [ { "start": 480, "end": 481, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "2.2" }, { "text": "The metrics were selected for two reasons. Firstly, their significance is reported in the literature on L2 criterial features (Hawkins and Filipovi\u0107, 2012; Lu, 2014; Kyle, 2016; Liss\u00f3n, 2017), although analysing this significance in terms of CEFR levels is outside the scope of this paper. Secondly, it was decided to also include metrics linked to descriptive syntactic information. Complex Nominal and Coordinated Phrase indices were selected because of their linguistic meaningfulness. In total, 83 metrics are computed 7 .", "cite_spans": [ { "start": 126, "end": 155, "text": "(Hawkins and Filipovi\u0107, 2012;", "ref_id": "BIBREF11" }, { "start": 156, "end": 165, "text": "Lu, 2014;", "ref_id": "BIBREF17" }, { "start": 166, "end": 177, "text": "Kyle, 2016;", "ref_id": "BIBREF13" }, { "start": 178, "end": 191, "text": "Liss\u00f3n, 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "2.2" }, { "text": "The learners' productions are collected via two types of MOODLE activities (Dougiamas and Taylor, 2003), i.e., Assignment and Database. The Assignment activity allows teachers to collect written assignments as they see fit within their course scenario. They can download all the assignments as a batch file and transfer them as input into the data processing pipeline. The texts can also be collected via a learner-corpus building interface alongside student metadata. A file includes all the texts and metadata and can be imported into the pipeline.", "cite_spans": [ { "start": 75, "end": 103, "text": "(Dougiamas and Taylor, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection and cleaning", "sec_num": "2.3" }, { "text": "Prior to processing the files, the texts are cleaned. All special characters are deleted. Punctuation symbols are spaced consistently. Accents (from expressions in other languages, for instance) are removed. The pronoun \"I\" is upper-cased in each text. The negative modal verb \"can't\" is replaced by \"cannot\". These steps ensure better parsing and a more accurate computation of the metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection and cleaning", "sec_num": "2.3" }, { "text": "The pipeline 8 is implemented in R (R Core Team, 2012) with a Creative Commons Share Alike licence. It includes our R implementation of L2SCA 9 (Lu, 2010) for syntactic complexity metrics. It also relies on Quanteda (Benoit et al., 2018), an R package used to compute readability and lexical diversity with textstat_lexdiv() and textstat_readability().", "cite_spans": [ { "start": 216, "end": 237, "text": "(Benoit et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The pipeline", "sec_num": "2.4" }, { "text": "The data workflow functions as described in Figure 1. Firstly, the input data consists of new learner texts, which are passed through the aforementioned processing tools to compute the metrics. 
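To make this metric-computation step concrete, the following minimal sketch shows how readability and lexical diversity indicators could be obtained for a batch of cleaned texts. It is illustrative only, not the pipeline's actual code, and it assumes the quanteda 2.x API, in which textstat_readability() and textstat_lexdiv() are exported by quanteda itself (in later versions they are provided by the companion package quanteda.textstats):

library(quanteda)

# Toy texts standing in for the collected learner productions
texts <- c(doc1 = 'With the development of new technologies, new questions arise about how to evaluate students.',
           doc2 = 'i think we should prepare students for a proper use of smartphones and computers.')

# One of the cleaning steps described in Section 2.3: upper-case the standalone pronoun i
texts <- gsub('\\bi\\b', 'I', texts)

corp <- corpus(texts)

# Three of the forty-eight readability indicators mentioned in Section 2.2
# (quanteda measure names; the variants chosen by the pipeline may differ)
readab <- textstat_readability(corp, measure = c('Flesch.Kincaid', 'Coleman.Liau.grade', 'Dale.Chall'))

# Two of the thirteen lexical diversity indicators (TTR and Yule's K)
toks <- tokens(corp, remove_punct = TRUE)
lexdiv <- textstat_lexdiv(toks, measure = c('TTR', 'K'))

# One row of metric values per text, ready to be compared with the CEFR cohorts
metrics <- merge(readab, lexdiv, by = 'document')

The syntactic complexity indices are computed with the R implementation of L2SCA mentioned in Section 2.4.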
Secondly, the reference corpus mentioned in Section 2.1 is also passed through the same processing tools. As a result, new texts can be compared with existing texts on the basis of the computed metrics. These can be visualised as box plots and radar charts.", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 52, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The pipeline", "sec_num": "2.4" }, { "text": "Prior to displaying metric values to users, these values are transformed to ensure comparability. First, all the metric values are normalised to constrain them to a [0,1] interval for the radar chart. Yule's K is transformed into its inverse for the radar chart to avoid confusing learners. This is because, as opposed to all other indicators, K's values drop as CEFR levels get higher. All the normalised indicators that are finally displayed therefore show increasing values as CEFR levels increase. In terms of statistics, the median and a shaded grey strip for the first and third quartiles are used to describe the control cohorts. Using an interval aims to show the variability of a metric within a CEFR level. Using the mean was not favoured in order to ensure robustness to outliers. Provision is also made for the rare cases in which metric values fall outside the interval. In this case, the value is not visualised on the graph and a warning is displayed: \"You are off radar for the following indicators:\". 8 Available at https://github.com/LIDILE/VizLing 9 Available at http://www.personal.psu.edu/xxl13/downloads/l2sca.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data transformation for visualisations", "sec_num": "2.5" }, { "text": "In this section, we conduct an illustrative analysis of a sample text and compare some of its features with the visualised metrics. The text was written by a French learner of English as part of the French National Language Certification Proficiency exam 10 . It was classified as B2 or higher. For reasons of space, we only provide the following excerpt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting visualisations", "sec_num": "3" }, { "text": "With the development of new technologies such as smartphones, new questions are being caused about how to evaluate students. Indeed, using cellphones to cheat is common in highschools. The first question we have to ask ourselves is wether we should authorize students to access their phone or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting visualisations", "sec_num": "3" }, { "text": "Arguments against are well-know: ... But we also have to consider arguments in favor of it, in order to do what is best for our students. First, they will be working with these technologies in their professional lives, and we should be preparing them for that, by teaching them a proper use of smartphones and computers...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting visualisations", "sec_num": "3" }, { "text": "In Figure 2, the metrics are compared with those of the cohort of B2 learners (see Section 2.1). The learner's individual report is divided into two parts. On the left, a radar chart displays ratio-based metrics and, on the right, raw frequencies are reported. In addition to the metric acronym, a categorisation label is provided in order to indicate the word, sentence or text scope (Anonymised reference 2019b) of the metric. 
For instance, the Number of Different Words (NDW) is labelled Text.size.type and can be interpreted at text level with types as the unit. The radar chart displays the cohort as a grey-shaded area representing the two central quartiles. The dark line represents the median of the cohort. The boxplots show the position of the learner's metric in relation to the full B2 cohort.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Interpreting visualisations", "sec_num": "3" }, { "text": "The indicators in the radar chart show that the learner's ratios globally correspond to those of the B2 cohort. For instance, the learner makes use of Complex Nominals in her text. This includes the adverbial clause \"With the development of new technologies such as smartphones\", which is used in apposition to the main clause. It also includes the nominal clause \"The first question we have to ask ourselves\", used in subject position of BE. The use of adjective + noun, as in \"proper use\", is another, simpler example of nominal complexity. The three different cases are all accounted for by the system. It appears that the learner's level of use is slightly under the B2 median (the C1 and C2 radar charts show even higher values for the two central quartiles). The teacher and learner can analyse such structures in more detail. The teacher could in turn note the lack of compounds and genitives in the text. In short, the metric helps learners and teachers identify an issue related to an objective criterion of linguistic complexity. Specific feedback and actions can then be undertaken. 10 See CLES certification at https://www.certification-cles.fr/en/ Figure 1: NLP pipeline - from data collection to visualisation. Figure 2: Individual report of a learner in comparison with a cohort of B2 learners.", "cite_spans": [ { "start": 195, "end": 197, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 263, "end": 271, "text": "Figure 1", "ref_id": null }, { "start": 326, "end": 334, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Interpreting visualisations", "sec_num": "3" }, { "text": "The efficacy of the tool needs to be evaluated. Learner engagement remains to be assessed thoroughly, but a preliminary qualitative assessment was conducted in a classroom setting. We report results on the impact of the tool on learners' engagement. Fifty-four first-year higher-education students were given five individual writing tasks in five weekly waves. After each wave, they were provided with feedback within 24 hours. Notwithstanding the results, we measured the number of submitted writings (Figure 3) and the frequency of consultation of feedback reports (Figure 4). We use these measurements as a rough proxy for learner engagement, i.e., how learners respond to the feedback they receive (Ellis, 2010). The statistics are assumed to tap into the intensity of learners' interest in the reports. Over time, the number of submitted papers did not decrease, in spite of the lockdown imposed on students in the midst of the COVID crisis. Following detailed explanations from their teacher to ensure comprehension, a majority of students consulted their reports three times or more, showing continuous interest. In this paper, we have presented a linguistic complexity visualisation tool. It displays learner writings according to several criteria and positions them in relation to cohorts of specific CEFR levels. 
More work remains to be done. Firstly, the visualisations used may be difficult to understand for learners who are not familiar with chart types such as radar charts. The tool aims primarily at helping trained language teachers analyse their students' writings in order to give them objective and specific feedback (Shute, 2008). By gaining access to these features, teachers can give specific answers regarding the mastery of certain concepts. They also become aware of features of language use that need to be addressed. Teachers can then provide evidence-based advice.", "cite_spans": [ { "start": 1628, "end": 1641, "text": "(Shute, 2008)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 501, "end": 510, "text": "(Figure 3", "ref_id": "FIGREF0" }, { "start": 567, "end": 577, "text": "(Figure 4)", "ref_id": null } ], "eq_spans": [], "section": "Learner engagement", "sec_num": "4" }, { "text": "Secondly, the collected data shows limitations. The metrics on which the visualisations rely need to be evaluated on the data in terms of their power to predict proficiency. Correlation analysis remains to be conducted in order to validate which metrics are significant enough to be displayed. The reference corpus is small and lacks diversity. All the texts belong to university students of specific fields, which may impact vocabulary and syntactic structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learner engagement", "sec_num": "4" }, { "text": "More data needs to be collected in each field in order to support finer-grained analysis of third-level education writings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learner engagement", "sec_num": "4" }, { "text": "One last limitation is that some metrics remain difficult to interpret linguistically, as argued in (Biber et al., 2020). For instance, readability formulae combine different features, such as word morphology and lists of most common words. Specific advice on one of these features is therefore nearly impossible. Nevertheless, by interpreting the linguistic scopes (whether the measures apply at word, sentence or text level), it is possible to provide a certain degree of feedback.", "cite_spans": [ { "start": 99, "end": 119, "text": "(Biber et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Learner engagement", "sec_num": "4" }, { "text": "The tool could be exploited in the learner module of an Intelligent Tutoring System dedicated to language learning. Because linguistic complexity measurements keep track of the evolution of systemic syntactic and lexical complexity, these data constitute part of the knowledge representation of the learner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learner engagement", "sec_num": "4" }, { "text": "The tool gives access to learning analytics at the linguistic level. In a context of distance learning, teachers are empowered with a rapid diagnostics tool that gives them an objective, although reduced, view of some of the features of their learners' language. Further developments will focus on identifying and evaluating more significant metrics in terms of proficiency and meta-linguistic influence on learners. 
Other types of charts could also be explored, and an aggregation functionality could provide group visualisations to reveal linguistic class patterns for teachers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learner engagement", "sec_num": "4" }, { "text": "See http://vis4dh.org/ 2 See https://lak20.solaresearch.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Common European Framework of Reference in languages 4 Available from IRIS database at https://www. iris-database.org/iris/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For a detailed description of the formulae refer to https://quanteda.io/reference/textstat_ readability.html?q=reada6 For the formulae see https://quanteda.io/ reference/textstat_lexdiv.html 7 A list of metrics is available as supplementary material", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automated Essay Scoring With e-rater\u00ae V.2. The Journal of Technology", "authors": [ { "first": "Yigal", "middle": [], "last": "Attali", "suffix": "" }, { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" } ], "year": 2006, "venue": "Learning and Assessment", "volume": "4", "issue": "3", "pages": "3--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yigal Attali and Jill Burstein. 2006. Automated Essay Scoring With e-rater\u00ae V.2. The Journal of Technol- ogy, Learning and Assessment, 4(3):3-29.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Educational Data Mining and Learning Analytics", "authors": [ { "first": "Ryan", "middle": [ "S" ], "last": "Baker", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Lisa", "middle": [ "M" ], "last": "Rossi", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1002/9781118956588.ch16" ] }, "num": null, "urls": [], "raw_text": "Ryan S. Baker, Taylor Martin, and Lisa M. Rossi. 2016. Educational Data Mining and Learning Analytics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Wiley Handbook of Cognition and Assessment", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "379--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "In The Wiley Handbook of Cognition and Assess- ment, pages 379-396. John Wiley & Sons, Ltd.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Machine learning for learner English", "authors": [ { "first": "Nicolas", "middle": [], "last": "Ballier", "suffix": "" }, { "first": "St\u00e9phane", "middle": [], "last": "Canu", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Petitjean", "suffix": "" }, { "first": "Gilles", "middle": [], "last": "Gasso", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Balhana", "suffix": "" }, { "first": "Theodora", "middle": [], "last": "Alexopoulou", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Gaillat", "suffix": "" } ], "year": 2020, "venue": "International Journal of Learner Corpus Research", "volume": "6", "issue": "1", "pages": "72--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicolas Ballier, St\u00e9phane Canu, Caroline Petitjean, Gilles Gasso, Carlos Balhana, Theodora Alex- opoulou, and Thomas Gaillat. 2020. Machine learn- ing for learner English. 
International Journal of Learner Corpus Research, 6(1):72-103.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "quanteda: An R package for the quantitative analysis of textual data", "authors": [ { "first": "Kenneth", "middle": [], "last": "Benoit", "suffix": "" }, { "first": "Kohei", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "Haiyan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Nulty", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Obeng", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Akitaka", "middle": [], "last": "Matsuo", "suffix": "" } ], "year": 2018, "venue": "Journal of Open Source Software", "volume": "3", "issue": "30", "pages": "", "other_ids": { "DOI": [ "10.21105/joss.00774" ] }, "num": null, "urls": [], "raw_text": "Kenneth Benoit, Kohei Watanabe, Haiyan Wang, Paul Nulty, Adam Obeng, Stefan M\u00fcller, and Akitaka Matsuo. 2018. quanteda: An R package for the quantitative analysis of textual data. Journal of Open Source Software, 3(30):774.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Investigating grammatical complexity in L2 English writing research: Linguistic description versus predictive measurement", "authors": [ { "first": "Douglas", "middle": [], "last": "Biber", "suffix": "" }, { "first": "Bethany", "middle": [], "last": "Gray", "suffix": "" }, { "first": "Shelley", "middle": [], "last": "Staples", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Egbert", "suffix": "" } ], "year": 2020, "venue": "Journal of English for Academic Purposes", "volume": "46", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1016/j.jeap.2020.100869" ] }, "num": null, "urls": [], "raw_text": "Douglas Biber, Bethany Gray, Shelley Staples, and Jesse Egbert. 2020. Investigating grammatical com- plexity in L2 English writing research: Linguistic description versus predictive measurement. Journal of English for Academic Purposes, 46:100869.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Defining and Operationalising L2 Complexity", "authors": [ { "first": "Bram", "middle": [], "last": "Bult\u00e9", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Housen", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bram Bult\u00e9 and Alex Housen. 2012. Defining and Op- erationalising L2 Complexity. 
John Benjamins Pub- lishing Company.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "ReaderBench, an Environment for Analyzing Text Complexity and Reading Strategies", "authors": [ { "first": "Mihai", "middle": [], "last": "Dascalu", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Dessus", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Trausan-Matu", "suffix": "" }, { "first": "Maryse", "middle": [], "last": "Bianco", "suffix": "" }, { "first": "Aur\u00e9lie", "middle": [], "last": "Nardy", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Dasc\u0203lu", "suffix": "" }, { "first": "S", "middle": [], "last": "Tefan Tr\u0203us", "suffix": "" } ], "year": 2013, "venue": "AIED 13 -16th International Conference on Artificial Intelligence in Education", "volume": "7926", "issue": "", "pages": "379--388", "other_ids": { "DOI": [ "10.1007/978-3-642-39112-5_39" ] }, "num": null, "urls": [], "raw_text": "Mihai Dascalu, Philippe Dessus, Stefan Trausan-Matu, Maryse Bianco, Aur\u00e9lie Nardy, Mihai Dasc\u0203lu, and S , tefan Tr\u0203us , an-Matu. 2013. ReaderBench, an Envi- ronment for Analyzing Text Complexity and Read- ing Strategies. In AIED 13 -16th International Conference on Artificial Intelligence in Education, volume 7926 of Lecture Notes in Computer Sci- ence (LNCS), pages 379-388, Memphis, TN, United States. Springer.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Moodle: Using Learning Communities to Create an Open Source Course Management System", "authors": [ { "first": "Martin", "middle": [], "last": "Dougiamas", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Taylor", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the EDMEDIA 2003 Conference", "volume": "", "issue": "", "pages": "171--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Dougiamas and Peter Taylor. 2003. Moodle: Using Learning Communities to Create an Open Source Course Management System. In Proceed- ings of the EDMEDIA 2003 Conference, Honolulu, Hawaii, pages 171-178, Hawaii. Association for the Advancement of Computing in Education (AACE).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "EPILOGUE: A Framework for Investigating Oral and Written Corrective Feedback", "authors": [ { "first": "Rod", "middle": [ "Ellis" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Studies in Second Language Acquisition", "volume": "32", "issue": "2", "pages": "335--349", "other_ids": { "DOI": [ "10.1017/S0272263109990544" ] }, "num": null, "urls": [], "raw_text": "Rod Ellis. 2010. EPILOGUE: A Framework for In- vestigating Oral and Written Corrective Feedback. Studies in Second Language Acquisition, 32(2):335- 349. Publisher: Cambridge University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Common European Framework of Reference for Languages: Learning, teaching, assessment", "authors": [ { "first": "European", "middle": [], "last": "Council", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "European Council. 2001. Common European Frame- work of Reference for Languages: Learning, teach- ing, assessment. 
Cambridge University Press, Cam- bridge.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Criterial Features in L2 English: Specifying the Reference Levels of the Common European Framework", "authors": [ { "first": "John", "middle": [ "A" ], "last": "Hawkins", "suffix": "" }, { "first": "Luna", "middle": [], "last": "Filipovi\u0107", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John A. Hawkins and Luna Filipovi\u0107. 2012. Criterial Features in L2 English: Specifying the Reference Levels of the Common European Framework, vol- ume 1 of English Profile Studies. Cambridge Uni- versity Press, United Kingdom.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dimensions of L2 performance and proficiency: complexity, accuracy and fluency in SLA", "authors": [ { "first": "Alex", "middle": [], "last": "Housen", "suffix": "" }, { "first": "Folkert", "middle": [], "last": "Kuiken", "suffix": "" }, { "first": "Ineke", "middle": [], "last": "Vedder", "suffix": "" } ], "year": 2012, "venue": "Language Learning & Language Teaching", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Housen, Folkert Kuiken, and Ineke Vedder, ed- itors. 2012. Dimensions of L2 performance and proficiency: complexity, accuracy and fluency in SLA, volume 32 of Language Learning & Language Teaching (LL<). John Benjamins Publishing Company, Amsterdam, The Netherlands, USA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Measuring Syntactic Development in L2 Writing: Fine Grained Indices of Syntactic Complexity and Usage-Based Indices of Syntactic Sophistication", "authors": [ { "first": "Kristopher", "middle": [], "last": "Kyle", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristopher Kyle. 2016. Measuring Syntactic Develop- ment in L2 Writing: Fine Grained Indices of Syntac- tic Complexity and Usage-Based Indices of Syntac- tic Sophistication. Ph.D. thesis, Georgia State Uni- versity.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Investigating the use of readability metrics to detect differences in written productions of learners : a corpus-based study. Bellaterra journal of teaching and learning language and literature", "authors": [ { "first": "Paula", "middle": [], "last": "Liss\u00f3n", "suffix": "" } ], "year": 2017, "venue": "", "volume": "10", "issue": "", "pages": "68--86", "other_ids": { "DOI": [ "10.5565/rev/jtl3.752" ] }, "num": null, "urls": [], "raw_text": "Paula Liss\u00f3n. 2017. Investigating the use of readability metrics to detect differences in written productions of learners : a corpus-based study. Bellaterra jour- nal of teaching and learning language and literature, 10(4):0068-86.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Automatic analysis of syntactic complexity in second language writing", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2010, "venue": "International Journal of Corpus Linguistics", "volume": "15", "issue": "4", "pages": "474--496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofei Lu. 2010. Automatic analysis of syntactic com- plexity in second language writing. 
International Journal of Corpus Linguistics, 15(4):474-496.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Relationship of Lexical Richness to the Quality of ESL Learners' Oral Narratives. The Modern Language", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2012, "venue": "Journal", "volume": "96", "issue": "2", "pages": "190--208", "other_ids": { "DOI": [ "10.1111/j.1540-4781.2011.01232_1.x" ] }, "num": null, "urls": [], "raw_text": "Xiaofei Lu. 2012. The Relationship of Lexical Rich- ness to the Quality of ESL Learners' Oral Narratives. The Modern Language Journal, 96(2):190-208.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Computational Methods for Corpus Annotation and Analysis", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Lu", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaofei Lu. 2014. Computational Methods for Corpus Annotation and Analysis. Springer, Dordrecht.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Evaluating selfexplanations in iSTART: Comparing word-based and LSA algorithms. In Handbook of latent semantic analysis", "authors": [ { "first": "Danielle", "middle": [ "S" ], "last": "Mcnamara", "suffix": "" }, { "first": "Chutima", "middle": [], "last": "Boonthum", "suffix": "" }, { "first": "Irwin", "middle": [], "last": "Levinstein", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Millis", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "227--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danielle S. McNamara, Chutima Boonthum, Irwin Levinstein, and Keith Millis. 2007. Evaluating self- explanations in iSTART: Comparing word-based and LSA algorithms. In Handbook of latent seman- tic analysis, pages 227-241. Lawrence Erlbaum As- sociates Publishers, Mahwah, NJ, US.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Online Readability and Text Complexity Analysis with TextEvaluator", "authors": [ { "first": "Diane", "middle": [], "last": "Napolitano", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Sheehan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Mundkowsky", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", "volume": "", "issue": "", "pages": "96--100", "other_ids": { "DOI": [ "10.3115/v1/N15-3020" ] }, "num": null, "urls": [], "raw_text": "Diane Napolitano, Kathleen Sheehan, and Robert Mundkowsky. 2015. Online Readability and Text Complexity Analysis with TextEvaluator. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Demonstrations, pages 96-100, Denver, Colorado. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Investigating the importance of linguistic complexity features across different datasets related to language learning", "authors": [ { "first": "Ildik\u00f3", "middle": [], "last": "Pil\u00e1n", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Volodina", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Linguistic Complexity and Natural Language Processing", "volume": "", "issue": "", "pages": "49--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ildik\u00f3 Pil\u00e1n and Elena Volodina. 2018. Investigat- ing the importance of linguistic complexity features across different datasets related to language learning. In Proceedings of the Workshop on Linguistic Com- plexity and Natural Language Processing, pages 49- 58, Santa Fe, New-Mexico. Association for Compu- tational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Rule-based and machine learning approaches for second language sentence-level readability", "authors": [ { "first": "Elena", "middle": [], "last": "Ildik\u00f3 Pil\u00e1n", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Volodina", "suffix": "" }, { "first": "", "middle": [], "last": "Johansson", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "174--184", "other_ids": { "DOI": [ "10.3115/v1/W14-1821" ] }, "num": null, "urls": [], "raw_text": "Ildik\u00f3 Pil\u00e1n, Elena Volodina, and Richard Johansson. 2014. Rule-based and machine learning approaches for second language sentence-level readability. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications, pages 174-184, Baltimore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "R: A language and environment for statistical computing. R Foundation for Statistical Computing", "authors": [ { "first": "", "middle": [], "last": "R Core Team", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R Core Team. 2012. R: A language and environment for statistical computing. R Foundation for Statisti- cal Computing, Vienna, Austria.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The Writing Pal Intelligent Tutoring System: Usability Testing and Development. Computers and Composition", "authors": [ { "first": "Rod", "middle": [ "D" ], "last": "Roscoe", "suffix": "" }, { "first": "Laura", "middle": [ "K" ], "last": "Allen", "suffix": "" }, { "first": "Jennifer", "middle": [ "L" ], "last": "Weston", "suffix": "" }, { "first": "Scott", "middle": [ "A" ], "last": "Crossley", "suffix": "" }, { "first": "Danielle", "middle": [ "S" ], "last": "Mcnamara", "suffix": "" } ], "year": 2014, "venue": "", "volume": "34", "issue": "", "pages": "39--59", "other_ids": { "DOI": [ "10.1016/j.compcom.2014.09.002" ] }, "num": null, "urls": [], "raw_text": "Rod D. Roscoe, Laura K. Allen, Jennifer L. Weston, Scott A. Crossley, and Danielle S. McNamara. 2014. The Writing Pal Intelligent Tutoring System: Usabil- ity Testing and Development. 
Computers and Com- position, 34:39-59.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Enhancing a Web-based Language Tutoring System with Learning Analytics", "authors": [ { "first": "Bj\u00f6rn", "middle": [], "last": "Rudzewitz", "suffix": "" }, { "first": "Ramon", "middle": [], "last": "Ziai", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Nuxoll", "suffix": "" }, { "first": "Kordula", "middle": [ "De" ], "last": "Kuthy", "suffix": "" }, { "first": "Walt", "middle": [ "Detmar" ], "last": "Meurers", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Workshops of the 12th International Conference on Educational Data Mining (EDM 2019)", "volume": "2592", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bj\u00f6rn Rudzewitz, Ramon Ziai, Florian Nuxoll, Kor- dula De Kuthy, and Walt Detmar Meurers. 2019. Enhancing a Web-based Language Tutoring Sys- tem with Learning Analytics. In Proceedings of the Workshops of the 12th International Conference on Educational Data Mining (EDM 2019), volume 2592, pages 1-7, Montr\u00e9al, Canada.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Focus on formative feedback", "authors": [ { "first": "Valerie", "middle": [ "J" ], "last": "Shute", "suffix": "" } ], "year": 2008, "venue": "Review of Educational Research", "volume": "78", "issue": "1", "pages": "153--189", "other_ids": { "DOI": [ "10.3102/0034654307313795" ] }, "num": null, "urls": [], "raw_text": "Valerie J. Shute. 2008. Focus on formative feedback. Review of Educational Research, 78(1):153-189.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "On Improving the Accuracy of Readability Classification Using Insights from Second Language Acquisition", "authors": [ { "first": "Sowmya", "middle": [], "last": "Vajjala", "suffix": "" }, { "first": "Detmar", "middle": [], "last": "Meurers", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP", "volume": "", "issue": "", "pages": "163--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sowmya Vajjala and Detmar Meurers. 2012. On Im- proving the Accuracy of Readability Classification Using Insights from Second Language Acquisition. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 163- 173, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Towards the Prediction of Semantic Complexity Based on Concept Graphs", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Venant", "suffix": "" }, { "first": "", "middle": [], "last": "Mathieu D'aquin", "suffix": "" } ], "year": 2019, "venue": "12th International Conference on Educational Data Mining (EDM 2019)", "volume": "", "issue": "", "pages": "188--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00e9mi Venant and Mathieu D'Aquin. 2019. Towards the Prediction of Semantic Complexity Based on Con- cept Graphs. 
In 12th International Conference on Educational Data Mining (EDM 2019), pages 188- 197, Montreal, Canada.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A New Dataset and Method for Automatically Grading ESOL Texts", "authors": [ { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Medlock", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "180--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A New Dataset and Method for Automatically Grading ESOL Texts. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies -Vol- ume 1, HLT '11, pages 180-189, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "Writings available for each student Figure 4: Consultation of feedback reports 5 Discussion and perspectives", "type_str": "figure" }, "TABREF1": { "content": "