{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:41:39.278700Z"
},
"title": "DaLAJ -a dataset for linguistic acceptability judgments for Swedish",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Volodina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Gothenburg",
"location": {
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Yousuf",
"middle": [
"Ali"
],
"last": "Mohammed",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Gothenburg",
"location": {
"country": "Sweden"
}
},
"email": ""
},
{
"first": "Julia",
"middle": [],
"last": "Klezl",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Gothenburg",
"location": {
"country": "Sweden"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9 596 sentences in its first version. DaLAJ is based on the SweLL second language learner data (Volodina et al., 2019), consisting of essays at different levels of proficiency. To make sure the dataset can be freely available despite the GDPR regulations, we have sentence-scrambled learner essays and removed part of the metadata about learners, keeping for each sentence only information about the mother tongue and the level of the course where the essay has been written. We use the normalized version of learner language as the basis for DaLAJ sentences, and keep only one error per sentence. We repeat the same sentence for each individual correction tag used in the sentence. For DaLAJ 1.0 four error categories of 35 available in SweLL are used, all connected to lexical or wordbuilding choices. The dataset is included in the SwedishGlue benchmark. 1 Below, we describe the format of the dataset, our insights and motivation for the chosen approach to data sharing.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present DaLAJ 1.0, a Dataset for Linguistic Acceptability Judgments for Swedish, comprising 9 596 sentences in its first version. DaLAJ is based on the SweLL second language learner data (Volodina et al., 2019), consisting of essays at different levels of proficiency. To make sure the dataset can be freely available despite the GDPR regulations, we have sentence-scrambled learner essays and removed part of the metadata about learners, keeping for each sentence only information about the mother tongue and the level of the course where the essay has been written. We use the normalized version of learner language as the basis for DaLAJ sentences, and keep only one error per sentence. We repeat the same sentence for each individual correction tag used in the sentence. For DaLAJ 1.0 four error categories of 35 available in SweLL are used, all connected to lexical or wordbuilding choices. The dataset is included in the SwedishGlue benchmark. 1 Below, we describe the format of the dataset, our insights and motivation for the chosen approach to data sharing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Grammatical and linguistic acceptability is an extensive area of research that has been studied for generations by theoretical linguists (e.g. Chomsky, 1957) , and lately by cognitive and compu-1 SwedishGlue (Swe. SuperLim) is a collection of datasets for training and/or evaluating language models for a range of Natural Language Understanding (NLU) tasks.",
"cite_spans": [
{
"start": 143,
"end": 157,
"text": "Chomsky, 1957)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Licence details: http://creativecommons.org/licenses/by/4.0/. tational linguists (e.g. Keller, 2000; Lau et al., 2020; Warstadt et al., 2019) . Acceptability of sentences is defined as \"the extent to which a sentence is permissible or acceptable to native speakers of the language.\" (Lau et al., 2015 (Lau et al., , p.1618 , and there have been different approaches to studying it. Most work views acceptability as a binary phenomenon: the sentence is either acceptable/ grammatical or not (e.g. Warstadt et al., 2019) . Lau et al. (2014) show that the phenomenon is in fact gradient and is dependent on a larger context than just one sentence. While most experiments are theoretically-driven, the practical value of this research has been also underlined, especially with respect to language learning and error detection (Wagner et al., 2009; Heilman et al., 2014; Daudaravicius et al., 2016) .",
"cite_spans": [
{
"start": 87,
"end": 100,
"text": "Keller, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 101,
"end": 118,
"text": "Lau et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 119,
"end": 141,
"text": "Warstadt et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 283,
"end": 300,
"text": "(Lau et al., 2015",
"ref_id": "BIBREF10"
},
{
"start": 301,
"end": 322,
"text": "(Lau et al., , p.1618",
"ref_id": null
},
{
"start": 496,
"end": 518,
"text": "Warstadt et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 521,
"end": 538,
"text": "Lau et al. (2014)",
"ref_id": "BIBREF9"
},
{
"start": 822,
"end": 843,
"text": "(Wagner et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 844,
"end": 865,
"text": "Heilman et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 866,
"end": 893,
"text": "Daudaravicius et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Datasets for acceptability judgments require linguistic samples that are unacceptable, which requires a source of so-called negative examples. Previously, such samples have been either manually constructed, artificially generated through machine translation (Lau et al., 2020) , prepared by automatically distorting acceptable samples e.g. by deleting or inserting words or inflections (Wagner et al., 2009) or collected from theoretical linguistics books (Warstadt et al., 2019) . Using samples produced by language learners has not been mentioned in connection to acceptability and grammaticality studies. However, there are obvious benefits of getting authentic errors that automatic systems may meet in real-life. Another benefit of reusing samples from learner corpora is that they often contain not only corrections, but also labels describing the corrections. The major benefit, though, is that (un)acceptability judgments come from experts, i.e. teachers, assessors or trained assistants, and are therefore reliable. ",
"cite_spans": [
{
"start": 258,
"end": 276,
"text": "(Lau et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 386,
"end": 407,
"text": "(Wagner et al., 2009)",
"ref_id": "BIBREF18"
},
{
"start": 456,
"end": 479,
"text": "(Warstadt et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use the error-annotated learner corpus SweLL (Volodina et al., 2019) as a source of \"unacceptable\" sentences and select sentences containing corrections of the type that is of relevance to the SwedishGlue benchmark 2 (Adesam et al., 2020 ).",
"cite_spans": [
{
"start": 48,
"end": 71,
"text": "(Volodina et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 220,
"end": 240,
"text": "(Adesam et al., 2020",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset description",
"sec_num": "2"
},
{
"text": "In the current version, four lexical error types are included into the DaLAJ dataset (see Section 2.2). The resulting dataset contains 4 798 sentence pairs (correct-incorrect), where the two sentences in each sentence pair are identical to each other except for one error. In total, DaLAJ 1.0 contains 9 596 sentences (which is a sum of unacceptable sentences and their corrected \"twin\" sentences). To compare, Lau et al. (2014) use a dataset of 2 500 sentences and Warstadt et al. (2019) have about 10 700 sentences for a similar task. We have a possibility to extend the DaLAJ dataset by other correction types (spelling, morphological or syntactical) in future versions. The full SweLL dataset contains 29 285 correction tags, of which 25 878 may become relevant for the current task (omitting punctuation, consequence and unintelligibility correction tags).",
"cite_spans": [
{
"start": 466,
"end": 488,
"text": "Warstadt et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset description",
"sec_num": "2"
},
{
"text": "The SweLL data (Volodina et al., 2019) has been collected over four years (2017-2020) from adult learners of Swedish from formal educational set-tings, such as courses and tests. The collection contains about 680 pseudonymized essays in total, with 502 of those manually normalized (i.e. rewritten to standard Swedish) and annotated for the nature of the correction (aka error annotation). Table 2 shows the statistics over SweLL in number of essays and correction tags per level. Levels of the sentences correspond to the level of the course that learners were taking when they wrote essays. The essays represent several levels, namely:",
"cite_spans": [
{
"start": 15,
"end": 38,
"text": "(Volodina et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The source corpus",
"sec_num": "2.1"
},
{
"text": "A -beginner level B -intermediate level C -advanced level",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The source corpus",
"sec_num": "2.1"
},
{
"text": "The data is saved in two versions: the original and the normalized, with correction labels assigned to the links between the two versions. The 502 corr-annotated essays contain 29 285 corrections distributed over 35 correction tags, as listed in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The source corpus",
"sec_num": "2.1"
},
{
"text": "The linguistic acceptability task in the SwedishGlue is described as a natural language understanding (NLU) task conceptualized as binary judgments from a perspective relevant for research on language learning, language planning etc. (Adesam et al., 2020) . Semantic aspects of the sentence are the main focus of this task. This deviates from the type of language included into the CoLA dataset available through GLUE (Warstadt et al., 2019) , where also morphological and syntactic violations are included. In DaLAJ 1.0, we have selected four correction types from the SweLL corpus that would maximally correspond to the need of semantic interpretation of the context, namely L-W, L-Der, L-FL, O-Comp (Rudebeck and Sundberg, 2020), described below. L-W: Wrong word or phrase. The L-W tag represents the correction category wrong word or phrase. It is used when a word or phrase in the original text has been replaced by another word or Note the Engligh influence on the use of the word * busiga to convey the meaning that someone is * busy (Swe upptagen), the Swedish word busig meaning mischievous, naughty. L-Der: Word formation. The L-Der tag represents the correction category deviant word formation. It is used for corrections of the internal morphological structure of word stems, both with regard to compounding and to derivation.",
"cite_spans": [
{
"start": 234,
"end": 255,
"text": "(Adesam et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 418,
"end": 441,
"text": "(Warstadt et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of (un)grammatical sentences",
"sec_num": "2.2"
},
{
"text": "The L-Der tag is exclusively used for links between one-word units (not necessarily one-token units, since a word may mistakenly be written as two tokens), where the normalized word has kept at least one root morpheme from the original word, but where another morpheme has been removed, added, exchanged or had its form altered. For example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of (un)grammatical sentences",
"sec_num": "2.2"
},
{
"text": "De \u00e4r * stressiga p\u00e5 grund av studier \u2192 De \u00e4r stressade p\u00e5 grund av studier which may be translated as They are * stressy because of the studies \u2192 They are stressed because of the studies Note that * stressiga uses an existing derivation affix -ig(a), which is wrong in this context, instead of the correct suffix -ade, stressade. L-FL: Foreign word corrected to Swedish. The L-FL tag is used for words from a foreign (non-Swedish) language which have been corrected to a Swedish word. It may also be applied to words which have certain non-Swedish traits due to influence from a foreign language. For example, The O-Comp tag is used for corrections which involve the removal of a space between two words which have been interpreted as making up a compound in the normalized text version, or, more rarely, the adding of a space between two words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of (un)grammatical sentences",
"sec_num": "2.2"
},
{
"text": "Proceedings of the 10th Workshop on Natural Language Processing for Computer Assisted Language Learning NLP4CALL 2021It may also be used for corrections regarding the use of hyphens in compounds. Some examples, Jag k\u00e4nde mig * j\u00e4tte * konstig \u2192 Jag k\u00e4nde mig j\u00e4ttekonstig English: I felt very strange Distribution of the correction tags in the DaLAJ 1.0 dataset is shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 374,
"end": 381,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selection of (un)grammatical sentences",
"sec_num": "2.2"
},
{
"text": "The task of linguistic acceptability judgments is traditionally performed on the sentence level, where each sentence includes maximum one deviation. In real life learner-written sentences may contain several errors, but it has been shown that training algorithms on samples with focus on one error only produces better results than when mixing several errors in one sentence; extending the context to a paragraph may further improve the results (Katinskaia and Yangarber, 2021) . Paragraphs in learner data, however, are not predictable or well defined, and on several occasions in the SweLL data entire essays consist of one paragraph only. Including in the DaLAJ dataset full paragraphs, in certain cases equivalent to full essays, entails risks of revealing author identities through indications of author-related events or other identifiers despite our meticulous work on pseudonymization of essays (Volodina et al., 2020; Megyesi et al., 2018) . We assess, therefore, that we have no possibility to include paragraphs into the dataset due to the restrictions imposed by the GDPR, so we follow the generally accepted standard of single sentences with single deviations.",
"cite_spans": [
{
"start": 445,
"end": 477,
"text": "(Katinskaia and Yangarber, 2021)",
"ref_id": "BIBREF6"
},
{
"start": 903,
"end": 926,
"text": "(Volodina et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 927,
"end": 948,
"text": "Megyesi et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data format",
"sec_num": "2.3"
},
{
"text": "For each correction label used in the corpus data, we take the corrected target sentence and preserve only one erroneous segment in it to make it \"unacceptable\". This means that the same sentence can be repeated several times in the dataset, with different segments/deviations being in the focus. Positive samples are represented by the corrected sentences. We have data in a tab separated file format, with eight columns, namely: 1. Original (i.e. unacceptable) sentence, e.g. Figure 1 shows an excerpt from the dataset. Note that some of the sentences in the \"Corrected sentence\" column are repeated more than once. The corresponding original sentences contain a new error focus each time. The dataset is (by default) balanced with respect to the number of correct and incorrect samples, however, correct samples contain a number of duplicates which should be complimented by a corresponding number of unique correct sentences, which is something we will add in the next release of the dataset. The dataset is not equally balanced as far as number of sentences per level or per correction code are concerned, which is a more challenging problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 478,
"end": 486,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data format",
"sec_num": "2.3"
},
{
"text": "CoLA dataset authors have explicitly tested that the vocabulary used in their dataset belongs to the 100 000 most frequent words in the language (Warstadt et al., 2019) . In the case of DaLAJ, we have not done any such investigation since we believe that the vocabulary used by second language learners cannot be so advanced as to be outside the 100K most frequent words.",
"cite_spans": [
{
"start": 145,
"end": 168,
"text": "(Warstadt et al., 2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data format",
"sec_num": "2.3"
},
{
"text": "Initial experiments on the dataset, data splits and first baselines are reported in an extended version of this article, available at arXiv.org. The DaLAJ 1.0 dataset is freely available at the SwedishGlue webpage. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data format",
"sec_num": "2.3"
},
{
"text": "We see multiple advantages to use the proposed format for L2 data. Apart from a potential to share the data with wider community of researchers, it also (1) helps expand the data (each original sentence potentially generating several sentences) and (2) helps focus on one error only, facilitating finegrained analysis of model performance as well as human evaluation of model predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First analysis",
"sec_num": "3"
},
{
"text": "Our analysis has suggested, that the DaLAJ 1.0 dataset needs to be cleaned in several ways. First, the SweLL corpus contains a number of essays where learners add reference lists by the end of essays. Naturally, punctuation in reference lists is non-standard, among others not always containing full stop which sabbotages sentence segmentation. Besides, references are syntactically elliptical and do not fit into the standard language. We would need to clean the dataset of all such sentences to ensure more objective training and testing. Second, some sentences contain \"hanging\" titles or e-mail headers. Those hanging elements have not been separated by a full stop in the original essays, and have been prefixed to the next following sentence, which, again, can interfere with model training, e.g. Yet another observed weakness of the DaLAJ 1.0 dataset, is that the positive sentences are repetitive. Since the models need to be trained on unique samples, we plan to exchange the non-unique ones with other sentences. Luckily, positive samples are easier to find than negative ones. We plan to use a corpus of L2 coursebooks graded for levels of proficiency, COCTAILL (Volodina et al., 2014) , to replace duplicate sentences with the ones of equivalent level, and as far as possible, having similar linguistic features and length. Another potential source of in-domain positive sentences are SweLL sentences that do not contain any correction tags. However, such sentences are not many, and we would still need to use COCTAILL sentences or some other correct sentences.",
"cite_spans": [
{
"start": 1173,
"end": 1196,
"text": "(Volodina et al., 2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "First analysis",
"sec_num": "3"
},
{
"text": "The described changes will be introduced in DaLAJ 1.1 and in the test test for DaLAJ 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First analysis",
"sec_num": "3"
},
{
"text": "Finally, there is an important difference between the type of sentences used in CoLA and DaLAJ datasets. CoLA sentences are constructed manually for linguistic course books exemplifying various theoretically important linguistic features, and do not require wider context to interpret; whereas DaLAJ sentences are torn out of their natural context, and contain anaphoric references and elliptical structures. However, the applied value of training (machine learning) algorithms on DaLAJ sentences is higher than CoLA sentences (as we imagine that) since such models can be used in language learning context for writing support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First analysis",
"sec_num": "3"
},
{
"text": "Datasets and corpora collected from (second) language learners contain private information represented both on the metadata level and -depending on the topic -in the texts. Presence of personal information makes those datasets non-trivial to share with the public in a FAIR 4 way (Frey et al., 2020; Volodina et al., 2020) , to say nothing of a potential to use such data for shared tasks. This is rather unfortunate since collection and preparation of such corpora is an extremely time-consuming and expensive process. Language learner datasets can seldom boast big sizes appropriate for training data-greedy machine learning algorithms, and could therefore benefit from aggregating data from several sources -provided they are accessible. Access to such data, besides, ensures transparency of the research and stimulates its fast development (MacWhinney, 2017; Marsden et al., 2018) .",
"cite_spans": [
{
"start": 280,
"end": 299,
"text": "(Frey et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 300,
"end": 322,
"text": "Volodina et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 844,
"end": 862,
"text": "(MacWhinney, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 863,
"end": 884,
"text": "Marsden et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "As data owners, we have to face two contradictory forces: one requiring open sharing, and the other preventing it. Among advocates for sharing data openly we see \u2022 national and international funding agencies, e.g. Swedish Research Council 5 or European Commission 6 , requiring guarantees from grant holders that any produced data will be made available for other researchers, \u2022 national and international infrastructures, e.g. Clarin 7 or SLABank, 8 and \u2022 updated journal policies (e.g. The Modern Language Journal). 9 On the more restrictive side, we have national Ethical Review Authorities 10 and the General Data Protection Regulation, GDPR (Commission, 2016), described shortly below. The Swedish Ethical Review Authority currently requires that we keep the original data (e.g. hand-written/ non-transcribed/ nonpseudonymized essays) for ten years after the project end so that researchers, who may question the trustworthiness of the original data handling, can require access to the original data for inspection. This means that the data owners need to keep mappings between learner names and their corpus IDs to make it possible to link de-identified and pseudonymized essays to their original versions. General Data Protection Regulation sets certain limitations on the data where personal data occurs, among others: \u2022 learner identities should be protected, e.g. pseudonymized or de-identified;",
"cite_spans": [
{
"start": 518,
"end": 519,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "\u2022 data need to be removed if any of the data providers (=learners) requests that;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "\u2022 users that are granted access to the data should have affiliation inside Europe; and \u2022 questions that users can work with are limited to the ones stated in the consent forms, in the case of SweLL encompassing research on and didactic applications for language learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "To meet these requirements, data owners need to administer data access through an application form, where applicants have to be asked about their geographical location and research questions, and need to be informed about the limitations of spreading data to unauthorized users, etc. Users outside Europe can file an application to the university lawyers who have to consider them on a case-to-case basis. The GDPR applies to the data as long as a mapping of learner names with their corpus IDs (as required by the Ethical Review Authorities) is not destroyed. At a certain point of time (currently 10 years) the mapping key will be destroyed and the data will no longer be under the GDPR protection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "In both cases, a 10-year quarantine is obligatory. The restrictions above do not seem to hamper most of the potential EU-based researchers from getting access to the data in its entirety, especially researchers working with qualitative analysis of the data inside a limited project group, e.g. Second Language Acquisition researchers or researchers on language assessment. However, when it comes to the NLP field, the most effective way to stimulate research is to organize shared tasks or provide access to testing and evaluation datasets without any extra administration, as it is, for example, done in the GLUE 11 and SuperGLUE 12 benchmarks (Wang et al., 2018 (Wang et al., , 2019 .",
"cite_spans": [
{
"start": 645,
"end": 663,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF20"
},
{
"start": 664,
"end": 684,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "From the above it follows that data owners need to keep a promise to the funding agencies to make the data open, and at the same time, to follow the legislation and keep the data locked within Europe and only for research questions dealing with language learning. Being representatives of a \"trapped researcher\" group, we have been considering how to make learner data available for a wider audience. For a range of NLP tasks we suggest, thus, sharing L2 data in a sentence scrambled way with limited amount of socio-demographic metadata, for example for error detection & correction tasks. The DaLAJ dataset is a proof-ofconcept attempt in this direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "Ultimately, the education NLP community working with L2 datasets would win by setting up a benchmark with available (multilingual) datasets in the same way as GLUE benchmark is doing for Natural Language Understanding (NLU) tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reflections on access to learner data",
"sec_num": "4"
},
{
"text": "We have presented a new dataset for Swedish which can be used for a variety of tasks in Natural Language Processing (NLP) or Second Language Acquisition (SLA) contexts. We see our contributions both with regards to the dataset, as well as with suggesting a format for L2 datasets that may allow sharing learner data more openly in the GDPR age.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "In the near future, we will test binary linguistic acceptability classification on the current selection of correction categories, and on the full SweLL dataset (all correction tags), per error category and level, establishing baselines for this task on this dataset. We plan to correlate the classification results with correction categories, levels and L1s. Further, we plan to apply models, trained on DaLAJ, to real learner data containing multiple errors per sentence, to assess the effect of data manipulation (i.e. original essays > DaLAJ format) on algorithm training. Proofreading the dataset and addressing identified weaknesses and errors is another direction for the future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "In some more distant future we would like to organize shared tasks using DaLAJ. Apart from binary classification for linguistic acceptability judgments, we see a potential of using DaLAJ dataset (in extended version to cover the full correction tagset) for a range of other tasks, including:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "\u2022 error detection (identification of error location) \u2022 error classification (labeling for error type) \u2022 error correction (generating correction suggestions) \u2022 first language identification (given samples written by learners, to identify their mother tongues) \u2022 classification of sentences by the level of proficiency of its writers, and other potential tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "Proceedings of the 10th Workshop on Natural Language Processing for Computer Assisted Language Learning (NLP4CALL 2021) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "5"
},
{
"text": "SwedishGlue is a collection of datasets for training and/or evaluating language models for a range of Natural Language Understanding (NLU) tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spraakbanken.gu.se/en/resources/swedishglue Proceedings of the 10th Workshop on Natural Language Processing for Computer Assisted Language Learning (NLP4CALL 2021)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "FAIR: Findable, Accessible, Interoperable, Reusable (Wilkinson et al., 2016) 5 https://www.vr.se/english/mandates/open-science/openaccess-to-research-data.html 6 https://ec.europa.eu/info/research-andinnovation/strategy/goals-research-and-innovationpolicy/open-science/open-access_en 7 https://www.clarin.eu/ 8 https://slabank.talkbank.org/ 9 https://onlinelibrary.wiley.com/journal/15404781 10 https://www.government.se/government-agencies/theswedish-ethics-review-authority-etikprovningsmyndigheten/ Proceedings of the 10th Workshop on Natural Language Processing for Computer Assisted Language Learning (NLP4CALL 2021)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://gluebenchmark.com/ 12 https://super.gluebenchmark.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by Nationella Spr\u00e5kbanken -jointly funded by its 10 partner institutions and the Swedish Research Council (dnr 2017-00626), as well as partly supported by a grant from the Swedish Riksbankens Jubileumsfond (SweLL -research infrastructure for Swedish as a second language, dnr IN16-0464:1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Appendix A. Overview of all correction types in the source corpus ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendices",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SwedishGLUE: Towards a Swedish Test Set for Evaluating Natural Language Understanding Models",
"authors": [
{
"first": "Yvonne",
"middle": [],
"last": "Adesam",
"suffix": ""
},
{
"first": "Aleksandrs",
"middle": [],
"last": "Berdicevskis",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Morger",
"suffix": ""
}
],
"year": 2020,
"venue": "Research Reports from the Department of Swedish, GU-ISS-2020-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvonne Adesam, Aleksandrs Berdicevskis, and Felix Morger. 2020. SwedishGLUE-Towards a Swedish Test Set for Evaluating Natural Language Under- standing Models. Research Reports from the De- partment of Swedish. GU-ISS-2020-04.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Syntactic structures",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1957,
"venue": "The Hague: Mouton",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1957. Syntactic structures. The Hague: Mouton.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "General data protection regulation",
"authors": [],
"year": 2016,
"venue": "Official Journal of the European Union",
"volume": "59",
"issue": "",
"pages": "1--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "European Commission. 2016. General data protection regulation. Official Journal of the European Union, 59, 1-88.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A report on the automatic evaluation of scientific writing shared task",
"authors": [
{
"first": "Vidas",
"middle": [],
"last": "Daudaravicius",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Volodina",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "53--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidas Daudaravicius, Rafael E Banchs, Elena Volo- dina, and Courtney Napoles. 2016. A report on the automatic evaluation of scientific writing shared task. In Proceedings of the 11th Workshop on Inno- vative Use of NLP for Building Educational Appli- cations, pages 53-62.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Creating a learner corpus infrastructure: Experiences from making learner corpora available",
"authors": [
{
"first": "Jennifer-Carmen",
"middle": [],
"last": "Frey",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "K\u00f6nig",
"suffix": ""
},
{
"first": "Darja",
"middle": [],
"last": "Fi\u0161er",
"suffix": ""
}
],
"year": 2020,
"venue": "ITM Web of Conferences",
"volume": "33",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer-Carmen Frey, Alexander K\u00f6nig, and Darja Fi\u0161er. 2020. Creating a learner corpus infrastructure: Experiences from making learner corpora available. In ITM Web of Conferences, volume 33, page 03006. EDP Sciences.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Predicting grammaticality on an ordinal scale",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Melissa",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mulholland",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "174--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman, Aoife Cahill, Nitin Madnani, Melissa Lopez, Matthew Mulholland, and Joel Tetreault. 2014. Predicting grammaticality on an ordinal scale. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 174-180.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Assessing Grammatical Correctness in Language Learning",
"authors": [
{
"first": "Anisia",
"middle": [],
"last": "Katinskaia",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Sixteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anisia Katinskaia and Roman Yangarber. 2021. As- sessing Grammatical Correctness in Language Learning. In Proceedings of the Sixteenth Workshop on Innovative Use of NLP for Building Educational Applications.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gradience in grammar: Experimental and computational aspects of degrees of grammaticality",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Keller. 2000. Gradience in grammar: Exper- imental and computational aspects of degrees of grammaticality. Ph.D. thesis.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context",
"authors": [
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Armendariz",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Shu",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "296--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Carlos Armendariz, Shalom Lappin, Matthew Purver, and Chang Shu. 2020. How Fu- riously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context. Transactions of the Asso- ciation for Computational Linguistics, 8:296-310.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Measuring gradience in speakers' grammaticality judgements",
"authors": [
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2014. Measuring gradience in speakers' gram- maticality judgements. In Proceedings of the An- nual Meeting of the Cognitive Science Society, vol- ume 36.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised prediction of acceptability judgements",
"authors": [
{
"first": "Jey Han",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1618--1628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lap- pin. 2015. Unsupervised prediction of acceptabil- ity judgements. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1618-1628.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A shared platform for studying second language acquisition",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "MacWhinney",
"suffix": ""
}
],
"year": 2017,
"venue": "Language Learning",
"volume": "67",
"issue": "S1",
"pages": "254--275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian MacWhinney. 2017. A shared platform for studying second language acquisition. Language Learning, 67(S1):254-275.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Data, open science, and methodological reform in second language acquisition research",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Marsden",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Plonsky",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gudmestad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 2018,
"venue": "Critical Reflections on Data in Second Language Acquisition",
"volume": "51",
"issue": "",
"pages": "219--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Marsden, Luke Plonsky, A Gudmestad, and A Edmonds. 2018. Data, open science, and method- ological reform in second language acquisition re- search. Critical reflections on data in second lan- guage acquisition, 51:219-228.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learner corpus anonymization in the age of GDPR: Insights from the creation of a learner corpus of Swedish",
"authors": [
{
"first": "Be\u00e1ta",
"middle": [],
"last": "Megyesi",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Granstedt",
"suffix": ""
},
{
"first": "Sofia",
"middle": [],
"last": "Johansson",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Prentice",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Ros\u00e9n",
"suffix": ""
},
{
"first": "Carl-Johan",
"middle": [],
"last": "Schenstr\u00f6m",
"suffix": ""
},
{
"first": "Gunl\u00f6g",
"middle": [],
"last": "Sundberg",
"suffix": ""
},
{
"first": "Mats",
"middle": [],
"last": "Wir\u00e9n",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Volodina",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 7th Workshop on NLP for Computer Assisted Language Learning",
"volume": "152",
"issue": "",
"pages": "47--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Be\u00e1ta Megyesi, Lena Granstedt, Sofia Johansson, Ju- lia Prentice, Dan Ros\u00e9n, Carl-Johan Schenstr\u00f6m, Gunl\u00f6g Sundberg, Mats Wir\u00e9n, and Elena Volod- ina. 2018. Learner corpus anonymization in the age of gdpr: Insights from the creation of a learner cor- pus of swedish. In Proceedings of the 7th Workshop on NLP for Computer Assisted Language Learning (NLP4CALL 2018) at SLTC, Stockholm, 7th Novem- ber 2018, 152, pages 47-56. Link\u00f6ping University Electronic Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Correction annotation guidelines. SweLL project",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Rudebeck",
"suffix": ""
},
{
"first": "Gunl\u00f6g",
"middle": [],
"last": "Sundberg",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Rudebeck and Gunl\u00f6g Sundberg. 2020. Correc- tion annotation guidelines. SweLL project. Techni- cal report.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The SweLL Language Learner Corpus: From Design to Annotation",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Volodina",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Granstedt",
"suffix": ""
},
{
"first": "Arild",
"middle": [],
"last": "Matsson",
"suffix": ""
},
{
"first": "Be\u00e1ta",
"middle": [],
"last": "Megyesi",
"suffix": ""
},
{
"first": "Ildik\u00f3",
"middle": [],
"last": "Pil\u00e1n",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Prentice",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Ros\u00e9n",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Rudebeck",
"suffix": ""
},
{
"first": "Carl-Johan",
"middle": [],
"last": "Schenstr\u00f6m",
"suffix": ""
},
{
"first": "Gunl\u00f6g",
"middle": [],
"last": "Sundberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Northern European Journal of Language Technology",
"volume": "6",
"issue": "",
"pages": "67--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Volodina, Lena Granstedt, Arild Matsson, Be\u00e1ta Megyesi, Ildik\u00f3 Pil\u00e1n, Julia Prentice, Dan Ros\u00e9n, Lisa Rudebeck, Carl-Johan Schenstr\u00f6m, Gunl\u00f6g Sundberg, et al. 2019. The SweLL Language Learner Corpus: From Design to Annotation. Northern European Journal of Language Technol- ogy, 6:67-104.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards privacy by design in learner corpora research: A case of on-the-fly pseudonymization of swedish learner essays",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Volodina",
"suffix": ""
},
{
"first": "Yousuf",
"middle": [
"Ali"
],
"last": "Mohammed",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Derbring",
"suffix": ""
},
{
"first": "Arild",
"middle": [],
"last": "Matsson",
"suffix": ""
},
{
"first": "Be\u00e1ta",
"middle": [],
"last": "Megyesi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "357--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Volodina, Yousuf Ali Mohammed, Sandra Der- bring, Arild Matsson, and Be\u00e1ta Megyesi. 2020. To- wards privacy by design in learner corpora research: A case of on-the-fly pseudonymization of swedish learner essays. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 357-369.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "You get what you annotate: a pedagogically annotated corpus of coursebooks for Swedish as a Second Language",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Volodina",
"suffix": ""
},
{
"first": "Ildik\u00f3",
"middle": [],
"last": "Pil\u00e1n",
"suffix": ""
},
{
"first": "Stian",
"middle": [
"R\u00f8dven"
],
"last": "Eide",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Heidarsson",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the third workshop on NLP for computer-assisted language learning",
"volume": "",
"issue": "",
"pages": "128--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Volodina, Ildik\u00f3 Pil\u00e1n, Stian R\u00f8dven Eide, and Hannes Heidarsson. 2014. You get what you anno- tate: a pedagogically annotated corpus of course- books for Swedish as a Second Language. In Proceedings of the third workshop on NLP for computer-assisted language learning, pages 128- 144.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Judging grammaticality: Experiments in sentence classification",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2009,
"venue": "Calico Journal",
"volume": "26",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Wagner, Jennifer Foster, Josef van Genabith, et al. 2009. Judging grammaticality: Experiments in sentence classification. Calico Journal, 26(3):474-",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.00537"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Super- glue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.07461"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. arXiv preprint arXiv:1804.07461.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural network acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "625--641",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2019. Neural network acceptability judg- ments. Transactions of the Association for Compu- tational Linguistics, 7:625-641.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The FAIR Guiding Principles for scientific data management and stewardship. Scientific data",
"authors": [
{
"first": "Mark",
"middle": [
"D"
],
"last": "Wilkinson",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Dumontier",
"suffix": ""
},
{
"first": "IJsbrand",
"middle": [
"Jan"
],
"last": "Aalbersberg",
"suffix": ""
},
{
"first": "Gabrielle",
"middle": [],
"last": "Appleton",
"suffix": ""
},
{
"first": "Myles",
"middle": [],
"last": "Axton",
"suffix": ""
},
{
"first": "Arie",
"middle": [],
"last": "Baak",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "Blomberg",
"suffix": ""
},
{
"first": "Jan-Willem",
"middle": [],
"last": "Boiten",
"suffix": ""
},
{
"first": "Luiz",
"middle": [],
"last": "Bonino da Silva Santos",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"E"
],
"last": "Bourne",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific Data",
"volume": "3",
"issue": "1",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D Wilkinson, Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E Bourne, et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. Scientific data, 3(1):1-9.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "An excerpt from the dataset phrase in the normalized version. It is placed on units which are exchanged rather than corrected. For example, Alla blir * busiga med sociala medier \u2192 Alla blir upptagna med sociala medier which may be verbatim translated as Everyone is * naughty with social media \u2192 Everyone is busy with social media",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Jag och min * family \u2192 Jag och min familj (English: I and my family). O-Comp: Spaces and hyphens between words.",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "(Swe) En B-institution-entusiast Hej Segerstad kommun ! > (Eng) A B-institute-enthusiast Hi Segerstad municipality !",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Overview of all correction types in the SweLL corpus, part 2",
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "Statistics over the SweLL data",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"text": "1. Original sentence, e.g. Men pengarna \u00e4r inte * alls (Eng. But money is not *at all) 2. Corrected sentence, e.g. Men pengarna \u00e4r inte allt (Eng. But money is not everything) 3. Error string indices, e.g. 21-24 4. Correct string indices, e.g. 21-24 5. Error-correction pair, e.g. alls-allt 6. Error label, e.g. L-W 7. Mother tongue(s) (L1), e.g. Somali 8. Approximate level, e.g.",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}