{ "paper_id": "C18-1021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:09:27.719227Z" }, "title": "Lexi: A tool for adaptive, personalized text simplification", "authors": [ { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Copenhagen", "location": { "country": "Denmark" } }, "email": "bingel@di.ku.dk" }, { "first": "Gustavo", "middle": [ "H" ], "last": "Paetzold", "suffix": "", "affiliation": { "laboratory": "", "institution": "Federal University of Technology", "location": {} }, "email": "ghpaetzold@utfpr.edu.br" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Copenhagen", "location": { "country": "Denmark" } }, "email": "soegaard@di.ku.dk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Most previous research in text simplification has aimed to develop generic solutions, assuming very homogeneous target audiences with consistent intra-group simplification needs. We argue that this assumption does not hold, and that instead we need to develop simplification systems that adapt to the individual needs of specific users. As a first step towards personalized simplification, we propose a framework for adaptive lexical simplification and introduce Lexi, a free open-source and easily extensible tool for adaptive, personalized text simplification. Lexi is easily installed as a browser extension, enabling easy access to the service for its users.", "pdf_parse": { "paper_id": "C18-1021", "_pdf_hash": "", "abstract": [ { "text": "Most previous research in text simplification has aimed to develop generic solutions, assuming very homogeneous target audiences with consistent intra-group simplification needs. We argue that this assumption does not hold, and that instead we need to develop simplification systems that adapt to the individual needs of specific users. As a first step towards personalized simplification, we propose a framework for adaptive lexical simplification and introduce Lexi, a free open-source and easily extensible tool for adaptive, personalized text simplification. Lexi is easily installed as a browser extension, enabling easy access to the service for its users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many a research paper on text simplification starts out by sketching the problem of text simplification as rewriting a text such that it becomes easier to read, changing or removing as little of its informational content as possible (Zhu et al., 2010; Coster and Kauchak, 2011; De Belder and Moens, 2010; Paetzold and Specia, 2015; Bingel and S\u00f8gaard, 2016) . Such a statement may describe the essence of simplification as a research task, but it hides the fact that it is not always easy to decide what is easy for a particular user. 
This paper discusses why we need custom-tailored simplifications for individual users, and argues that previous research on non-adaptive text simplification has been too generic to unfold the full potential of text simplification.", "cite_spans": [ { "start": 233, "end": 251, "text": "(Zhu et al., 2010;", "ref_id": "BIBREF33" }, { "start": 252, "end": 277, "text": "Coster and Kauchak, 2011;", "ref_id": "BIBREF8" }, { "start": 278, "end": 304, "text": "De Belder and Moens, 2010;", "ref_id": "BIBREF9" }, { "start": 305, "end": 331, "text": "Paetzold and Specia, 2015;", "ref_id": "BIBREF18" }, { "start": 332, "end": 357, "text": "Bingel and S\u00f8gaard, 2016)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Even when limiting ourselves to lexical substitution, i.e. the task of reducing the complexity of a document by replacing difficult words with easier-to-read synonyms, we see plenty of evidence that, for instance, dyslexics are highly individual in what material is deemed easy and complex (Ziegler et al., 2008) . Lexi, which we introduce in this paper, is a free, open-source and easily extensible tool for adaptively learning what items specific users find difficult, using this information to provide better (lexical) simplification. Our system initially serves Danish, but is easily extended to further languages. For surveys of text simplification, including resources across languages, see Siddharthan (2014) , Shardlow (2014b) and Collins-Thompson (2014).", "cite_spans": [ { "start": 290, "end": 312, "text": "(Ziegler et al., 2008)", "ref_id": "BIBREF34" }, { "start": 697, "end": 715, "text": "Siddharthan (2014)", "ref_id": "BIBREF28" }, { "start": 718, "end": 734, "text": "Shardlow (2014b)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Text simplification is a diverse task, or perhaps rather a family of tasks, with a number of different target audiences that different papers and research projects have focused on. Among the most prominent target audiences are foreign language learners, for whom various approaches to simplifying text have been pursued, often focusing on lexical (Tweissi, 1998) but also sentence-level simplification (Liu and Matsumoto, 2016) . Other notable groups that have been specifically targeted in text simplification research include dyslexics (Rello et al., 2013) , and the aphasic (Carroll et al., 1998) , for whom particularly long words and sentences, but also certain surface forms such as specific character combinations, may pose difficulties. People on the autism spectrum have also been addressed, with the focus lying on reducing the amount of figurative expressions in a text or reducing syntactic complexity (Evans et al., 2014) . 
Reading beginners (both children and adults) are another group with very particular needs, and text simplification research has tried to provide this group with methods to reduce the amount of high-register language and non-frequent words (De Belder and Moens, 2010) .", "cite_spans": [ { "start": 347, "end": 362, "text": "(Tweissi, 1998)", "ref_id": "BIBREF29" }, { "start": 402, "end": 427, "text": "(Liu and Matsumoto, 2016)", "ref_id": "BIBREF16" }, { "start": 538, "end": 558, "text": "(Rello et al., 2013)", "ref_id": "BIBREF24" }, { "start": 577, "end": 599, "text": "(Carroll et al., 1998)", "ref_id": "BIBREF6" }, { "start": 914, "end": 934, "text": "(Evans et al., 2014)", "ref_id": "BIBREF11" }, { "start": 1180, "end": 1203, "text": "Belder and Moens, 2010)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "There is no one-size-fits-all solution to text simplification", "sec_num": "1.1" }, { "text": "Evidently, each target group has its own simplification needs, and there is considerable variation as to how well the specifics of what makes a text difficult is defined for each group and simplification strategy. While difficult items in a text may be identified more easily and generally for problems such as resolving pronoun reference, questions such as what makes a French word difficult for a native speaker of Japanese, or what dyslexic children consider a difficult character combination or an overly long sentence, are much harder to answer. Nevertheless, there is a vast body of work (Yatskar et al., 2010; Biran et al., 2011; Horn et al., 2014 ) that ventures to build very general-purpose simplification models from simplification corpora such as the Simple English Wikipedia corpus (Coster and Kauchak, 2011) , which has been edited by amateurs without explicit regard to a specific audience, and with rather vague guidelines as to what constitutes difficult or simple language.", "cite_spans": [ { "start": 594, "end": 616, "text": "(Yatskar et al., 2010;", "ref_id": "BIBREF31" }, { "start": 617, "end": 636, "text": "Biran et al., 2011;", "ref_id": "BIBREF5" }, { "start": 637, "end": 654, "text": "Horn et al., 2014", "ref_id": "BIBREF14" }, { "start": 795, "end": 821, "text": "(Coster and Kauchak, 2011)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "There is no one-size-fits-all solution to text simplification", "sec_num": "1.1" }, { "text": "Other work in simplification attempts to answer the above questions by inducing models from specifically compiled datasets, which for instance may have been collected by surveying specific target groups and asking them to indicate difficult material in a text. Yet even those approaches often cannot live up to the real challenges in simplification, seeing that we find very heterogeneous simplification needs also within target groups. Foreign language learners with different linguistic backgrounds (pertaining both to their native and second languages) will find very different aspects of the same foreign language difficult. Young readers in different school grades will quickly advance their reading habits and skills, and also within the same class or age reading levels may differ greatly. 
Likewise, people with autism exhibit very different manifestations of the type and degree of their condition (Alexander et al., 2016) , also with respect to reading (Evans et al., 2014) , just as there exist many different forms of cognitive impairments affecting literacy, including many different forms of dyslexia (Watson and Goldgar, 1988; Bakker, 1992; Ziegler et al., 2008) . In fact, while there is a relatively strong agreement on the existence of some typologies of dyslexia or autism, specific typologies that have been proposed are heavily debated, such that it would not even be straightforward to create simplification tools for specific subtypes of these conditions.", "cite_spans": [ { "start": 906, "end": 930, "text": "(Alexander et al., 2016)", "ref_id": "BIBREF0" }, { "start": 962, "end": 982, "text": "(Evans et al., 2014)", "ref_id": "BIBREF11" }, { "start": 1114, "end": 1140, "text": "(Watson and Goldgar, 1988;", "ref_id": "BIBREF30" }, { "start": 1141, "end": 1154, "text": "Bakker, 1992;", "ref_id": "BIBREF2" }, { "start": 1155, "end": 1176, "text": "Ziegler et al., 2008)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "There is no one-size-fits-all solution to text simplification", "sec_num": "1.1" }, { "text": "From this it becomes apparent that in order to build simplification systems that truly help specific individuals, those systems have to be personalized or personalizable. Further, due to the frequent lack of insight into what an individual's specific reading problems are (and because any introspection is difficult to verify), such systems need to be able to learn themselves what those individual challenges are, and ultimately adapt to those.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "There is no one-size-fits-all solution to text simplification", "sec_num": "1.1" }, { "text": "In order to learn specific reading challenges for an individual person, a simplification system needs individual data for this person, from which a personalized model can then be induced. This brings up the question of how best to obtain such data. A straightforward approach would be to ask each individual to provide ratings for some number of stimuli as they start using a simplification system. However, this would pose a relatively unnatural reading scenario, which might introduce a certain bias in the data and thus distort the induced model. Further, it might create a dissatisfying user experience, and users might not be willing to invest much time into such a calibration phase, especially when they perceive reading as a particularly strenuous activity. Yet perhaps most importantly, the model will not necessarily be well-adapted to the specific domains and genres that a specific user typically consumes text from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Obtaining individual data", "sec_num": "1.2" }, { "text": "As an alternative, we propose to collect data as the system is used, and to continuously update the system with feedback it collects from the user. In this way, the system can base its model on exactly those text types the user consumes. We discuss how feedback can be incorporated into a system in Section 3 and provide details on how this is implemented in our proposed system in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Obtaining individual data", "sec_num": "1.2" }, { "text": "We present Lexi, an open source and easily extensible tool for adaptive, personalized text simplification. 
Lexi is based on an adaptive framework for lexical simplification that we also describe in this paper. This framework incorporates feedback from users, updating personalized simplification models such as to meet their individual simplification needs. Lexi is made publicly available under a CC-BY-NC license 1 at https://www.readwithlexi.net.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "1.3" }, { "text": "Perhaps the earliest contribution that focuses on on-demand lexical simplification is the work of Devlin and Unthank (2006) , who present HAPPI, a web platform that allows users to request simplified versions of words, as well as other \"memory jogging\" pieces of information, such as related images.", "cite_spans": [ { "start": 98, "end": 123, "text": "Devlin and Unthank (2006)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another example is the work of Azab et al. (2015) , who present a web platform that allows users to select words they do not comprehend, then presents them with synonyms in order to facilitate comprehension. Notice that their approach does not simplify the selected complex words directly, it simply shows semantically equivalent alternatives that could be within the vocabulary known by the user.", "cite_spans": [ { "start": 31, "end": 49, "text": "Azab et al. (2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The recent work of Paetzold and Specia (2016a) describes Anita, yet another web platform of this kind. It allows users to select complex words and then request a simplified version, related images, synonyms, definitions and translations. Paetzold and Specia (2016a) claim that their approach outputs customized simplifications depending on the user's profile, and evolves as users provide feedback on the output produced. However, they provide no details of the approach they use to do so, nor do they present any results showcasing its effectiveness.", "cite_spans": [ { "start": 19, "end": 46, "text": "Paetzold and Specia (2016a)", "ref_id": "BIBREF19" }, { "start": 238, "end": 265, "text": "Paetzold and Specia (2016a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Therefore not counting Paetzold and Specia (2016a) as work in personalized simplification, we are not aware of any previous approaches that address this. We further refer to related work on specific aspects of text simplification as they become relevant in the course of this paper.", "cite_spans": [ { "start": 23, "end": 50, "text": "Paetzold and Specia (2016a)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As we mapped out in the introduction, we devise a simplification system that continuously learns from user feedback and adapts to the user's simplification needs. This section discusses how such feedback can be incorporated into a lexical simplification model via online learning, and where in the lexical simplification pipeline it is sensible to implement adaptivity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive text simplification", "sec_num": "3" }, { "text": "Lexical simplification, i.e. replacing single words with simpler synonyms, classically employs a pipeline approach illustrated in Figure 1 (Shardlow, 2014a; Paetzold and Specia, 2015) . 
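As a rough preview of how the four steps described next fit together, consider the following sketch (a hypothetical interface for illustration only, not Lexi's actual API):

```python
def simplify_word(sentence, target, model):
    """Illustrative skeleton of the pipeline in Figure 1.

    `model` is a hypothetical object bundling the four (possibly
    user-specific) pipeline components.
    """
    if not model.is_complex(sentence, target):       # Complex Word Identification
        return target
    candidates = model.generate(target)              # Substitution Generation
    candidates = model.select(sentence, candidates)  # Substitution Selection
    ranked = model.rank(sentence, candidates)        # Substitution Ranking
    return ranked[0] if ranked else target           # simplest surviving candidate
```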
This pipeline consists of a four-step process, the first step of which is to identify simplification targets, i.e. words that the model believes will pose a difficulty for the user. This step is called Complex Word Identification (CWI) and has received a great deal of attention in the community, including two shared tasks (Paetzold and Specia, 2016b; Yimam et al., 2018) . In a second step, known as Substitution Generation, synonyms are retrieved as candidate replacements for the target. These are then filtered to match the context, resolving word sense ambiguities or stylistic mismatches, in Substitution Selection. Finally, the filtered candidates are ranked in order of simplicity in what is known as Substitution Ranking (SR).", "cite_spans": [ { "start": 139, "end": 156, "text": "(Shardlow, 2014a;", "ref_id": "BIBREF26" }, { "start": 157, "end": 183, "text": "Paetzold and Specia, 2015)", "ref_id": "BIBREF18" }, { "start": 510, "end": 538, "text": "(Paetzold and Specia, 2016b;", "ref_id": "BIBREF20" }, { "start": 539, "end": 558, "text": "Yimam et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 130, "end": 138, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Adaptivity in the lexical simplification pipeline", "sec_num": "3.1" }, { "text": "Out of these four steps, we consider CWI and SR as the most natural ones to make adaptive, whereas generating and selecting candidates can be regarded as relatively independent of a specific user. In order to implement adaptivity, we propose to make use of online learning methods as discussed below and, departing from a seed model, to train and maintain user-specific models as we collect feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptivity in the lexical simplification pipeline", "sec_num": "3.1" }, { "text": "Complex Word Identification is usually approached as a binary classification task, where the goal is to decide for some word in context whether or not it poses a difficulty to a reader. Existing datasets, for instance the ones used at previous CWI shared tasks (Paetzold and Specia, 2016b; Yimam et al., 2018) , therefore provide a sentence and a target word (or multi-word expression) together with a binary label.", "cite_spans": [ { "start": 261, "end": 289, "text": "(Paetzold and Specia, 2016b;", "ref_id": "BIBREF20" }, { "start": 290, "end": 309, "text": "Yimam et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptive CWI", "sec_num": "3.1.1" }, { "text": "A model trained on this data with a learning algorithm based on gradient descent on t examples can now easily integrate newly collected data points into its parameters \u03b8 using an update rule such as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive CWI", "sec_num": "3.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\theta^{(t+1)} = \theta^{(t)} - \eta \nabla_{\theta^{(t)}} J(\theta^{(t)}; x, y),", "eq_num": "(1)" } ], "section": "Adaptive CWI", "sec_num": "3.1.1" }, { "text": "where x is a representation of a target word in context and y is a binary complexity label we receive from user feedback. As an alternative to gradient descent based algorithms, we can use other online learning models, e.g. the Perceptron algorithm. 
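For concreteness, a minimal sketch of such an update for a logistic-regression CWI model (illustrative code, not Lexi's actual implementation; the learning rate is an assumed value):

```python
import numpy as np

def online_cwi_update(theta, x, y, eta=0.1):
    """One stochastic gradient step as in Equation (1).

    theta: current parameters of the personalized CWI model
    x:     feature vector representing a target word in context
    y:     binary complexity label (1 = difficult) from user feedback
    """
    p = 1.0 / (1.0 + np.exp(-np.dot(theta, x)))  # predicted probability of "complex"
    grad = (p - y) * x                           # gradient of the log-loss J w.r.t. theta
    return theta - eta * grad                    # theta^(t+1)
```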
CWI datasets are typically not very large (between 2,500 and 5,500 positive examples per dataset in the mentioned shared tasks), such that data points sampled from users can quickly have an impact on a generic base model. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive CWI", "sec_num": "3.1.1" }, { "text": "Substitution Ranking has received relatively little attention in the community compared to CWI. Most lexical simplifiers rank candidates using unsupervised approaches. The earliest example is the approach of Carroll et al. (1998) , who rank candidates according to their Kucera-Francis coefficients, which are calculated based on frequencies extracted from the Brown corpus (Rudell, 1993) . Other unsupervised approaches, such as those of Ligozat et al. (2012) and Glava\u0161 and \u0160tajner (2015) , go a step further and use metrics that incorporate multiple aspects of word complexity, including context-aware features such as n-gram frequencies and language model probabilities. But even though unsupervised rankers perform well in the task, they are incapable of learning from data, which makes them unsuitable for adaptive SR. Our approach to adaptive SR is similar to our approach to adaptive CWI, namely to train an initial model over manually produced simplicity rankings, then continuously update it with new knowledge as Lexi users provide feedback on the simplifications they receive. The feedback in this scenario is composed of a complex word in context, a simplification produced by Lexi, and a binary rank provided by the user determining which word (complex or simplification) makes the sentence easier to understand. For that purpose, we need a supervised model that (i) supports online learning so that it can be efficiently updated after each session, and (ii) can learn from binary ranks.", "cite_spans": [ { "start": 208, "end": 229, "text": "Carroll et al. (1998)", "ref_id": "BIBREF6" }, { "start": 374, "end": 388, "text": "(Rudell, 1993)", "ref_id": "BIBREF25" }, { "start": 439, "end": 460, "text": "Ligozat et al. (2012)", "ref_id": "BIBREF15" }, { "start": 465, "end": 490, "text": "Glava\u0161 and \u0160tajner (2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptive Substitution Ranking", "sec_num": "3.1.2" }, { "text": "Paetzold and Specia (2017) offer some intuition on how this can be done. They exploit the fact that one can decompose a sequence of elements {e_1, e_2, ..., e_n} with ranks {r_1, r_2, ..., r_n} into a matrix m \u2208 R^{n\u00d7n}, such that m(i, j) = f(r_i, r_j), where the function f(r_i, r_j) estimates a value that describes the relationship between the ranks of elements e_i and e_j. For example, f could be defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Substitution Ranking", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(r_i, r_j) = \begin{cases} 1 & \text{if } r_i < r_j \\ -1 & \text{if } r_i > r_j \\ 0 & \text{otherwise} \end{cases}", "eq_num": "(2)" } ], "section": "Adaptive Substitution Ranking", "sec_num": "3.1.2" }, { "text": "The ranker of Paetzold and Specia (2017) uses a deep multi-layer perceptron that predicts each value of m individually. 
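The decomposition in Equation 2 is straightforward to implement; the following sketch (illustrative only) builds the matrix m from a list of ranks:

```python
def f(r_i, r_j):
    """Pairwise rank relation from Equation (2)."""
    if r_i < r_j:
        return 1
    if r_i > r_j:
        return -1
    return 0

def rank_matrix(ranks):
    """Decompose ranks r_1, ..., r_n into m with m[i][j] = f(r_i, r_j)."""
    n = len(ranks)
    return [[f(ranks[i], ranks[j]) for j in range(n)] for i in range(n)]

# Example: three candidates, the second one ranked simplest.
m = rank_matrix([2, 1, 3])
assert m[1][0] == 1  # element 1 is ranked simpler than element 0
```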
Their ranker takes as input feature representations of e_i and e_j, and produces a function f similar to the one depicted in Equation 2. Their approach would be perfectly capable of learning from the feedback produced by Lexi users, but it would be very difficult to train it through online learning, given that deep multi-layer perceptrons are characterized by a large number of parameters that are costly to optimize on an on-demand basis. We instead propose to employ an online learning model that has fewer parameters, e.g. logistic regression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Substitution Ranking", "sec_num": "3.1.2" }, { "text": "Lexi consists of a client-side frontend and a server-side backend that communicate with each other via a RESTful API (Fielding, 2000) , exchanging requests and responses as described further in 4.3. The client-server architecture allows for easy portability of the software to users, minimizing user-side installation efforts, hardware usage and dependencies on other libraries. It also centralizes the simplification engine, such that amendments to and maintenance of the latter need only be implemented on the server side.", "cite_spans": [ { "start": 117, "end": 133, "text": "(Fielding, 2000)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "4" }, { "text": "Lexi is currently limited to performing lexical simplification. Note, however, that this is merely a limitation of the backend system, which only implements a lexical simplification system for now. From the frontend perspective, there are no limitations as to the nature and length of the simplified items in a text, and extending Lexi to support higher-level modes of simplification simply amounts to implementing a backend system supporting this. 3 [Figure 1: Lexical simplification pipeline as identified by Paetzold and Specia (2015) . The simplification workflow consists of identifying simplification targets, i.e. words that pose a challenge to the reader. In the generation step, possible alternatives for each target are retrieved, which are then filtered in the selection step, eliminating words that do not fit the context. In the ranking step, the system finally orders the candidates by simplicity.] We initially focus on lexical simplification for a number of reasons: (i) We have existing baseline models that we expect to work well in a real-world setting. (ii) Given the relatively small number of parameters in those models, we expect fast adaptation to individual users from relatively little feedback. (iii) Compared to other forms of simplification, lexical simplification needs to make a selection from a relatively limited search space that is still reasonably diverse, such that we expect personalized models to make a difference more easily.", "cite_spans": [ { "start": 471, "end": 497, "text": "Paetzold and Specia (2015)", "ref_id": "BIBREF18" }, { "start": 919, "end": 920, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 411, "end": 419, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Implementation", "sec_num": "4" }, { "text": "Lexi's frontend is implemented in JavaScript and jQuery under the Mozilla WebExtension framework, supported by most modern browsers. 4 WebExtensions employ content scripts to modify a webpage upon certain specified events, for instance a click on some page element. 
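To make this exchange concrete before walking through the user interface, the kind of request a content script issues can be mimicked as follows (hypothetical endpoint URL and field names; the actual URI paths are listed in Table 1 and the payloads in the appendix):

```python
import requests

# Hypothetical simplification request: the email identifies the user's
# personal model, the HTML is the node containing the text selection.
response = requests.post(
    "https://backend.example.org/simplify",  # illustrative URL only
    json={"email": "user@example.com",
          "html": "<p>Some selected text</p>"},
)
result = response.json()  # augmented HTML plus ranked alternatives
```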
The remainder of this section describes Lexi's basic usage as the user registers an account and asks the system for simplifications, thereby illustrating the user interface and sketching the inner workings of the frontend.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Frontend", "sec_num": "4.1" }, { "text": "Upon installation of the Lexi extension in the browser, the user is prompted to register an account, providing an email address as well as basic demographic information (year of birth and educational level, see Figure 2 ). This information is sent to the backend using its registration endpoint (see Table 1 ). If the user has previously created an account and simply reinstalled the extension, they may also just provide their email address to keep using their existing profile. The user's email address is stored locally in the browser, where it is kept until the browser storage is cleared or the extension is uninstalled.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 300, "end": 308, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "User log-in and registration", "sec_num": "4.1.1" }, { "text": "Whenever the user visits a webpage, the extension injects an event listener into the page, which triggers upon the selection of some text and offers to simplify the selected content via a small button that is displayed just above the selection. When this button is clicked, the extension retrieves the user's email address from the browser storage (prompting the user to log in if no email address is stored) and verifies that a user with that email address exists in the backend's database, using the login endpoint as given in Table 1 . The script then submits a simplification request to the backend's simplification endpoint, enclosing a JSON object that contains the user's email address (used by the backend to retrieve the personal simplification model) and the HTML code of the element containing the text selection. See Appendix A.1 for an example. The backend then responds with a JSON object with augmented HTML, where elements with unique IDs are wrapped around simplification targets. The response object further contains an array of simplification objects, each of which in turn contains a list of synonyms ordered by simplicity ranking (including the target). An example is given in Appendix A.2. The content script replaces the original source with the augmented HTML and displays each simplification span with a light green background color (see Figure 4) . The script then cycles through the simplification alternatives for a given target whenever the user clicks on the respective span on the page, advancing one alternative per click and reverting to the first alternative at the end of the list. The original item is marked in a slightly but discernibly darker shade than the proposed simplifications.", "cite_spans": [], "ref_spans": [ { "start": 549, "end": 556, "text": "Table 1", "ref_id": null }, { "start": 1404, "end": 1413, "text": "Figure 4)", "ref_id": null } ], "eq_spans": [], "section": "Simplification requests and display", "sec_num": "4.1.2" }, { "text": "In order to provide personalized simplifications and to adapt to individual users, Lexi needs to be able to decide which alternative a user prefers over the others for every target. 
In a classical, controlled annotation setting, one would probably present subjects with a set of alternatives and have them rank these or pick a single favorite. However, as Lexi aims to provide as natural and smooth a reading scenario as possible to its users, explicitly asking for such feedback would critically obstruct the reading process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User feedback", "sec_num": "4.1.3" }, { "text": "Lexi therefore interprets whatever final selection a user makes for some simplification span as their preferred alternative in this context. 5 As the user finally navigates away from the webpage that Lexi was invoked on, Lexi solicits feedback from the user on a five-point scale (see Figure 3 ) and submits this rating along with the simplification objects and their final selections (and click-through counts) to the feedback endpoint of the backend. 6 See Appendix A.3 for an example of the feedback.", "cite_spans": [ { "start": 141, "end": 142, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 285, "end": 293, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "User feedback", "sec_num": "4.1.3" }, { "text": "The frontend design was developed in close collaboration with Nota, the Danish Library and Expertise Center for people with reading disabilities. 7 In February 2018, the software was intensively tested by four dyslexic members of Nota, all female students in secondary/higher education and aged between 20 and 30. Each test started with a short preliminary interview in which the subjects were asked about their age, occupation/study field, reading habits, degree of dyslexia and use of browser extensions. The subjects were then given the possibility to watch an introduction video (of 1:30 min length) outlining Lexi's basic functionality and user interface. Two of the four subjects opted for this, while the other two decided to skip the video as they do not usually watch introduction videos when using new software. Next, the subjects were asked to locate Lexi in the Chrome Webshop, install it in the browser and create a user account. Once set up, each subject navigated to a site of her choice and used Lexi to receive simplifications as outlined in 4.1.2. The two subjects who had not watched the video did so now, and both declared they gained further insight into Lexi's functionality through the video, but that it was not crucial in order to understand its basic usage.", "cite_spans": [ { "start": 146, "end": 147, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Qualitative evaluation of usability", "sec_num": "4.1.4" }, { "text": "In qualitative interviews directly succeeding each test, the test subjects overall reacted very positively to the prospect of a personalized simplification tool in general, and to Lexi and its design in particular. 8 The test subjects suggested a number of improvements, most of which have now been implemented. One suggested improvement, which we have not been able to implement but intend to do so for a future version, is the support for multilingual simplification. 
Two subjects said they would greatly appreciate this, as much of their study material is only available in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative evaluation of usability", "sec_num": "4.1.4" }, { "text": "Lexi's backend consists of a simplification system, implemented in Python 3.5, and a database that stores user information and their simplification histories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Backend", "sec_num": "4.2" }, { "text": "As stated above, Lexi's simplification system currently focuses on lexical simplification, abiding by the de-facto standard pipeline depicted in Figure 1 . Since Lexi lets users choose which words they wish to have simplified, it does not employ any automatic CWI. 9 Below we sketch Lexi's simplification system as it receives simplification requests from the frontend. As our lexical simplification approach is sensitive to the context of a word, Lexi's first step is to preprocess the HTML source transmitted from the frontend, identifying the boundaries of the sentence that contains the target word, if any. 10 For Substitution Generation, Lexi implements the embeddings-based approach inspired by the contributions of Glava\u0161 and \u0160tajner (2015) and Paetzold and Specia (2016c) . In their work, they extract as candidate substitutions the N words with the highest cosine similarity to a target word. As Danish, the language currently served by Lexi, is not as well-resourced as, for example, English, Lexi extends the embedding-based Substitution Generation approach by using an ensemble of embedding models that are trained independently on different text sources, namely the Danish Wikipedia and a news corpus. 11 The overall similarity score for a target-candidate pair is then defined as the mean score across these embedding models. Lexi returns the ten most similar candidates whose mean similarity score exceeds some configurable threshold. Alternatively, Lexi can generate synonyms from a simple dictionary, in the case of Danish using the Danish WordNet (Pedersen et al., 2009) , yet this approach suffers from severely reduced coverage compared to word embeddings.", "cite_spans": [ { "start": 612, "end": 614, "text": "10", "ref_id": null }, { "start": 723, "end": 748, "text": "Glava\u0161 and \u0160tajner (2015)", "ref_id": "BIBREF13" }, { "start": 753, "end": 780, "text": "Paetzold and Specia (2016c)", "ref_id": "BIBREF21" }, { "start": 1567, "end": 1590, "text": "(Pedersen et al., 2009)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 145, "end": 153, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Simplification system", "sec_num": "4.2.1" }, { "text": "Once generated, the candidates are filtered during Substitution Selection by an unsupervised boundary ranker (Paetzold and Specia, 2016c) . In this approach, a supervised ranker is trained with instances gathered in an unsupervised fashion: we generate candidate substitutions for complex words using our generation approach, then assign label 1 to the complex words and 0 to the generated candidates. The boundary between the two classes is then used to rank and filter candidates. Paetzold and Specia (2016c) show that this is a state-of-the-art approach that outperforms all earlier supervised and unsupervised strategies. 
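A minimal sketch of this boundary ranking (illustrative code using scikit-learn; the feature function is a hypothetical parameter, not part of Lexi's actual codebase):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_boundary_ranker(complex_words, candidates, featurize):
    """Unsupervised boundary ranking in the style of Paetzold and Specia
    (2016c): complex words receive label 1, generated candidates label 0."""
    X = np.array([featurize(w) for w in complex_words + candidates])
    y = np.array([1] * len(complex_words) + [0] * len(candidates))
    return LogisticRegression().fit(X, y)

def select_candidates(model, candidates, featurize, keep=0.65):
    """Rank candidates by signed distance to the decision boundary and
    keep the top share (65%, as described in the text)."""
    scores = model.decision_function(np.array([featurize(c) for c in candidates]))
    order = np.argsort(-scores)
    k = max(1, int(len(candidates) * keep))
    return [candidates[i] for i in order[:k]]
```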
Given a target word and a set of generated candidate substitutions, the model ranks the candidates based on how far they fall on the positive side of the decision boundary, then selects the 65% highest-ranking ones.", "cite_spans": [ { "start": 109, "end": 137, "text": "(Paetzold and Specia, 2016c)", "ref_id": "BIBREF21" }, { "start": 483, "end": 510, "text": "Paetzold and Specia (2016c)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Simplification system", "sec_num": "4.2.1" }, { "text": "Finally, the selected candidates are ranked with a supervised Substitution Ranking model following the approach we outlined in Section 3.1.2. It is during this step that Lexi is capable of producing customized output based on the user's needs, and of evolving based on the user's feedback. Lexi employs a pairwise online logistic regression model that learns to quantify the simplicity difference between two candidate substitutions. Given an unseen set of candidate substitutions, the regressor estimates the simplicity difference between each candidate pair, then ranks all candidates based on their average score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification system", "sec_num": "4.2.1" }, { "text": "Note that the user's feedback, sent by the frontend, consists of a set S and an index i, where S is the full set of suggested synonyms, including the target, and i is the index of the item in S that the user finally selected. As the regressor, however, learns from pairwise rankings, Lexi passes all pairs {(S_i, S_j) | j \u2260 i} to the regressor, i.e. it pairs the selected item with all others and updates the ranker accordingly, postulating that the selected item is easier for this user than every other suggestion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification system", "sec_num": "4.2.1" }, { "text": "Using a seed dataset of complex-simple word correspondences in context, we train a default model that produces initial simplifications when a user solicits simplifications for the first time. 12 As Lexi receives feedback from this user for the first time, the seed model is copied and personalized with the first batch of feedback, and this model is then saved for later requests by this user.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification system", "sec_num": "4.2.1" }, { "text": "Lexi stores user information and simplification histories in a PostgreSQL database. More specifically, it employs three different tables, called users, models and sessions. The first of these links a unique, numerical user ID to a user email address, and stores when the user first and last used Lexi. It further contains the demographic information the user provides at registration, i.e. their year of birth and educational status. The models table stores a path to the serialized personal model for each user ID. Finally, the sessions table stores each simplification request issued to the backend with a unique session ID, the respective user ID, a time stamp for the session start and one for the submission of feedback, the webpage URL, the simplification objects serialized as JSON, the provided rating and finally the frontend version number used in this session.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Database", "sec_num": "4.2.2" }, { "text": "Lexi's backend offers a RESTful API implemented in Python 3.5, using the Flask package. 13 
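As an illustration, a minimal endpoint in this style might look as follows (a sketch only; Lexi's actual URI paths are given in Table 1 and its payloads in the appendix):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/simplify", methods=["POST"])  # illustrative path only
def simplify():
    payload = request.get_json()
    # payload["email"] identifies the user's personal model;
    # payload["html"] is the source containing the text selection.
    html = payload["html"]
    # ... run the simplification pipeline with this user's model ...
    return jsonify({"html": html, "simplifications": []})

if __name__ == "__main__":
    app.run()
```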
The services are available through HTTP POST requests, with their URI paths listed in Table 1 . Input and output values are communicated via a JSON-based protocol exemplified in the appendix. Lexi further defines a set of error codes for easier troubleshooting and for flexible internationalization of the frontend via the i18n API used by WebExtensions. ", "cite_spans": [ { "start": 88, "end": 90, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 173, "end": 180, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Communication between backend and frontend", "sec_num": "4.3" }, { "text": "Lexi's design does not impose any restrictions on the support of new (written) languages, including right-to-left or non-alphabetic writing systems. In fact, supporting a new language simply amounts to providing a new language-specific simplification pipeline as illustrated in Figure 1 . Depending on the specific implementation of the simplification system, however, certain resources are needed to induce a first seed model for simplification. Most centrally, this pertains to Substitution Generation, where a synonym database or good word embeddings are required in the case of lexical simplification, or a reliable paraphrase module in the case of higher-level simplification. With respect to Substitution Ranking, the availability of resources such as simplification corpora is less critical, as simple heuristics (e.g. simplicity proxies such as length and frequency) might give a reasonable baseline upon which the system can then improve through user feedback.", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 286, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Language support and extensibility", "sec_num": "4.4" }, { "text": "Lexi currently does not offer multilingual support, but is confined to one language per backend instance. Supporting multilingual simplification could be implemented through a language identification module upstream of the set of simplification pipelines, with one pipeline per language. This raises the interesting question of whether knowledge about a user's simplification preferences in one language could be transferred to another language. Support for this hypothesis comes, among other sources, from the cross-lingual track in the recent CWI shared task by Yimam et al. (2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language support and extensibility", "sec_num": "4.4" }, { "text": "Like any software that interacts with users and stores information about them, Lexi is naturally subject to ethical and legal concerns, especially those regarding privacy. The EU General Data Protection Regulation (GDPR), for instance, imposes a number of requirements, such as the clear statement of terms and conditions, or that users be provided, upon request, with full access to whatever data is stored about them. Lexi does not explicitly store users' names, but in many cases they will be encoded in email addresses. Personally identifiable information may also be stored in the form of simplified text that is logged in the database, for instance if Lexi is used on a user's personal social media profile. 
The above also highlights the need for encrypted communication between the client and the server, which is safeguarded through TLS encryption over the HTTPS protocol.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethical and legal considerations", "sec_num": "4.5" }, { "text": "Ethical concerns pertaining to text simplification arise when infelicitous simplifications distort the meaning of a text and thus potentially misinform the reader. This is difficult to completely rule out, such that the user should clearly be informed of this possibility. Other concerns revolve around the hypothesis that reducing text complexity will \"dumb down\" the material and keep users at a low reading level by under-challenging them (Long and Ross, 1993) . However, as Rello et al. (2013) point out, \"anything which might help [dyslexics] to subjectively perceive reading as being easier, can potentially help them to avoid this vicious circle [of reading less and staying on a low reading level], even if no significant improvement in readability can be demonstrated.\"", "cite_spans": [ { "start": 442, "end": 463, "text": "(Long and Ross, 1993)", "ref_id": "BIBREF17" }, { "start": 478, "end": 497, "text": "Rello et al. (2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Ethical and legal considerations", "sec_num": "4.5" }, { "text": "The Lexi software and code, including its backend and frontend, are freely available for non-commercial use under a CC-BY-NC license, obtainable at https://www.readwithlexi.net. Researchers can set up their own, customized version of the software and distribute the browser extension to users. It is straightforward to modify features of the software such as offered languages or the exact resources used to induce the initial models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability and applications", "sec_num": "5" }, { "text": "Besides its core functionality, which we mapped out in the previous sections, Lexi has a number of alternate use cases, which we discuss in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability and applications", "sec_num": "5" }, { "text": "Preloaded simplifications Lexi's primary use case, as described earlier, is to provide simplifications to users as they select a span of text, which circumvents the need for a CWI module as only such items are simplified that the user explicitly solicits replacements for. Alternatively, users may wish to have the entire page simplified before they start reading. Lexi currently implements this functionality, letting the user solicit simplifications for the entire site via a click on the Lexi icon. As there is no personalized CWI module implemented yet, simplification targets are identified via a confidence threshold during Substitution Generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability and applications", "sec_num": "5" }, { "text": "Evaluation of simplification quality Via its rating function (Figure 3) , Lexi continuously tracks user satisfaction as a means of evaluating synchronic simplification quality as well as the diachronic development of model adaptation. An adaptive model that is continuously customized is expected to gradually improve the average rating it receives from the user. Data collection Lexi makes it possible to collect user choices over a longer period in order to create bigger simplification datasets. 
If sufficiently homogeneous subgroups can be identified across users, this data may give insight into their simplification needs and make it possible to build better simplification models for them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability and applications", "sec_num": "5" }, { "text": "Other plausible approaches may treat different users as different tasks and apply multi-task learning methods to transfer knowledge between users, thus both regularizing the models for individual users and increasing the amount of data from which the individual models can be learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability and applications", "sec_num": "5" }, { "text": "This paper is a first work in personalized, adaptive text simplification, a direction of research motivated by the observation that generic, user-independent simplification systems cannot fully unfold their potential in making text simpler for specific end users. We propose a framework for adaptive lexical simplification, outlining how user feedback can be used to gradually enhance and personalize text simplification. As a concrete first solution to the problem, we present Lexi, an open-source tool for personalized, adaptive text simplification that has been evaluated very positively in a first usability test. In its current implementation, Lexi focuses on lexical simplification in Danish. An extension to other languages is simple, requiring only a medium-sized monolingual corpus on which a language model and word embeddings can be trained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "In future work, we aim to extend the proposed framework to sentence-level simplifications. We further plan to implement support for multilingual simplification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://creativecommons.org/licenses/by-nc/4.0/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "An alternative to traditional, one-size-fits-all approaches has recently been proposed by Bingel et al. (2018), who use eye-tracking measures to induce personalized models that predict misreadings in children with reading difficulties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that, in general, this paper describes the Lexi frontend and backend versions 1.0. Both parts of Lexi are under ongoing development, with details pertaining to the implementation possibly subject to change.4 https://developer.mozilla.org/en-US/Add-ons/WebExtensions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the instructions, users are made aware of this. The frontend further keeps track of how many times the user clicked on a given simplification span, thus providing the backend with information such as how many times the user clicked through the entire list, or whether perhaps no alternatives were solicited for some item.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More precisely, feedback is not solicited exactly when the user navigates away from the page, as security restrictions in browsers prevent custom scripts from running upon closing a page. 
Instead, Lexi asks for feedback via a small notification box in the upper right corner of the page, which pops up when the operating system's focus changes to a different window, or when the mouse leaves the browser's viewport (e.g. for the address bar).7 http://www.nota.dk 8 An informal evaluation of the software on a 5-point scale (with 1 being worst and 5 best) yielded two ratings of 5, one 4 and one 3.9 We do plan, however, to implement CWI as the user solicits simplifications for longer text passages or entire pages.10 In order to reduce bandwidth and modify the page more easily, the frontend only transmits the HTML source of the smallest HTML node fully containing the selection, which typically is a paragraph, but may also be a single word contained in a heading, in which case no context is available. Sentence boundaries are identified using NLTK.11 https://ordnet.dk/korpusdk", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Such a seed dataset is not necessarily available for every language. However, in its absence, a seed model could be trained with simple heuristics, e.g. replacing infrequent words with higher-frequency synonyms. Alternatively, the system could initially rank candidates with such a heuristic and only start learning once the first feedback is available.13 http://flask.pocoo.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "'html': '

Natasja startede <span id=\"lexi_254_1\" class=\"lexisimplify\">allerede</span> som 13-\u00e5rig med at synge og DJ\'e ...', 'html': 'Natasja startede <span id=\"lexi_254_1\" class=\"lexisimplify\">bare</span> som 13-\u00e5rig med at synge og DJ\'e ...
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 337-343.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Predicting misreadings from gaze in children with reading difficulties", "authors": [ { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Sigrid", "middle": [], "last": "Klerke", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "24--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachim Bingel, Maria Barrett, and Sigrid Klerke. 2018. Predicting misreadings from gaze in children with read- ing difficulties. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 24-34.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Putting it simply: a context-aware approach to lexical simplification", "authors": [ { "first": "Or", "middle": [], "last": "Biran", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Brody", "suffix": "" }, { "first": "No\u00e9mie", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", "volume": "2", "issue": "", "pages": "496--501", "other_ids": {}, "num": null, "urls": [], "raw_text": "Or Biran, Samuel Brody, and No\u00e9mie Elhadad. 2011. Putting it simply: a context-aware approach to lexical simplification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 496-501. Association for Computational Lin- guistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Practical simplification of english newspaper text to assist aphasic readers", "authors": [ { "first": "John", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Guido", "middle": [], "last": "Minnen", "suffix": "" }, { "first": "Yvonne", "middle": [], "last": "Canning", "suffix": "" }, { "first": "Siobhan", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "John", "middle": [], "last": "Tait", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology", "volume": "", "issue": "", "pages": "7--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Carroll, Guido Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of english newspaper text to assist aphasic readers. In Proceedings of the AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology, pages 7-10.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Computational assessment of text readability: A survey of current and future research", "authors": [ { "first": "Kevyn", "middle": [], "last": "Collins-Thompson", "suffix": "" } ], "year": 2014, "venue": "ITL-International Journal of Applied Linguistics", "volume": "165", "issue": "2", "pages": "97--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevyn Collins-Thompson. 2014. Computational assessment of text readability: A survey of current and future research. 
ITL-International Journal of Applied Linguistics, 165(2):97-135.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Simple english wikipedia: a new text simplification task", "authors": [ { "first": "William", "middle": [], "last": "Coster", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", "volume": "2", "issue": "", "pages": "665--669", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Coster and David Kauchak. 2011. Simple english wikipedia: a new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 665-669. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Text simplification for children", "authors": [ { "first": "Jan", "middle": [ "De" ], "last": "Belder", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the SIGIR workshop on accessible search systems", "volume": "", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan De Belder and Marie-Francine Moens. 2010. Text simplification for children. In Proceedings of the SIGIR workshop on accessible search systems, pages 19-26. ACM.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Helping aphasic people process online information", "authors": [ { "first": "Siobhan", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Gary", "middle": [], "last": "Unthank", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 8th SIGACCESS", "volume": "", "issue": "", "pages": "225--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siobhan Devlin and Gary Unthank. 2006. Helping aphasic people process online information. In Proceedings of the 8th SIGACCESS, pages 225-226.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An evaluation of syntactic simplification rules for people with autism", "authors": [ { "first": "Richard", "middle": [], "last": "Evans", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Orasan", "suffix": "" }, { "first": "Iustin", "middle": [], "last": "Dornescu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)", "volume": "", "issue": "", "pages": "131--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Evans, Constantin Orasan, and Iustin Dornescu. 2014. An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 131-140.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "REST: architectural styles and the design of network-based software architectures. Doctoral dissertation", "authors": [ { "first": "Roy", "middle": [ "Thomas" ], "last": "Fielding", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Thomas Fielding. 2000. REST: architectural styles and the design of network-based software architectures.
Doctoral dissertation, University of California.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Simplifying lexical simplification: Do we need simplified corpora?", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "\u0160tajner", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd ACL", "volume": "", "issue": "", "pages": "63--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glava\u0161 and Sanja \u0160tajner. 2015. Simplifying lexical simplification: Do we need simplified corpora? In Proceedings of the 53rd ACL, pages 63-68, Beijing, China, July. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning a lexical simplifier using wikipedia", "authors": [ { "first": "Colby", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Cathryn", "middle": [], "last": "Manduca", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "458--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colby Horn, Cathryn Manduca, and David Kauchak. 2014. Learning a lexical simplifier using wikipedia. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 458-463.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Annlor: a na\u00efve notation-system for lexical outputs ranking", "authors": [ { "first": "Anne-Laure", "middle": [], "last": "Ligozat", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Garcia-Fernandez", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Grouin", "suffix": "" }, { "first": "Delphine", "middle": [], "last": "Bernhard", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 6th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "487--492", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anne-Laure Ligozat, Anne Garcia-Fernandez, Cyril Grouin, and Delphine Bernhard. 2012. Annlor: a na\u00efve notation-system for lexical outputs ranking. In Proceedings of the 6th International Workshop on Semantic Evaluation, pages 487-492. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Simplification of example sentences for learners of japanese functional expressions", "authors": [ { "first": "Jun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016)", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Liu and Yuji Matsumoto. 2016. Simplification of example sentences for learners of japanese functional expressions. 
In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016), pages 1-5.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Modifications that preserve language and content", "authors": [ { "first": "H", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Long", "suffix": "" }, { "first": "", "middle": [], "last": "Ross", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael H Long and Steven Ross. 1993. Modifications that preserve language and content. Technical Report (ERIC).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Lexenstein: A framework for lexical simplification", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL-IJCNLP 2015 System Demonstrations", "volume": "", "issue": "", "pages": "85--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Paetzold and Lucia Specia. 2015. Lexenstein: A framework for lexical simplification. Proceedings of ACL-IJCNLP 2015 System Demonstrations, pages 85-90.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Anita: An intelligent text adaptation tool", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "79--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Paetzold and Lucia Specia. 2016a. Anita: An intelligent text adaptation tool. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 79-83.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semeval 2016 task 11: Complex word identification", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "560--569", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Paetzold and Lucia Specia. 2016b. Semeval 2016 task 11: Complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 560-569.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Unsupervised lexical simplification for non-native speakers", "authors": [ { "first": "Gustavo", "middle": [ "Henrique" ], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 13th AAAI", "volume": "", "issue": "", "pages": "3761--3767", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Henrique Paetzold and Lucia Specia. 2016c. Unsupervised lexical simplification for non-native speakers. In Proceedings of the 13th AAAI, pages 3761-3767. 
AAAI Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Lexical simplification with neural ranking", "authors": [ { "first": "Gustavo", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th EACL", "volume": "", "issue": "", "pages": "34--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gustavo Paetzold and Lucia Specia. 2017. Lexical simplification with neural ranking. In Proceedings of the 15th EACL, pages 34-40. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Dannet: the challenge of compiling a wordnet for danish by reusing a monolingual dictionary. Language resources and evaluation", "authors": [ { "first": "Sanni", "middle": [], "last": "Bolette Sandford Pedersen", "suffix": "" }, { "first": "J\u00f8rg", "middle": [], "last": "Nimb", "suffix": "" }, { "first": "Nicolai", "middle": [], "last": "Asmussen", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Hartvig S\u00f8rensen", "suffix": "" }, { "first": "Henrik", "middle": [], "last": "Trap-Jensen", "suffix": "" }, { "first": "", "middle": [], "last": "Lorentzen", "suffix": "" } ], "year": 2009, "venue": "", "volume": "43", "issue": "", "pages": "269--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bolette Sandford Pedersen, Sanni Nimb, J\u00f8rg Asmussen, Nicolai Hartvig S\u00f8rensen, Lars Trap-Jensen, and Henrik Lorentzen. 2009. Dannet: the challenge of compiling a wordnet for danish by reusing a monolingual dictionary. Language resources and evaluation, 43(3):269-299.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Simplify or help?: text simplification strategies for people with dyslexia", "authors": [ { "first": "Luz", "middle": [], "last": "Rello", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Baeza-Yates", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Bott", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luz Rello, Ricardo Baeza-Yates, Stefan Bott, and Horacio Saggion. 2013. Simplify or help?: text simplification strategies for people with dyslexia. In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, page 15. ACM.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Frequency of word usage and perceived word difficulty: Ratings of kucera and francis words", "authors": [ { "first": "Allan", "middle": [], "last": "Peter Rudell", "suffix": "" } ], "year": 1993, "venue": "Behavior Research Methods", "volume": "", "issue": "", "pages": "455--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allan Peter Rudell. 1993. Frequency of word usage and perceived word difficulty: Ratings of kucera and francis words. Behavior Research Methods, pages 455-463.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Out in the open: Finding and categorising errors in the lexical simplification pipeline", "authors": [ { "first": "Matthew", "middle": [], "last": "Shardlow", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "1583--1590", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Shardlow. 2014a. 
Out in the open: Finding and categorising errors in the lexical simplification pipeline. In LREC, pages 1583-1590.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A survey of automated text simplification", "authors": [ { "first": "Matthew", "middle": [], "last": "Shardlow", "suffix": "" } ], "year": 2014, "venue": "International Journal of Advanced Computer Science and Applications", "volume": "4", "issue": "1", "pages": "58--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Shardlow. 2014b. A survey of automated text simplification. International Journal of Advanced Computer Science and Applications, 4(1):58-70.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A survey of research on text simplification", "authors": [ { "first": "Advaith", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2014, "venue": "ITL-International Journal of Applied Linguistics", "volume": "165", "issue": "2", "pages": "259--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Advaith Siddharthan. 2014. A survey of research on text simplification. ITL-International Journal of Applied Linguistics, 165(2):259-298.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The effects of the amount and type of simplification on foreign language reading comprehension", "authors": [ { "first": "I", "middle": [], "last": "Adel", "suffix": "" }, { "first": "", "middle": [], "last": "Tweissi", "suffix": "" } ], "year": 1998, "venue": "", "volume": "11", "issue": "", "pages": "191--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adel I Tweissi. 1998. The effects of the amount and type of simplification on foreign language reading comprehension. Reading in a foreign language, 11(2):191-204.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Evaluation of a typology of reading disability", "authors": [ { "first": "U", "middle": [], "last": "Betty", "suffix": "" }, { "first": "David", "middle": [ "E" ], "last": "Watson", "suffix": "" }, { "first": "", "middle": [], "last": "Goldgar", "suffix": "" } ], "year": 1988, "venue": "Journal of clinical and experimental neuropsychology", "volume": "10", "issue": "4", "pages": "432--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Betty U Watson and David E Goldgar. 1988. Evaluation of a typology of reading disability. Journal of clinical and experimental neuropsychology, 10(4):432-450.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia", "authors": [ { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Cristian", "middle": [], "last": "Danescu-Niculescu-Mizil", "suffix": "" }, { "first": "Lillian", "middle": [ "Lee" ], "last": "", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "365--368", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 365-368.
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A Report on the Complex Word Identification Shared Task", "authors": [ { "first": "Chris", "middle": [], "last": "Seid Muhie Yimam", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Paetzold", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Specia", "suffix": "" }, { "first": "", "middle": [], "last": "\u0160tajner", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 13th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seid Muhie Yimam, Chris Biemann, Shervin Malmasi, Gustavo Paetzold, Lucia Specia, Sanja \u0160tajner, Ana\u00efs Tack, and Marcos Zampieri. 2018. A Report on the Complex Word Identification Shared Task 2018. In Proceedings of the 13th Workshop on Innovative Use of NLP for Building Educational Applications, New Orleans, United States, June. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A monolingual tree-based translation model for sentence simplification", "authors": [ { "first": "Zhemin", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Delphine", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd international conference on computational linguistics", "volume": "", "issue": "", "pages": "1353--1361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd international conference on computational linguistics, pages 1353-1361. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Developmental dyslexia and the dual route model of reading: Simulating individual differences and subtypes", "authors": [ { "first": "C", "middle": [], "last": "Johannes", "suffix": "" }, { "first": "Caroline", "middle": [], "last": "Ziegler", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Castel", "suffix": "" }, { "first": "Florence", "middle": [], "last": "Pech-Georgel", "suffix": "" }, { "first": "F-Xavier", "middle": [], "last": "George", "suffix": "" }, { "first": "Conrad", "middle": [], "last": "Alario", "suffix": "" }, { "first": "", "middle": [], "last": "Perry", "suffix": "" } ], "year": 2008, "venue": "Cognition", "volume": "107", "issue": "1", "pages": "151--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes C Ziegler, Caroline Castel, Catherine Pech-Georgel, Florence George, F-Xavier Alario, and Conrad Perry. 2008. Developmental dyslexia and the dual route model of reading: Simulating individual differences and subtypes. Cognition, 107(1):151-178.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "User account registration form. Figure 3: Five-point rating form. Figure 4: Simplification spans are marked up in light green. As the user clicks on a simplification span, the currently displayed word is replaced with an alternative." } } } }
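The annex fragment above shows the markup Lexi injects around simplifiable words (a span with class "lexisimplify" and a unique id), and the Figure 4 caption describes the interaction: clicking a marked-up span replaces the displayed word with an alternative. As a rough illustration of how such click-to-cycle behaviour could be wired up in a browser extension, here is a minimal TypeScript sketch. It is not Lexi's actual client code; the data-candidates attribute and the attachCycling function are hypothetical names chosen for the example.

```typescript
// Minimal sketch, assuming the backend wraps each simplifiable word in
// <span class="lexisimplify" data-candidates='["alt1","alt2"]'>word</span>,
// mirroring the span markup shown in the annex. All names are illustrative.

function attachCycling(root: ParentNode): void {
  root.querySelectorAll<HTMLSpanElement>("span.lexisimplify").forEach((span) => {
    // Candidate substitutions, e.g. serialized into a data attribute
    // by the simplification backend (an assumption for this sketch).
    const candidates: string[] = JSON.parse(span.dataset.candidates ?? "[]");
    if (candidates.length === 0) return;

    let index = 0; // position of the currently displayed alternative
    span.addEventListener("click", () => {
      index = (index + 1) % candidates.length;
      span.textContent = candidates[index];
      // An adaptive system would also log this event: requesting an
      // alternative signals that the shown word may be too difficult,
      // which is exactly the feedback a personalized ranker can learn from.
    });
  });
}

attachCycling(document);
```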