|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:57:49.297854Z" |
|
}, |
|
"title": "Semi-Supervised Joint Estimation of Word and Document Readability", |
|
"authors": [ |
|
{ |
|
"first": "Yoshinari", |
|
"middle": [], |
|
"last": "Fujinuma", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Colorado Boulder", |
|
"location": {} |
|
}, |
|
"email": "fujinumay@gmail.com" |
|
}, |
|
{ |
|
"first": "Masato", |
|
"middle": [], |
|
"last": "Hagiwara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Octanove Labs", |
|
"location": {} |
|
}, |
|
"email": "masato@octanove.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Readability or difficulty estimation of words and documents has been investigated independently in the literature, often assuming the existence of extensive annotated resources for the other. Motivated by our analysis showing that there is a recursive relationship between word and document difficulty, we propose to jointly estimate word and document difficulty through a graph convolutional network (GCN) in a semi-supervised fashion. Our experimental results reveal that the GCN-based method can achieve higher accuracy than strong baselines, and stays robust even with a smaller amount of labeled data. 1", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Readability or difficulty estimation of words and documents has been investigated independently in the literature, often assuming the existence of extensive annotated resources for the other. Motivated by our analysis showing that there is a recursive relationship between word and document difficulty, we propose to jointly estimate word and document difficulty through a graph convolutional network (GCN) in a semi-supervised fashion. Our experimental results reveal that the GCN-based method can achieve higher accuracy than strong baselines, and stays robust even with a smaller amount of labeled data. 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Accurately estimating the readability or difficulty of words and text has been an important fundamental task in NLP and education, with a wide range of applications including reading resource suggestion (Heilman et al., 2008 ), text simplification (Yimam et al., 2018) , and automated essay scoring (Vajjala and Rama, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 224, |
|
"text": "(Heilman et al., 2008", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 268, |
|
"text": "(Yimam et al., 2018)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 323, |
|
"text": "(Vajjala and Rama, 2018)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A number of linguistic resources have been created either manually or semi-automatically for nonnative learners of languages such as English (Capel, 2010 (Capel, , 2012 , French (Fran\u00e7ois et al., 2014) , and Swedish (Fran\u00e7ois et al., 2016; Alfter and Volodina, 2018) , often referencing the Common European Framework of Reference (Council of Europe, 2001, CEFR) . However, few linguistic resources exist outside these major European languages and manually constructing such resources demands linguistic expertise and efforts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 153, |
|
"text": "(Capel, 2010", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 154, |
|
"end": 168, |
|
"text": "(Capel, , 2012", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 201, |
|
"text": "(Fran\u00e7ois et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 239, |
|
"text": "(Fran\u00e7ois et al., 2016;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 266, |
|
"text": "Alfter and Volodina, 2018)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 361, |
|
"text": "(Council of Europe, 2001, CEFR)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This led to the proliferation of NLP-based readability or difficulty assessment methods to automatically estimate the difficulty of words and texts (Vajjala and Meurers, 2012; Wang and Andersen, 2016; Alfter and Volodina, 2018; Vajjala and Rama, 2018; 1 Our code is at https://github.com/akkikiki/ diff_joint_estimate Figure 1 : Overview of the proposed GCN architecture which recursively connects word w i and document d j to exploit the recursive relationship of their difficulty. Settles et al., 2020) . However, bootstrapping lexical resources with difficulty information often assumes the existence of textual datasets (e.g., digitized coursebooks) annotated with difficulty. Similarly, many text readability estimation methods (Wang and Andersen, 2016; Xia et al., 2016) assume the existence of abundant lexical or grammatical resources annotated with difficulty information. Individual research studies focus only on one side, either words or texts, although in reality they are closely intertwined-there is a recursive relationship between word and text difficulty, where the difficulty of a word is correlated to the minimum difficulty of the document where that word appears, and the difficulty of a document is correlated to the maximum difficulty of a word in that document ( Figure 2 ). We propose a method to jointly estimate word and text readability in a semi-supervised fashion from a smaller number of labeled data by leveraging the recursive relationship between words and documents. Specifically, we leverage recent developments in graph convolutional networks (Kipf and Welling, 2017, GCNs) and predict the difficulty of words and documents simultaneously by modeling those as nodes in a graph structure and recursively inferring their embeddings using the convolutional layers ( Figure 1 ). Our model leverages not only the supervision signals but also the recursive nature of word-document relationship. The contributions of this paper are two fold:", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 175, |
|
"text": "(Vajjala and Meurers, 2012;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 176, |
|
"end": 200, |
|
"text": "Wang and Andersen, 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 227, |
|
"text": "Alfter and Volodina, 2018;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 251, |
|
"text": "Vajjala and Rama, 2018;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 253, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 504, |
|
"text": "Settles et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 733, |
|
"end": 758, |
|
"text": "(Wang and Andersen, 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 759, |
|
"end": 776, |
|
"text": "Xia et al., 2016)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 318, |
|
"end": 326, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1288, |
|
"end": 1296, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1801, |
|
"end": 1809, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We reframe the word and document readability estimation task as a semi-supervised, joint estimation problem motivated by their recursive relationship of difficulty. \u2022 We show that GCNs are effective for solving this by exploiting unlabeled data effectively, even when less labeled data is available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given a set of words W and documents D, the goal of the joint readability estimation task is to find a function f that maps both words and documents to their difficulty label f : W \u222a D \u2192 Y . Documents here can be text of an arbitrary length, although we use paragraphs as the basic unit of prediction. This task can be solved as a classification problem or a regression problem where Y \u2208 R. We use six CEFRlabels representing six levels of difficulty, such as Y \u2208 {A1 (lowest), A2, B1, B2, C1, C2 (highest)} for classification, and a real-valued readability estimate \u03b2 \u2208 R inspired by the item response theory (Lord, 1980, IRT) for regression 2 . The \u03b2 for each six CEFR level are A1= \u22121.38, A2= \u22120.67, B1= \u22120.21, B2= 0.21, C1= 0.67, and C2= 1.38.", |
|
"cite_spans": [ |
|
{ |
|
"start": 610, |
|
"end": 627, |
|
"text": "(Lord, 1980, IRT)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
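
{

"text": "As a concrete check of this mapping, here is a minimal Python sketch (ours, not part of the released code) that recovers the six \u03b2 values as the mid-points of six equal-probability portions of N(0, 1), as described in footnote 2:\nimport scipy.stats as st\nlevels = ['A1', 'A2', 'B1', 'B2', 'C1', 'C2']\nbeta = {lv: round(float(st.norm.ppf((2 * i + 1) / 12)), 2) for i, lv in enumerate(levels)}\nprint(beta)  # {'A1': -1.38, 'A2': -0.67, 'B1': -0.21, 'B2': 0.21, 'C1': 0.67, 'C2': 1.38}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task Definition",

"sec_num": "2"

},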
|
{ |
|
"text": "Words and documents consist of mutually exclusive unlabeled subsets W U and D U and labeled subsets W L and D L . The function f is inferred using the supervision signal from W L and D L , and potentially other signals from W U and D U (e.g., relationship between words and documents).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Definition", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We first show how the readability of words and documents are recursively related to each other. We then introduce a method based on graph convolutional networks (GCN) to capture such relationship. Word difficulty is correlated to the minimum difficulty of the document where that word appears, and document difficulty is correlated to the maximum difficulty of a word in that document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Exploiting Recursive Relationship by Graph Convolutional Networks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The motivation of using a graph-based method for difficulty classification is the recursive relationship of word and document difficulty. Figure 2 shows such recursive relationship using the difficultylabeled datasets explained in Section 5. One insight here is the strong correlation between the difficulty of a document and the maximum difficulty of a word in that document. This is intuitive and shares motivation with a method which exploits hierarchical structure of a document (Yang et al., 2016) . However, the key insight here is the strong correlation between the difficulty of a word and the minimum difficulty of a document where that word appears, indicating that the readability of words informs that of documents, and vise versa.", |
|
"cite_spans": [ |
|
{ |
|
"start": 483, |
|
"end": 502, |
|
"text": "(Yang et al., 2016)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 146, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recursive Relationship of Word and Document Difficulty", |
|
"sec_num": "3.1" |
|
}, |
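
{

"text": "To make the analysis behind Figure 2 concrete, here is a minimal Python sketch (with hypothetical input formats; not the authors' analysis code) that computes the two quantities being correlated: each word's minimum document difficulty and each document's maximum word difficulty.\nimport numpy as np\n# docs: list of (tokens, difficulty) pairs; word_diff: dict mapping word -> difficulty\ndef recursive_signals(docs, word_diff):\n    word_min_doc = {}   # word -> min difficulty over documents containing it\n    doc_max_word = []   # per document, max difficulty over its labeled words\n    for tokens, d in docs:\n        for w in set(tokens):\n            word_min_doc[w] = min(word_min_doc.get(w, np.inf), d)\n        known = [word_diff[w] for w in tokens if w in word_diff]\n        doc_max_word.append(max(known) if known else None)\n    return word_min_doc, doc_max_word",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recursive Relationship of Word and Document Difficulty",

"sec_num": "3.1"

},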
|
{ |
|
"text": "To capture the recursive, potentially nonlinear relationship between word and document readability while leveraging supervision signals and features, we propose to use graph convolutional networks (Kipf and Welling, 2017, GCNs) specifically built for text classification (Yao et al., 2019) , which treats words and documents as nodes. Intuitively, the hidden layers in GCN, which recursively connects word and document nodes, encourage exploiting the recursive word-document relationship. Given a heterogeneous word-document graph G = (V, E) and its adjacency matrix A \u2208 R |V |\u00d7|V | , the hidden states for each layer H n \u2208 R |V |\u00d7hn in a GCN with N hidden layers is com-puted using the previous layer H n\u22121 as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 289, |
|
"text": "(Yao et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "H n = \u03c3(\u00c3H n\u22121 W n ) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where \u03c3 is the ReLU function 3 ,\u00c3 = D \u2212 1 2 AD \u2212 1 2 i.e., a symmetrically normalized matrix of A with its degree matrix D, and W n \u2208 R h n\u22121 \u00d7hn is the weight matrix for the nth layer. The input to the first layer H 1 is H 0 = X where X \u2208 R |V |\u00d7h 0 is the feature matrix with h 0 dimensions for each node in V . We use three different edge weights following Yao et al. 2019(1) A ij = tfidf ij if i is a document and j is a word, (2) the normalized point-wise mutual information (PMI) i.e., A ij = PMI(i, j) if both i and j are words, and (3) selfloops, i.e., A ii = 1 for all i.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
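
{

"text": "A minimal NumPy sketch of Equation (1) under the edge-weight scheme above (our illustration; the released implementation may differ):\nimport numpy as np\ndef normalize_adj(A):\n    # \u00c3 = D^{-1/2} A D^{-1/2}, where D is the degree matrix of A\n    d = A.sum(axis=1)\n    d_inv_sqrt = np.zeros_like(d)\n    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5\n    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]\ndef gcn_layer(A_hat, H_prev, W):\n    # H_n = ReLU(\u00c3 H_{n-1} W_n); A carries tf-idf document-word edges, PMI word-word edges, and self-loops\n    return np.maximum(A_hat @ H_prev @ W, 0.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph Convolutional Networks on Word-Document Graph",

"sec_num": "3.2"

},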
|
{ |
|
"text": "We now describe the components which differs from Yao et al. (2019) . We use separate final linear layers for words and documents 4 :", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 67, |
|
"text": "Yao et al. (2019)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Z w = H N W w + b w (2) Z d = H N W d + b d", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where W and b are the weight and bias of the layer, and used a linear combination of word and document losses weighted by \u03b1 (Figure 1 )", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 133, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L = \u03b1L(Z w ) + (1 \u2212 \u03b1)L(Z d )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For regression, we used Z (Z w for words and Z d for documents) as the prediction of node v and used the mean squared error (MSE):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "L(Z) = 1 |V L | v\u2208V L (Z v \u2212 Y v ) 2 (5) where V L = W L \u222a D L", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "is the set of labeled nodes. For classification, we use a softmax layer followed by a cross-entropy (CE) loss:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(Z) = \u2212 v\u2208V L log exp(Z v,Yv ) i exp(Z v,i ) .", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
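
{

"text": "The separate output heads and the combined loss of Equations (2)-(4) and (6) can be sketched in PyTorch as follows (a sketch under our naming assumptions: head_w and head_d are torch.nn.Linear layers, and word_idx/doc_idx index the labeled word and document nodes):\nimport torch\nimport torch.nn.functional as F\ndef joint_loss(H_N, head_w, head_d, word_idx, doc_idx, y_w, y_d, alpha):\n    Z_w = head_w(H_N[word_idx])  # Eq. (2): word head\n    Z_d = head_d(H_N[doc_idx])   # Eq. (3): document head\n    # classification variant: softmax followed by cross-entropy on labeled nodes, Eq. (6)\n    L_w = F.cross_entropy(Z_w, y_w)\n    L_d = F.cross_entropy(Z_d, y_d)\n    return alpha * L_w + (1 - alpha) * L_d  # Eq. (4)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Graph Convolutional Networks on Word-Document Graph",

"sec_num": "3.2"

},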
|
{ |
|
"text": "Since GCN is transductive, node set V also includes the unlabeled nodes from the evaluation sets and have predicted difficulty labels assigned when training is finished.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3 A simplified version of GCN with linear layers (Wu et al., 2019) in preliminary experiments shows that hidden layers with ReLU performed better. 4 A model variant with a common linear layer (i.e., original GCN) for both words and documents did not perform as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 66, |
|
"text": "(Wu et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 147, |
|
"end": 148, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph Convolutional Networks on Word-Document Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Train Dev Test Words (CEFR-J + C1/C2) 2,043 447 389 Documents (Cambridge + A1) 482 103 98 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Datasets We use publicly available English CEFR-annotated resources for second language learners, such as CEFR-J (Negishi et al., 2013) Vocabulary Profile as words and Cambridge English Readability Dataset (Xia et al., 2016) as documents (Table 1 ). Since these two datasets lack C1/C2-level words and A1 documents, we hired a linguistic PhD to write these missing portions 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 135, |
|
"text": "(Negishi et al., 2013)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 224, |
|
"text": "(Xia et al., 2016)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 246, |
|
"text": "(Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Baselines We compare our method against methods used in previous work (Feng et al., 2010; Vajjala and Meurers, 2012; Martinc et al., 2019; Deutsch et al., 2020) : (1) logistic regression for classification (LR cls), (2) linear regression for regression (LR regr), (3) Gradient Boosted Decision Tree (GBDT), and (4) Hierarchical Attention Network (Yang et al., 2016, HAN) , which is reported as one of the state-of-the-art methods in readability assessment for documents (Martinc et al., 2019; Deutsch et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 89, |
|
"text": "(Feng et al., 2010;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 90, |
|
"end": 116, |
|
"text": "Vajjala and Meurers, 2012;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 138, |
|
"text": "Martinc et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 160, |
|
"text": "Deutsch et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 370, |
|
"text": "(Yang et al., 2016, HAN)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 492, |
|
"text": "(Martinc et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 514, |
|
"text": "Deutsch et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Features For all methods except for HAN, we use both surface or \"traditional\" (Vajjala and Meurers, 2012) and embedding features on words and documents which are shown to be effective for readability estimation (Culligan, 2015; Settles et al., 2020; Deutsch et al., 2020) . For words, we use their length (in characters), the log frequency in Wikipedia (Ginter et al., 2017) , and GloVe (Pennington et al., 2014) . For documents, we use the number of NLTK (Loper and Bird, 2002) -tokenized words in a document, and the output of embeddings from BERT-base model (Devlin et al., 2019) which are averaged over all tokens in a given sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 105, |
|
"text": "(Vajjala and Meurers, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 227, |
|
"text": "(Culligan, 2015;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 249, |
|
"text": "Settles et al., 2020;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 271, |
|
"text": "Deutsch et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 374, |
|
"text": "(Ginter et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 412, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 478, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 582, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
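
{

"text": "A minimal sketch of assembling these features (our illustration; glove and wiki_logfreq are hypothetical stand-ins for the pretrained GloVe table and the Wikipedia frequency counts, and the exact BERT checkpoint is an assumption):\nimport numpy as np\nimport nltk\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\ntok = AutoTokenizer.from_pretrained('bert-base-uncased')\nbert = AutoModel.from_pretrained('bert-base-uncased')\ndef word_features(w, glove, wiki_logfreq):\n    # length in characters, log Wikipedia frequency, and a 300-d GloVe vector\n    return np.concatenate(([len(w), wiki_logfreq.get(w, 0.0)], glove.get(w, np.zeros(300))))\ndef doc_features(text):\n    n_tokens = len(nltk.word_tokenize(text))  # NLTK token count (assumes punkt data is installed)\n    enc = tok(text, return_tensors='pt', truncation=True)\n    with torch.no_grad():\n        h = bert(**enc).last_hidden_state  # shape (1, seq_len, 768)\n    return np.concatenate(([n_tokens], h.mean(dim=1).squeeze(0).numpy()))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},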
|
{ |
|
"text": "Hyperparameters We conduct random hyperparameter search with 200 samples, separately selecting two different sets of hyperparameters, one optimized for word difficulty and the other for document. We set the number of hidden layers N = 2 with h n = 512 for documents and N = 1 with h n = 64 for words. See Appendix A for the details on other hyperparameters. Table 2 : Difficulty estimation results in accuracy (Acc) and correlation (Corr) on classification outputs converted to continuous values by taking the max (cls+m) or weighted sum (cls+w) and regression (regr) variants for the logistic regression (LR) and GCN.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 358, |
|
"end": 365, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Evaluation We use accuracy and Spearman's rank correlation as the metrics. When calculating the correlation for a classification model, we convert the discrete outputs into continuous values in two ways: (1) convert the CEFR label with the maximum probability into corresponding \u03b2 in Section 2, (cls+m), or (2) take a sum of all \u03b2 in six labels weighted by their probabilities (cls+w). Table 2 shows the test accuracy and correlation results. GCNs show increase in both document accuracy and word accuracy compared to the baseline. We infer that this is because GCN is good at capturing the relationship between words and documents. For example, the labeled training documents include an A1 document and that contains the word \"bicycle,\" and the difficulty label of the document is explicitly propagated to the \"bicycle\" word node, whereas the logistic regression baseline mistakenly predicts as A2-level, since it relies solely on the input features to capture its similarities. Table 3 shows the ablation study on the features explained in Section 4. By comparing Table 2 and Table 3 , which are experimented on the same datasets, GCN without using any traditional or embedding features (\"None\") shows comparative results to some baselines, especially on word-level accuracy. Therefore, the structure of the worddocument graph provides effective and complementary signal for readability estimation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 386, |
|
"end": 393, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 980, |
|
"end": 987, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1066, |
|
"end": 1086, |
|
"text": "Table 2 and Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
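
{

"text": "The two conversions can be written down directly (a short sketch; probs is the six-way softmax output of a classifier, ordered A1 to C2):\nimport numpy as np\nBETA = np.array([-1.38, -0.67, -0.21, 0.21, 0.67, 1.38])  # beta values from Section 2\ndef cls_m(probs):\n    return float(BETA[int(np.argmax(probs))])  # beta of the most probable label\ndef cls_w(probs):\n    return float(np.dot(probs, BETA))  # probability-weighted sum of beta",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},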
|
{ |
|
"text": "Overall, the BERT embedding is a powerful fea- Table 3 : Ablation study on the features used. \"None\" is when applying GCN without any features (X = I i.e., one-hot encoding per node), which solely relies on the word-document structure of the graph.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 54, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Study on Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "ture for predicting document readability on Cambridge Readabilty Dataset. Ablating the BERT embeddings (Table 3 ) significantly decreases the document accuracy (\u22120.112) which is consistent with the previous work (Martinc et al., 2019; Deutsch et al., 2020 ) that BERT being one of the bestperforming method for predicting document readability on one of the datasets they used, and HAN performing relatively low due to not using the BERT embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 234, |
|
"text": "(Martinc et al., 2019;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 255, |
|
"text": "Deutsch et al., 2020", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 111, |
|
"text": "(Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Study on Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "To analyze whether GCN is robust when training dataset is small, we compare the baseline and GCN by varying the amount of labeled training data. In Figure 3 , we observe consistent improvement in GCN over the baseline especially in word accuracy. This outcome suggests that the performance of GCN stays robust even with smaller training data by exploiting the signals gained from the recursive word-document relationship and their structure. Another trend observed in Figure 3 is the larger gap in word accuracy compared to document accuracy when the training data is small likely due to GCN explicitly using context given by worddocument edges.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 156, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 476, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training on Less Labeled Data", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this paper, we proposed a GCN-based method to jointly estimate the readability on both words and documents. We experimentally showed that GCN achieves higher accuracy by capturing the recursive difficulty relationship between words and documents, even when using a smaller amount of labeled data. GCNs are a versatile framework that allows inclusion of diverse types of nodes, such as subwords, paragraphs, and even grammatical concepts. We leave this investigation as future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We assumed the difficulty estimate \u03b2 is normally distributed and used the mid-point of six equal portions of N (0, 1) when mapping CEFR levels to \u03b2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The dataset is available at https://github.com/ openlanguageprofiles/olp-en-cefrj.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank Adam Wiemerslage, Michael J. Paul, and anonymous reviewers for their detailed and constructive feedback. We also thank Kathleen Hall for her help with annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We conduct random hyperparameter search with 200 samples in the following ranges: \u03b1 \u2208 {0.1, 0.2, ..., 0.9}, the learning rate from {1, 2, 5, 10, 20, 50, 100} \u00d7 10 \u22124 , dropout probability from {0.1, 0.2, ..., 0.5}, the number of epochs from {250, 500, 1000, 1500, 2000}, the number of hidden units h n \u2208 {32, 64, 128, 256, 512, 1024}, the number of hidden layers from {1, 2, 3}, and the PMI window width from {disabled, 5, 10, 15, 20}.We now describe the selected best combination of hyperparameters for each setting. For GCN in the classification setting, the selected hyperparameters for document difficulty estimation are: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Hyperparameter Details", |
|
"sec_num": null |
|
} |
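
,

{

"text": "The search itself can be sketched as follows (our illustration with hypothetical key names; each of the 200 samples draws one value per range listed above):\nimport random\nSPACE = {\n    'alpha': [i / 10 for i in range(1, 10)],\n    'lr': [c * 1e-4 for c in (1, 2, 5, 10, 20, 50, 100)],\n    'dropout': [0.1, 0.2, 0.3, 0.4, 0.5],\n    'epochs': [250, 500, 1000, 1500, 2000],\n    'hidden_units': [32, 64, 128, 256, 512, 1024],\n    'hidden_layers': [1, 2, 3],\n    'pmi_window': [None, 5, 10, 15, 20],  # None disables PMI edges\n}\nconfigs = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(200)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A Hyperparameter Details",

"sec_num": null

}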
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Towards single word lexical complexity prediction", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Alfter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Volodina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-0508" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Alfter and Elena Volodina. 2018. Towards sin- gle word lexical complexity prediction. In Proceed- ings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A1-B2 vocabulary: insights and issues arising from the English Profile Wordlists project", |
|
"authors": [ |
|
{ |
|
"first": "Annette", |
|
"middle": [], |
|
"last": "Capel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "English Profile Journal", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annette Capel. 2010. A1-B2 vocabulary: insights and issues arising from the English Profile Wordlists project. English Profile Journal, 1.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Completing the english vocabulary profile: C1 and C2 vocabulary", |
|
"authors": [ |
|
{ |
|
"first": "Annette", |
|
"middle": [], |
|
"last": "Capel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "English Profile Journal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annette Capel. 2012. Completing the english vocabu- lary profile: C1 and C2 vocabulary. English Profile Journal, 3.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Common European Framework of Reference for Languages: Learning, Teaching", |
|
"authors": [], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Council of Europe. 2001. Common European Frame- work of Reference for Languages: Learning, Teach- ing, Assessment. Press Syndicate of the University of Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A comparison of three test formats to assess word difficulty", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brent Culligan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Language Testing", |
|
"volume": "32", |
|
"issue": "4", |
|
"pages": "503--520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brent Culligan. 2015. A comparison of three test for- mats to assess word difficulty. Language Testing, 32(4):503-520.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Linguistic features for readability assessment", |
|
"authors": [ |
|
{ |
|
"first": "Tovly", |
|
"middle": [], |
|
"last": "Deutsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masoud", |
|
"middle": [], |
|
"last": "Jasbi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.bea-1.1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tovly Deutsch, Masoud Jasbi, and Stuart Shieber. 2020. Linguistic features for readability assessment. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Matt Huenerfauth, and No\u00e9mie Elhadad. 2010. A comparison of features for automatic readability assessment", |
|
"authors": [ |
|
{ |
|
"first": "Lijun", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Jansche", |
|
"suffix": "" |
|
},

{

"first": "Matt",

"middle": [],

"last": "Huenerfauth",

"suffix": ""

},

{

"first": "No\u00e9mie",

"middle": [],

"last": "Elhadad",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lijun Feng, Martin Jansche, Matt Huenerfauth, and No\u00e9mie Elhadad. 2010. A comparison of features for automatic readability assessment.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "FLELex: a graded lexical resource for French foreign learners", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Fran\u00e7ois", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N\u00f9ria", |
|
"middle": [], |
|
"last": "Gala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Watrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C\u00e9drick", |
|
"middle": [], |
|
"last": "Fairon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Fran\u00e7ois, N\u00f9ria Gala, Patrick Watrin, and C\u00e9- drick Fairon. 2014. FLELex: a graded lexical re- source for French foreign learners. In Proceedings of the Language Resources and Evaluation Confer- ence.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "SVALex: a CEFR-graded lexical resource for Swedish foreign and second language learners", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Fran\u00e7ois", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Volodina", |
|
"suffix": "" |
|
},

{

"first": "Ildik\u00f3",

"middle": [],

"last": "Pil\u00e1n",

"suffix": ""

},

{

"first": "Ana\u00efs",

"middle": [],

"last": "Tack",

"suffix": ""

}
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Fran\u00e7ois, Elena Volodina, Ildik\u00f3 Pil\u00e1n, and Ana\u00efs Tack. 2016. SVALex: a CEFR-graded lexical resource for Swedish foreign and second language learners. In Proceedings of the Language Resources and Evaluation Conference.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University", |
|
"authors": [ |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juhani", |
|
"middle": [], |
|
"last": "Luotolahti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Filip Ginter, Jan Haji\u010d, Juhani Luotolahti, Milan Straka, and Daniel Zeman. 2017. CoNLL 2017 shared task -automatically annotated raw texts and word embed- dings. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles Uni- versity.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Retrieval of reading materials for vocabulary and reading practice", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Pino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxine", |
|
"middle": [], |
|
"last": "Eskenazi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Heilman, Le Zhao, Juan Pino, and Maxine Es- kenazi. 2008. Retrieval of reading materials for vo- cabulary and reading practice. In Proceedings of the Third Workshop on Innovative Use of NLP for Build- ing Educational Applications, pages 80-88.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Semisupervised classification with graph convolutional networks", |
|
"authors": [ |
|
{

"first": "Thomas",

"middle": [

"N."

],

"last": "Kipf",

"suffix": ""

},

{

"first": "Max",

"middle": [],

"last": "Welling",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In Proceedings of the International Con- ference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "NLTK: The natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Loper and Steven Bird. 2002. NLTK: The nat- ural language toolkit. In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Applications of Item Response Theory To Practical Testing Problems", |
|
"authors": [ |
|
{ |
|
"first": "Frederic", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Lord", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frederic M. Lord. 1980. Applications of Item Response Theory To Practical Testing Problems. Lawrence Erlbaum Associates.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Supervised and unsupervised neural approaches to text readability", |
|
"authors": [ |
|
{ |
|
"first": "Matej", |
|
"middle": [], |
|
"last": "Martinc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Senja", |
|
"middle": [], |
|
"last": "Pollak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marko", |
|
"middle": [], |
|
"last": "Robnik-Sikonja", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matej Martinc, Senja Pollak, and Marko Robnik- Sikonja. 2019. Supervised and unsupervised neural approaches to text readability. CoRR, abs/1907.11779.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A progress report on the development of the CEFR-J", |
|
"authors": [ |
|
{ |
|
"first": "Masashi", |
|
"middle": [], |
|
"last": "Negishi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoko", |
|
"middle": [], |
|
"last": "Takada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yukio", |
|
"middle": [], |
|
"last": "Tono", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Exploring language frameworks: Proceedings of the ALTE Krak\u00f3w Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "135--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Masashi Negishi, Tomoko Takada, and Yukio Tono. 2013. A progress report on the development of the CEFR-J. In Exploring language frameworks: Pro- ceedings of the ALTE Krak\u00f3w Conference, pages 135-163.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word rep- resentation. In Proceedings of Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Machine learning-driven language assessment", |
|
"authors": [ |
|
{ |
|
"first": "Burr", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Laflair", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masato", |
|
"middle": [], |
|
"last": "Hagiwara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "247--263", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00310" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Burr Settles, Geoffrey T. LaFlair, and Masato Hagi- wara. 2020. Machine learning-driven language as- sessment. Transactions of the Association for Com- putational Linguistics, 8:247-263.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "On improving the accuracy of readability classification using insights from second language acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Sowmya", |
|
"middle": [], |
|
"last": "Vajjala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Detmar", |
|
"middle": [], |
|
"last": "Meurers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sowmya Vajjala and Detmar Meurers. 2012. On im- proving the accuracy of readability classification us- ing insights from second language acquisition. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Experiments with universal CEFR classification", |
|
"authors": [ |
|
{ |
|
"first": "Sowmya", |
|
"middle": [], |
|
"last": "Vajjala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taraka", |
|
"middle": [], |
|
"last": "Rama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-0515" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sowmya Vajjala and Taraka Rama. 2018. Experiments with universal CEFR classification. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Grammatical templates: Improving text difficulty evaluation for language learners", |
|
"authors": [ |
|
{ |
|
"first": "Shuhan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Andersen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuhan Wang and Erik Andersen. 2016. Grammatical templates: Improving text difficulty evaluation for language learners.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Simplifying graph convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianyi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amauri", |
|
"middle": [], |
|
"last": "Holanda De Souza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Fifty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the International Conference of Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Wu, Tianyi Zhang, Amauri Holanda de Souza Jr., Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. 2019. Simplifying graph convolutional networks. In Proceedings of the International Conference of Ma- chine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Text readability assessment for second language learners", |
|
"authors": [ |
|
{ |
|
"first": "Menglin", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Kochmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-0502" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Menglin Xia, Ekaterina Kochmar, and Ted Briscoe. 2016. Text readability assessment for second lan- guage learners. In Proceedings of the 11th Work- shop on Innovative Use of NLP for Building Educa- tional Applications.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Hierarchical attention networks for document classification", |
|
"authors": [ |
|
{ |
|
"first": "Zichao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diyi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Smola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1174" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Graph convolutional networks for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengsheng", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Association for the Advancement of Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Association for the Advancement of Artificial In- telligence.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A report on the complex word identification shared task 2018", |
|
"authors": [ |
|
{

"first": "Seid",

"middle": [

"Muhie"

],

"last": "Yimam",

"suffix": ""

},

{

"first": "Chris",

"middle": [],

"last": "Biemann",

"suffix": ""

},

{

"first": "Shervin",

"middle": [],

"last": "Malmasi",

"suffix": ""

},

{

"first": "Gustavo",

"middle": [],

"last": "Paetzold",

"suffix": ""

},

{

"first": "Lucia",

"middle": [],

"last": "Specia",

"suffix": ""

},

{

"first": "Sanja",

"middle": [],

"last": "\u0160tajner",

"suffix": ""

},

{

"first": "Ana\u00efs",

"middle": [],

"last": "Tack",

"suffix": ""

},

{

"first": "Marcos",

"middle": [],

"last": "Zampieri",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-0507" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seid Muhie Yimam, Chris Biemann, Shervin Mal- masi, Gustavo Paetzold, Lucia Specia, Sanja \u0160tajner, Ana\u00efs Tack, and Marcos Zampieri. 2018. A report on the complex word identification shared task 2018. In Proceedings of the Thirteenth Workshop on Inno- vative Use of NLP for Building Educational Appli- cations.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": ") = Conv(H1(w3)) H1(d4) = Conv(H0(w1), H0(w3), H0(w4))", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Recursive relationship of word/document difficulty.", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Word and document accuracy with different amount of training data used.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"text": "Dataset size for words and documents", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |