{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:53.680127Z" }, "title": "Using Linguistic Features to Predict the Response Process Complexity Associated with Answering Clinical MCQs", "authors": [ { "first": "Victoria", "middle": [], "last": "Yaneva", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Board of Medical Examiners", "location": { "settlement": "Philadelphia", "country": "USA" } }, "email": "vyaneva@nbme.org" }, { "first": "Daniel", "middle": [], "last": "Jurich", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Board of Medical Examiners", "location": { "settlement": "Philadelphia", "country": "USA" } }, "email": "djurich@nbme.org" }, { "first": "Le", "middle": [ "An" ], "last": "Ha", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Wolverhamton", "location": { "country": "UK" } }, "email": "" }, { "first": "Peter", "middle": [], "last": "Baldwin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Board of Medical Examiners", "location": { "settlement": "Philadelphia", "country": "USA" } }, "email": "pbaldwin@nbme.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This study examines the relationship between the linguistic characteristics of a test item and the complexity of the response process required to answer it correctly. Using data from a large-scale medical licensing exam, clustering methods identified items that were similar with respect to their relative difficulty and relative response-time intensiveness to create low response process complexity and high response process complexity item classes. Interpretable models were used to investigate the linguistic features that best differentiated between these classes from a descriptive and predictive framework. Results suggest that nuanced features such as the number of ambiguous medical terms help explain response process complexity beyond superficial item characteristics such as word count. Yet, although linguistic features carry signal relevant to response process complexity, the classification of individual items remains challenging.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This study examines the relationship between the linguistic characteristics of a test item and the complexity of the response process required to answer it correctly. Using data from a large-scale medical licensing exam, clustering methods identified items that were similar with respect to their relative difficulty and relative response-time intensiveness to create low response process complexity and high response process complexity item classes. Interpretable models were used to investigate the linguistic features that best differentiated between these classes from a descriptive and predictive framework. Results suggest that nuanced features such as the number of ambiguous medical terms help explain response process complexity beyond superficial item characteristics such as word count. Yet, although linguistic features carry signal relevant to response process complexity, the classification of individual items remains challenging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The success of high-stakes exams, such as those used in licensing, certification, and college admission, depends on the use of items (test questions) that meet stringent quality criteria. 
To provide useful information about examinee ability, good items must be neither too difficult, nor too easy for the intended test-takers. Furthermore, the timing demands of items should be such that different exam forms seen by different test-takers should entail similar times to complete. Nevertheless, while an extreme difficulty or mean response time can indicate that an item is not functioning correctly, within these extremes variability in difficulty and item response time is expected. For good items, it is hoped that this variability simply reflects the breadth and depth of the relevant exam content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The interaction between item difficulty (as measured by the proportion of examinees who respond correctly) and time intensiveness (as measured by the average time examinees spend answering) can help quantify the complexity of the response process associated with an item. This is valuable, since the more we know about the way examinees think about the problem presented in an item, the better we can evaluate exam validity. Although easier items usually require less time than difficult items, the interaction between these two item properties is not strictly linear -examinees may spend very little time responding to certain difficult items and, likewise, examinees may spend a great deal of time on items that are relatively easy. The idea of response process complexity is best illustrated with items that have similar difficulty but different mean response times. In such cases, one item may require the formation of a complex cognitive model of the problem and thus take a long time, while another item with a similar level of difficulty may require factual knowledge that few examinees recall (or that many recall incorrectly) and thus take a short time on average. The interaction between item difficulty and time intensity can therefore provide valuable information about the complexity of the response process demanded by an item, which, we argue, can be further explained by examining the linguistic properties of the item.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we use a data-driven approach to capture the interaction between item difficulty and response time within a pool of 18,961 multiplechoice items from a high-stakes medical exam, where each item was answered by 335 examinees on average. For our data, this resulted in the definition of two clusters, one of which consisted of items that are relatively easy and less time-intensive, and another one which consisted of items that are relatively difficult and/or time-intensive. For the purposes of this study, we name these two clusters low-complexity class and high-complexity class, respectively. The use of the term response process A 16-year-old boy is brought to the emergency department because of a 2-day history of fever, nausea, vomiting, headache, chills, and fatigue. He has not had any sick contacts. He underwent splenectomy for traumatic injury at the age of 13 years. He has no other history of serious illness and takes no medications. He appears ill. His temperature is 39.2\u00b0C (102.5\u00b0F), pulse is 130/min, respirations are 14/min, and blood pressure is 110/60 mm Hg. On pulmonary examination, scattered crackles are heard bilaterally. Abdominal shows a well-healed midline scar and mild, diffuse tenderness to palpation. 
Which of the following is the most appropriate next step in management? (A) Antibiotic therapy (B) Antiemetic therapy (C) CT scan of the chest (D) X-ray of the abdomen (E) Reassurance Table 1 : An example of a practice item complexity here is not based on an operational definition of this construct, which would require extensive research on its own, but rather, as a succinct label that summarises the differences between the two classes along the interaction of empirical item difficulty and item time intensiveness.", "cite_spans": [], "ref_spans": [ { "start": 1433, "end": 1440, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Studying the linguistic characteristics of these two categories may help test developers gain a more nuanced understanding of how cognitively complex items differ from those with a straightforward solution. Provided that strong relationships are found, such insight can also be used to guide item writers or inform innovative automated item generation algorithms when seeking to create high-or low-complexity items. For this reason, our goal is not to train a black-box model to predict item complexity; instead, our goal is to isolate interpretable relationships between item text and item complexity that can inform our understanding of the response process and provide better itemwriting strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to its utility for improving highstakes exams, the problem of modeling response process complexity is interesting from an NLP perspective because it requires the modeling of cognitive processes beyond reading comprehension. This is especially relevant for the data used here because, as we explain in Section 3 below, the items in our bank assess expert-level clinical knowledge and are written to a common reading level using standardized language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contributions: i) We use unsupervised clustering to define classes of high and low responseprocess complexity from a large sample of items and test-takers in a high-stakes medical exam; ii) the study provides empirical evidence that linguistic characteristics carry signal relevant to an item's response process complexity; iii) the most predictive features are identified through several feature selection methods and their potential relationship to response process complexity is discussed; iv) the errors made by the model and their implications for predicting response process complexity are analysed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section discusses related work on the topics of modeling item difficulty and response time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Most NLP studies modeling the difficulty of test questions for humans have been conducted in the domain of reading comprehension, where the readability of reading passages is associated with the difficulty of their corresponding comprehension questions (Huang et al., 2017; Beinborn et al., 2015; Loukina et al., 2016) . 
For other exams, taxonomies representing knowledge dimensions and cognitive processes involved in the completion of a test task have been used to predict the difficulty of short-answer questions (Pad\u00f3, 2017) and identify skills required to answer school science questions (Nadeem and Ostendorf, 2017) . Difficulty prediction has also been explored in the context of evaluating automatically generated questions (Alsubait et al., 2013; Ha and Yaneva, 2018; Kurdi, 2020; through measures such as question-answer similarity.", "cite_spans": [ { "start": 253, "end": 273, "text": "(Huang et al., 2017;", "ref_id": "BIBREF10" }, { "start": 274, "end": 296, "text": "Beinborn et al., 2015;", "ref_id": "BIBREF3" }, { "start": 297, "end": 318, "text": "Loukina et al., 2016)", "ref_id": "BIBREF14" }, { "start": 516, "end": 528, "text": "(Pad\u00f3, 2017)", "ref_id": "BIBREF18" }, { "start": 593, "end": 621, "text": "(Nadeem and Ostendorf, 2017)", "ref_id": "BIBREF17" }, { "start": 732, "end": 755, "text": "(Alsubait et al., 2013;", "ref_id": "BIBREF0" }, { "start": 756, "end": 776, "text": "Ha and Yaneva, 2018;", "ref_id": "BIBREF7" }, { "start": 777, "end": 789, "text": "Kurdi, 2020;", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Response time prediction has mainly been explored in the field of educational testing using predictors such as item presentation position (Parshall et al., 1994) , item content category (Parshall et al., 1994; Smith, 2000) , the presence of a figure (Smith, 2000; Swanson et al., 2001) , and item difficulty and discrimination (Halkitis et al., 1996; Smith, 2000) . The only text-related feature explored in these studies was word count, and it was shown to have a very limited predictive power in most domains.", "cite_spans": [ { "start": 138, "end": 161, "text": "(Parshall et al., 1994)", "ref_id": "BIBREF19" }, { "start": 186, "end": 209, "text": "(Parshall et al., 1994;", "ref_id": "BIBREF19" }, { "start": 210, "end": 222, "text": "Smith, 2000)", "ref_id": "BIBREF22" }, { "start": 250, "end": 263, "text": "(Smith, 2000;", "ref_id": "BIBREF22" }, { "start": 264, "end": 285, "text": "Swanson et al., 2001)", "ref_id": "BIBREF23" }, { "start": 327, "end": 350, "text": "(Halkitis et al., 1996;", "ref_id": "BIBREF9" }, { "start": 351, "end": 363, "text": "Smith, 2000)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Several studies have explored the prediction of item difficulty and response time in the context of clinical multiple choice questions (MCQs). Ha et al. (2019) propose a large number of linguis-tic features and embeddings for modeling item difficulty. The results show that the full model outperforms several baselines with a statistically significant improvement, however, its practical significance for successfully predicting item difficulty remains limited, confirming the challenging nature of the problem. Continuations of this study include the use of transfer learning to predict difficulty and response time (Xue et al., 2020), as well as using predicted difficulty for filtering out items that are too easy or too difficult for the intended examinee population . used a broad range of linguistic features and embeddings (similar to those in Ha et al. 2019)to predict item response time, showing that a wide range of linguistic predictors at various levels of linguistic processing were all relevant to responsetime prediction. 
The predicted response times were then used in a subsequent experiment to improve fairness by reducing the time intensity variance of exam forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The data 1 used in this study comprises 18,961", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Step 2 Clinical Knowledge items from the United States Medical Licensing Examination (USMLE \u00ae ), a large-scale high-stakes medical assessment. All items were MCQs. An example practice item 2 is given in Table 1 . The exam comprises several one-hour testing blocks with 40 items per block. All items test medical knowledge and are written by experienced item-writers following guidelines intended to produce items that vary in their difficulty and response times only due to differences in the medical content they assess. These guidelines stipulate that item writers adhere to a standard structure and avoid excessive verbosity, extraneous material not needed to answer the item, information designed to mislead the test-taker, and grammatical cues (e.g., correct answers that are more specific than the other options). All items were administered between 2010 and 2015 as pretest items and presented alongside scored items on operational exams. Examinees were medical students from accredited US and Canadian medical schools taking the exam for the first time and had no way of knowing which items were pretest items and which were scored. 1 The data cannot be made available due to exam security considerations.", "cite_spans": [ { "start": 1133, "end": 1134, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 203, "end": 210, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "2 Source: https://www.usmle.org/pdfs/step-2-ck/2020_Step2CK_SampleItems.pdf. On average, each item was attempted by 335 examinees (SD = 156.8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We base our definition of the two classes of items on empirical item difficulty and time intensity. Item difficulty is measured by the proportion of examinees who answered the item correctly, a metric commonly referred to by the educational testing community as p-value and calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "P_i = \\frac{\\sum_{n=1}^{N} U_n}{N} ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "where P_i is the p-value for item i, U_n is the 0-1 score (incorrect-correct) on item i earned by examinee n, and N is the total number of examinees in the sample. Thus, difficulty measured in this way ranges from 0 to 1 and higher values correspond to easier items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "Time intensity is found by taking the arithmetic mean response time, measured in seconds, across all examinees who attempted a given item.
This includes all time spent on the item from the moment it is presented on the screen until the examinee moves to the next item, as well as any revisits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "To assign items to classes, p-value and mean response time are rescaled such that each variable has a mean of 0 and a standard deviation of 1. Moreover, we use two quantitative methods to categorize items and retain only those items where there was agreement between the two methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "Method 1: Items were classified by applying a K-means clustering algorithm via the kmeans function in Python's Scikit-learn (Pedregosa et al., 2011) . K-means is an unsupervised data classification technique that discovers patterns in the data by assigning instances to a pre-defined number of classes (Wagstaff et al., 2001 ). This approach also allows us to evaluate the plausibility of categorizing items into more than two complexity classes, or whether the items fail to show any meaningful separation along the interaction of p-value and duration (one class). Results suggest that two classes best fit these data and identified 11,067 items as low complexity and 7,894 items as high complexity 3 .", "cite_spans": [ { "start": 124, "end": 148, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF20" }, { "start": 302, "end": 324, "text": "(Wagstaff et al., 2001", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "Method 2: Any item with a rescaled p-value greater than its rescaled mean response time -indicating that the item is relatively easier than it is time-consuming -is classified as low-complexity (11,682 items). Likewise, the remaining items, which had rescaled p-values less than their rescaled mean response times, were assigned to the highcomplexity class (7,279 items). Put another way, if an item takes less time than we would expect given its difficulty, the item is classified as low response process complexity and if it takes more time than we would expect, it is classified as high response process complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "The two methods achieved strong agreement, with only 673 (3.5%) items being assigned to different classes across methods. These discrepant items are excluded, leaving a total of 18,288 items for further analysis: 11,038 low-complexity items and 7,250 high-complexity ones. Figure 1 shows the class assignment, p-value, and mean response time for each item. As can be seen from the figure, the class of lowcomplexity items was dense and homogenous compared to the high-complexity class, meaning that it contained a large number of easy items whose response times were always below 125 seconds. 
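The assignment procedure described above can be summarized in a short sketch. The per-item values below are simulated stand-ins for the operational p-values and mean response times, and the code is illustrative rather than the authors' implementation.

```python
# Illustrative sketch of the two class-assignment methods (simulated data, not the study data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

p_value = np.random.uniform(0.3, 1.0, size=500)   # proportion correct per item
mean_rt = np.random.uniform(40, 200, size=500)    # mean response time in seconds per item

# Rescale both variables to mean 0 and standard deviation 1.
z = StandardScaler().fit_transform(np.column_stack([p_value, mean_rt]))
z_p, z_rt = z[:, 0], z[:, 1]

# Method 1: unsupervised K-means with two clusters.
kmeans_labels = KMeans(n_clusters=2, random_state=0).fit_predict(z)

# Method 2: rule-based split -- an item that is relatively easier than it is
# time-consuming (rescaled p-value > rescaled mean response time) is low complexity.
rule_low = z_p > z_rt

# Keep only items on which the two methods agree.  K-means labels are arbitrary,
# so first identify the cluster with the higher mean rescaled p-value (the easier,
# low-complexity cluster) and align it with the rule-based labels.
low_cluster = 0 if z_p[kmeans_labels == 0].mean() > z_p[kmeans_labels == 1].mean() else 1
kmeans_low = kmeans_labels == low_cluster
agree = kmeans_low == rule_low
print(f"{(~agree).sum()} discrepant items excluded; {agree.sum()} items retained")
```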
The high-complexity class, on the other hand, was highly heterogeneous, with items whose response times and p-values spanned almost the entire scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Identifying items with high and low response process complexity", "sec_num": "3.1" }, { "text": "We use a set of interpretable linguistic features, many of which were previously used for predicting item difficulty (Ha et al., 2019) and response time in the domain of clinical MCQs. These features were extracted using code made available by Ha et al. (2019) and to these, we add several predictors specifically related to the medical content of the items, as well as standard item metadata.", "cite_spans": [ { "start": 117, "end": 134, "text": "(Ha et al., 2019)", "ref_id": "BIBREF8" }, { "start": 244, "end": 260, "text": "Ha et al. (2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4" }, { "text": "As noted, this study replicates the feature extraction procedure described and made available by Ha et al. (2019) . Approximately 90 linguistic features were extracted from each item's text (the full item including answer options) and are summarized in Table 2 . They span several levels of linguistic processing including surface lexical and syntactic features, semantic features that account for ambiguity, and cognitively motivated features that capture properties such as imageability and familiarity. Common readability formulae are used to account for surface reading difficulty. The organization of ideas in the text is captured through text cohesion features that measure the number and types of connective words within an item. Finally, word frequency features (including threshold frequencies) measure the extent to which items utilize frequent vocabulary.", "cite_spans": [ { "start": 97, "end": 113, "text": "Ha et al. (2019)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 253, "end": 261, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Linguistic features", "sec_num": "4.1" }, { "text": "Combinations of these features have the potential to capture different aspects of item content that are relevant to response complexity. For example, medical terms can be expected to have lower absolute frequencies and familiarity ratings, among other characteristics, and combinations of these features may suggest a higher density of medical terms and specialized language in some items compared to others. Another example is the temporal organization of the information about the patient history and symptoms described in the item and captured by temporal connectives, where it is reasonable to expect that more temporally intricate cases would require higher response process complexity to solve. Similarly, a high number of causal connectives would indicate a higher complexity of causal relationships among the events that led to the patient seeing a doctor, which may also be associated with higher cognitive demands.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic features", "sec_num": "4.1" }, { "text": "This group of features relates to the medical content of the items by mapping terms and phrases in the text to medical concepts contained in the Unified Medical Language System (UMLS) Metathesaurus (Schuyler et al., 1993) using Metamap (Aronson, 2001). The number of UMLS terms that appear in an item may indicate the amount of medical content the item contains (note that a given term found in the items can refer to multiple UMLS concepts). 
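To make these counts concrete, here is a schematic example (not the paper's extraction pipeline) that assumes Metamap output has already been reduced to a list of matched terms with their candidate concept identifiers; the terms and concept IDs shown are placeholders.

```python
# Schematic example of clinical content counts derived from pre-parsed Metamap output.
# Each entry is (matched term, list of candidate UMLS concept IDs); IDs are placeholders.
item_word_count = 120  # hypothetical length of the item in words

metamap_matches = [
    ("fever",       ["C0000001"]),
    ("splenectomy", ["C0000002"]),
    ("cold",        ["C0000003", "C0000004", "C0000005"]),  # an ambiguous term
    ("fever",       ["C0000001"]),                           # repeated occurrence
]

umls_terms_count = len(metamap_matches)                        # every occurrence counts
umls_distinct_terms_count = len({t for t, _ in metamap_matches})
umls_concept_count = sum(len(c) for _, c in metamap_matches)   # total candidate concepts
avg_concepts_per_term = umls_concept_count / umls_terms_count  # medical ambiguity of terms
umls_concept_incidence = 1000 * umls_concept_count / item_word_count
```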
First, we ask: how many of the words and phrases in the items are medical terms? This information is captured by UMLS Terms Count, indicating the number of terms in an item that appear in the UMLS wherein each instance of a given term contributes to the total count, as well as UMLS Distinct Terms Count: the number of terms in an item that appear in the UMLS wherein multiple instances of a given term contribute only once to the total count. The same kinds of counts are done for medical phrases -UMLS Phrases Count refers to the number of phrases in an item. For example, Metamap maps 'ocular complications of myasthenia gravis' to two phrases: the noun phrase 'ocular complications' and the prepositional phrase 'of myasthenia gravis' (Aronson, 2001) .", "cite_spans": [ { "start": 198, "end": 221, "text": "(Schuyler et al., 1993)", "ref_id": "BIBREF21" }, { "start": 1060, "end": 1075, "text": "(Aronson, 2001)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Clinical content features", "sec_num": "4.2" }, { "text": "Next, we introduce features that measure the ambiguity of medical terms within the items. These include Average Number of Competing UMLS Concepts Per Term Count, which captures the average number of UMLS concepts that a term could be referring to, averaged for all terms in an item, and weighted by the number of times Metamap returns the term. A similar version of this feature but without weighting by the number of times Metamap returns the term is Average Number of UMLS Concepts Per Term Count. This metric is then computed at the level of sentences and items, resulting in: Average Number of UMLS Concepts per Sentence, which measures the medical ambigu-ity of sentences and UMLS Concept Count, which measures item medical ambiguity through the total number of UMLS concepts all terms in an item could refer to. Finally, UMLS concept incidence refers to the number of UMLS concepts per 1000 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clinical content features", "sec_num": "4.2" }, { "text": "This group of features refers to metadata describing item content. Presence of an image is a binary categorical variable indicating whether the item includes an image such an X-ray or an MRI that needs to be examined. Another variable is Content category, which describes 18 generic topic categories such as \"Cardiovascular\", \"Gastrointestinal\", \"Behavioral Health\", 'Immune System\", and so on. Another variable, Physician Task describes tasks required by the item, e.g., determine a diagnosis, choose the correct medicine, apply foundational science concepts, and others. Finally, we also include the Year the item was administered as a predictor (2010 -2015) to account for potential changes in response process complexity and examinee samples over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Item Features", "sec_num": "4.3" }, { "text": "This section describes three baseline models (Section 4.5), the training of classifiers using the full feature set (Section 4.6), and the feature selection procedures (Section 4.7). 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification", "sec_num": "4.4" }, { "text": "Three classification baselines were computed to benchmark the predictive benefit given by linguistics features over standard item characteristics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.5" }, { "text": "Majority Class Baseline: Since the lowcomplexity class contains a higher number of items, it is more likely that an item would be correctly predicted as belonging to this class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.5" }, { "text": "Word Count: This baseline examines the possibility that response process complexity is simply a function of item length.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "4.5" }, { "text": "This baseline comprises Word count, Presence of an image, Content category, Physician task and Year. This model reflects the standard item characteristics that most testing organizations would routinely store.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Standard Item Features:", "sec_num": null }, { "text": "After scaling the features, two models were fit using Python's scikit-learn library and the full set of features: a logistic regression model and a random forests one (400 trees). Twenty percent of the data (3,658 items) were used as a test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Full feature models", "sec_num": "4.6" }, { "text": "Feature selection was undertaken to better understand which features were most strongly associated with class differences. The selection process utilized three distinct strategies, where the final set of selected features comprises only those features retained by all three methods. After applying feature selection to the training set, the predictive performance of the selected features is evaluated on the test set and compared to the performance of the full feature set and the baseline models outlined above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "4.7" }, { "text": "Embedded methods: The first method is LASSO regularized regression wherein the coefficients of variables that have low contributions towards the classification performance are shrunk to zero by forcing the sum of the absolute value of the regression coefficients to be less than a fixed value. We use the LassoCV algorithm with 100-fold cross validation and maximum iterations set to 5,000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "4.7" }, { "text": "Wrapper methods: We next apply recursive feature elimination, performed using two different classification algorithms: random forests classifier (400 trees, step = 5) and gradient boosting classifier (Friedman, 2002) (default parameters, step = 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature selection", "sec_num": "4.7" }, { "text": "The final set of selected linguistic features comprised 57 features that were retained by all three strategies. These features and their evaluation are discussed in sections 5 and 7. Table 3 presents the classification results for the baselines, the full feature set, and the selected features for both logistic regression and random forests. 
Results are reported using a weighted F1 score, which is a classification accuracy measure based on the mean between the precision and recall after adjusting for class imbalance.", "cite_spans": [], "ref_spans": [ { "start": 183, "end": 190, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Feature selection", "sec_num": "4.7" }, { "text": "The linguistic and clinical content features improve predictive accuracy above the baselines, yielding a higher F1 score than the strongest baseline (.67 compared to .59). The reduced feature set does not lead to a meaningful performance drop compared to the full feature set, suggesting that no signal was lost due to feature elimination. Figure 2 reports the eight best-performing features: UMLS phrases count, Unique word count, Polysemic word count, Average noun phrase length, Automated readability index, Prepositional phrases, UMLS distinct terms count, and Concreteness ratio.", "cite_spans": [], "ref_spans": [ { "start": 340, "end": 348, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The output of the selected-features prediction model was analyzed further in order to get insight into this model's performance. As could be expected, the majority class of low-complexity items was predicted more accurately than the highcomplexity class, as shown by the confusion matrix in Table 4 . An interesting observation was made during a follow-up classification experiment, which showed that this effect remained when using balanced classes 4 . This shows that the success in predicting this class cannot be attributed solely to its prevalence but potentially also to its high homogeneity compared to the high-complexity class. Next, we plot the model errors across the two classes of low-complexity and high-complexity items, as shown in Figure 3 . Notably, items with average response times below 150 seconds were predicted as low-complexity most of the time, with minimal consideration of their p-value. This shows that what the model effectively learned was to distinguish between items with long and short mean response times, which overpowered its ability to predict the p-value parameter. This finding is consistent with previous work, where response times in were predicted more successfully than p-value using a similar set of linguistic features in Ha et al. (2019) . Finally, analysis of the feature distributions across these four classes revealed no unexpected patterns.", "cite_spans": [ { "start": 1268, "end": 1284, "text": "Ha et al. 
(2019)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 291, "end": 298, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 748, "end": 756, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Error analysis", "sec_num": "6" }, { "text": "The results presented in the previous section lead to three main findings: i) the linguistic characteristics of the items carry signal relevant to response 4 Classes were balanced using the balanced subample setting of the class weight parameter in Scikit-learn's RandomForrestClassifier process complexity; ii) no individual features stand out as strong predictors, and iii) the most important features were those related to syntax and semantics.", "cite_spans": [ { "start": 156, "end": 157, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The first of these findings relates to the fact that the linguistic characteristics of the items carry signal that is predictive of response process complexity, revealing that the problems posed by lowcomplexity and high-complexity items are described using slightly different language. While this signal outperformed several baselines, the overall low predictive utility of the models suggests that there are other factors, yet to be captured, that have a significant effect on response process complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The retention of 56 features indicates that individual linguistic predictors provide a weak classification signal but, taken together, they complement each other in a way that ultimately provides a higher accuracy. The fact that there are many predictive features with none standing out is also a positive evaluation outcome for item writing quality, as it shows that the response process complexity associated with an item is not distributed along a small number of linguistic parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The most important features that helped with classification were those related to syntax and semantics ( Figure 2 ). The poor performance of the Word Count baseline suggests that differences in response process complexity cannot be explained solely by item length and that more complex linguistic features capture some of the nuance in the response process. As can be seen in Figure 2 , high-complexity items contain a slightly higher number of UMLS phrases and (distinct) medical terms, as well as a higher number of unique words. These features suggest high-complexity items re- peat words less frequently and may contain a higher concentration of new information and specialized terminology than low-complexity items. The individual phrases in high-complexity items are also slightly longer, which naturally influences readability metrics that are based on word and sentence length, such as the Automated Readability Index (higher values are indicative of a more complex text). Prepositional phrases were also identified as more important than other phrase types in distinguishing between response process complexity. Prepositional phrases often serve as modifiers of the primary noun phrase and the higher number of prepositional phrases in the high-complexity items suggests the use of more specific descriptions (e.g., \"small cell carcinoma of the ovary\" instead of just \"small cell carcinoma\"). 
The words contained in the high-complexity items also have slightly higher concreteness levels, providing another indication that they may contain more terms, as terms tend to be more concrete than common words. Finally, the words contained in the high-complexity items also tend to have more possible meanings, as indicated by the polysemous word count variable, which results in higher complexity owing to disambiguation efforts. Overall, these features indicate that the language used in the low-complexity items is less ambiguous and descriptive, and potentially contains fewer medical terms.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 113, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 376, "end": 384, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "One limitation of the study is the fact that it treats item difficulty and time intensiveness as independent variables. This may not always be the case, as examinees do employ strategies to optimize their time. Given finite time limits, examinees may ig-nore time intensive items if they believe the time needed for such items can be better utilized attempting other, less time intensive items. Therefore, the relationship between difficulty and response time and their association with item text would differ for exams that do not impose strict time limits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "When using data-driven approaches to defining item classes, our data did not lend itself to a categorization that would allow investigating high difficulty/low response time items and vice-versa. While the approach taken in this paper has a higher ecological validity, studying such cases in the future may lead to a greater understanding of various aspects of response process complexity and their relationship to item text. Other future work includes exploration of potential item position effects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "The experiments presented in this paper are, to the best of our knowledge, the first investigation of the relationship between item text and response process complexity. The results showed that such a relationship exists. To the extent that items were written as clearly and as concisely as possible, the findings suggest that high-complexity medical items generally include longer phrases, more medical terms, and more specific descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "While the models outperformed several baselines, they required a large number of features to do so and the predictive utility remained low. Ultimately, this shows the challenging nature of modeling response process complexity using interpretable models and the lack of a straightforward way to manipulate this item property.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "We also experimented with hierarchical clustering, which led to similar results. 
The hierarchical clustering dendrogram suggested that there are meaningful distances between two clusters in the data, and much smaller distances between a higher number of more fine-grained clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A similarity-based theory of controlling mcq difficulty", "authors": [ { "first": "Tahani", "middle": [], "last": "Alsubait", "suffix": "" }, { "first": "Bijan", "middle": [], "last": "Parsia", "suffix": "" }, { "first": "Ulrike", "middle": [], "last": "Sattler", "suffix": "" } ], "year": 2013, "venue": "e-Learning and e-Technologies in Education (ICEEE), 2013 Second International Conference on", "volume": "", "issue": "", "pages": "283--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tahani Alsubait, Bijan Parsia, and Ulrike Sattler. 2013. A similarity-based theory of controlling mcq diffi- culty. In e-Learning and e-Technologies in Edu- cation (ICEEE), 2013 Second International Confer- ence on, pages 283-288. IEEE.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Effective mapping of biomedical text to the umls metathesaurus: the metamap program", "authors": [ { "first": "", "middle": [], "last": "Alan R Aronson", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the AMIA Symposium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan R Aronson. 2001. Effective mapping of biomed- ical text to the umls metathesaurus: the metamap program. In Proceedings of the AMIA Symposium, page 17. American Medical Informatics Associa- tion.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Using natural language processing to predict item response times and improve test construction", "authors": [ { "first": "Peter", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Yaneva", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Mee", "suffix": "" }, { "first": "Brian", "middle": [ "E" ], "last": "Clauser", "suffix": "" }, { "first": "Le", "middle": [ "An" ], "last": "Ha", "suffix": "" } ], "year": 2020, "venue": "Journal of Educational Measurement", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Baldwin, Victoria Yaneva, Janet Mee, Brian E Clauser, and Le An Ha. 2020. Using natural lan- guage processing to predict item response times and improve test construction. Journal of Educational Measurement.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Candidate evaluation strategies for improved difficulty prediction of language tests", "authors": [ { "first": "Lisa", "middle": [], "last": "Beinborn", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2015. Candidate evaluation strategies for improved difficulty prediction of language tests. 
In Proceed- ings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-11.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The mrc psycholinguistic database", "authors": [], "year": 1981, "venue": "The Quarterly Journal of Experimental Psychology Section A", "volume": "33", "issue": "4", "pages": "497--505", "other_ids": { "DOI": [ "10.1080/14640748108400805" ] }, "num": null, "urls": [], "raw_text": "Max Coltheart. 1981. The mrc psycholinguistic database. The Quarterly Journal of Experimental Psychology Section A, 33(4):497-505.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Principles of Readability", "authors": [ { "first": "H", "middle": [], "last": "William", "suffix": "" }, { "first": "", "middle": [], "last": "Dubay", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William H. Dubay. 2004. The Principles of Readability. Impact Information.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Stochastic gradient boosting", "authors": [ { "first": "H", "middle": [], "last": "Jerome", "suffix": "" }, { "first": "", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2002, "venue": "Computational statistics & data analysis", "volume": "38", "issue": "", "pages": "367--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerome H Friedman. 2002. Stochastic gradient boost- ing. Computational statistics & data analysis, 38(4):367-378.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic distractor suggestion for multiple-choice tests using concept embeddings and information retrieval", "authors": [ { "first": "An", "middle": [], "last": "Le", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Ha", "suffix": "" }, { "first": "", "middle": [], "last": "Yaneva", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "389--398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le An Ha and Victoria Yaneva. 2018. Automatic distractor suggestion for multiple-choice tests using concept embeddings and information retrieval. In Proceedings of the Thirteenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions, pages 389-398.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Predicting the difficulty of multiple choice questions in a high-stakes medical exam", "authors": [ { "first": "Le An", "middle": [], "last": "Ha", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Yaneva", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Balwin", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Mee", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Le An Ha, Victoria Yaneva, Peter Balwin, and Janet Mee. 2019. Predicting the difficulty of multiple choice questions in a high-stakes medical exam. 
In Proceedings of the Fourteenth Workshop on Innova- tive Use of NLP for Building Educational Applica- tions.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Estimating testing time: The effects of item characteristics on response latency", "authors": [ { "first": "N", "middle": [], "last": "Perry", "suffix": "" }, { "first": "", "middle": [], "last": "Halkitis", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Perry N Halkitis et al. 1996. Estimating testing time: The effects of item characteristics on response la- tency.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Question difficulty prediction for reading problems in standard tests", "authors": [ { "first": "Zhenya", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Enhong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hongke", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mingyong", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Su", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "1352--1359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenya Huang, Qi Liu, Enhong Chen, Hongke Zhao, Mingyong Gao, Si Wei, Yu Su, and Guoping Hu. 2017. Question difficulty prediction for reading problems in standard tests. In AAAI, pages 1352- 1359.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generation and mining of medical, case-based multiple choice questions", "authors": [ { "first": "Ghader", "middle": [], "last": "Kurdi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ghader Kurdi. 2020. Generation and mining of med- ical, case-based multiple choice questions. Ph.D. thesis, PhD thesis, University of Manchester.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A systematic review of automatic question generation for educational purposes", "authors": [ { "first": "Ghader", "middle": [], "last": "Kurdi", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Leo", "suffix": "" }, { "first": "Bijan", "middle": [], "last": "Parsia", "suffix": "" }, { "first": "Uli", "middle": [], "last": "Sattler", "suffix": "" }, { "first": "Salam", "middle": [], "last": "Al-Emari", "suffix": "" } ], "year": 2020, "venue": "International Journal of Artificial Intelligence in Education", "volume": "30", "issue": "1", "pages": "121--204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ghader Kurdi, Jared Leo, Bijan Parsia, Uli Sattler, and Salam Al-Emari. 2020. A systematic review of auto- matic question generation for educational purposes. International Journal of Artificial Intelligence in Ed- ucation, 30(1):121-204.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Word frequencies in written and spoken English: Based on the British National Corpus", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Leech", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Rayson", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Leech, Paul Rayson, et al. 2014. 
Word fre- quencies in written and spoken English: Based on the British National Corpus. Routledge.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Textual complexity as a predictor of difficulty of listening items in language proficiency tests", "authors": [ { "first": "Anastassia", "middle": [], "last": "Loukina", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Su-Youn Yoon", "suffix": "" }, { "first": "Youhua", "middle": [], "last": "Sakano", "suffix": "" }, { "first": "Kathy", "middle": [], "last": "Wei", "suffix": "" }, { "first": "", "middle": [], "last": "Sheehan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "3245--3253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anastassia Loukina, Su-Youn Yoon, Jennifer Sakano, Youhua Wei, and Kathy Sheehan. 2016. Textual complexity as a predictor of difficulty of listening items in language proficiency tests. In Proceed- ings of COLING 2016, the 26th International Con- ference on Computational Linguistics: Technical Pa- pers, pages 3245-3253.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Manning", "suffix": "" }, { "first": "John", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Jenny", "middle": [ "Rose" ], "last": "Bauer", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "David", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "ACL (System Demonstrations)", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural lan- guage processing toolkit. In ACL (System Demon- strations), pages 55-60.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Language based mapping of science assessment items to skills", "authors": [ { "first": "Farah", "middle": [], "last": "Nadeem", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "319--326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farah Nadeem and Mari Ostendorf. 2017. Language based mapping of science assessment items to skills. 
In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 319-326.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Question difficulty-how to estimate without norming, how to use for automated grading", "authors": [ { "first": "Ulrike", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ulrike Pad\u00f3. 2017. Question difficulty-how to esti- mate without norming, how to use for automated grading. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Ap- plications, pages 1-10.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Response latency: An investigation into determinants of item-level timing", "authors": [ { "first": "G", "middle": [], "last": "Cynthia", "suffix": "" }, { "first": "", "middle": [], "last": "Parshall", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cynthia G Parshall et al. 1994. Response latency: An investigation into determinants of item-level timing.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Scikit-learn: Machine learning in python. the", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Bertrand", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Dubourg", "suffix": "" } ], "year": 2011, "venue": "Journal of machine Learning research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825-2830.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The umls metathesaurus: representing different views of biomedical concepts", "authors": [ { "first": "L", "middle": [], "last": "Peri", "suffix": "" }, { "first": "", "middle": [], "last": "Schuyler", "suffix": "" }, { "first": "T", "middle": [], "last": "William", "suffix": "" }, { "first": "", "middle": [], "last": "Hole", "suffix": "" }, { "first": "S", "middle": [], "last": "Mark", "suffix": "" }, { "first": "David", "middle": [ "D" ], "last": "Tuttle", "suffix": "" }, { "first": "", "middle": [], "last": "Sherertz", "suffix": "" } ], "year": 1993, "venue": "Bulletin of the Medical Library Association", "volume": "81", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peri L Schuyler, William T Hole, Mark S Tuttle, and David D Sherertz. 1993. The umls metathe- saurus: representing different views of biomedical concepts. 
Bulletin of the Medical Library Associa- tion, 81(2):217.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "An exploratory analysis of item parameters and characteristics that influence item level response time", "authors": [ { "first": "Russell Winsor", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Russell Winsor Smith. 2000. An exploratory analysis of item parameters and characteristics that influence item level response time.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Relationships among item characteristics, examine characteristics, and response times on usmle step 1", "authors": [ { "first": "Susan", "middle": [ "M" ], "last": "David B Swanson", "suffix": "" }, { "first": "", "middle": [], "last": "Case", "suffix": "" }, { "first": "Brian", "middle": [ "E" ], "last": "Douglas R Ripkey", "suffix": "" }, { "first": "Matthew C", "middle": [], "last": "Clauser", "suffix": "" }, { "first": "", "middle": [], "last": "Holtman", "suffix": "" } ], "year": 2001, "venue": "Academic Medicine", "volume": "76", "issue": "10", "pages": "114--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "David B Swanson, Susan M Case, Douglas R Ripkey, Brian E Clauser, and Matthew C Holtman. 2001. Relationships among item characteristics, examine characteristics, and response times on usmle step 1. Academic Medicine, 76(10):S114-S116.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Constrained k-means clustering with background knowledge", "authors": [ { "first": "Kiri", "middle": [], "last": "Wagstaff", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schr\u00f6dl", "suffix": "" } ], "year": 2001, "venue": "Icml", "volume": "1", "issue": "", "pages": "577--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kiri Wagstaff, Claire Cardie, Seth Rogers, Stefan Schr\u00f6dl, et al. 2001. Constrained k-means cluster- ing with background knowledge. In Icml, volume 1, pages 577-584.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Predicting the difficulty and response time of multiple choice questions using transfer learning", "authors": [ { "first": "Victoria", "middle": [], "last": "Kang Xue", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Yaneva", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Runyon", "suffix": "" }, { "first": "", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "193--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kang Xue, Victoria Yaneva, Christopher Runyon, and Peter Baldwin. 2020. Predicting the difficulty and response time of multiple choice questions using transfer learning. 
In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Ed- ucational Applications, pages 193-197.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Predicting item survival for multiple choice questions in a high-stakes medical exam", "authors": [ { "first": "Victoria", "middle": [], "last": "Yaneva", "suffix": "" }, { "first": "Le", "middle": [ "An" ], "last": "Ha", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Janet", "middle": [], "last": "Mee", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "6812--6818", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victoria Yaneva, Le An Ha, Peter Baldwin, and Janet Mee. 2020. Predicting item survival for multiple choice questions in a high-stakes medical exam. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6812-6818.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Class assignment by p-value and response time for each item. Note that discrepant items were excluded, as illustrated by the gap between the two class distributions.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "Distributions and median values for the top eight features by group.", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "Error distribution for the two classes", "num": null, "uris": null }, "TABREF0": { "content": "
Group | N | Summary of features | Resources
Lexical | 5 | Word Count, Content word count, Content word count without stopwords, Average word length in syllables, Complex word count |
Syntactic | 29 | POS count, Phrase count (for each POS), Type count, Comma count, Average phrase length, Negation, Type-token ratio, Average sentence length, Average depth of tree, Clause count (relative, conditional), Average number of words before the main verb, Passive-active ratio, Proportion active VPs, Proportion passive VPs, Agentless passive count | StanfordNLP Parser (Manning et al., 2014)
Semantic | 11 | Polysemic word count, Average senses for: content words, nouns, verbs, adjectives, auxiliary verbs, adverbs; Average noun/verb distance to WordNet root, Average noun-and-verb distance to WordNet root, Answer words in WordNet ratio | WordNet (Miller, 1995)
Readability | 7 | Flesch Reading Ease, Flesch-Kincaid grade level, Automated Readability Index, Gunning Fog, Coleman Liau, SMOG, SMOG Index | See Dubay (2004) for definitions
Cognitive | 14 | Absolute values, ratios, and ratings for Concreteness, Imageability, Familiarity, Age of acquisition, Meaningfulness (Colorado norms), Meaningfulness (Paivio norms) | MRC Psycholinguistic Database (Coltheart, 1981)
Frequency | 10 | Average frequency (relative, absolute and rank) for all words and for content words; Threshold frequencies for words not in the first 2,000/3,000/4,000/5,000 most common words | British National Corpus (Leech et al., 2014)
Cohesion | 5 | Counts of Temporal, Causal, Additive connectives and All connectives; Referential pronoun count |
", "type_str": "table", "text": "Phrase count (for each POS), Type count, Comma count, Average phrase length, Negation, Type-token ratio, Average sentence length, Average depth of tree, Clause count (relative, conditional), Average number of words before the main verb, Passive-active ratio, Proportion active VPs, Proportion passive VPs, Agentless passive count", "num": null, "html": null }, "TABREF1": { "content": "", "type_str": "table", "text": "", "num": null, "html": null }, "TABREF3": { "content": "
", "type_str": "table", "text": "Weighted F1 scores for different models on the test set", "num": null, "html": null }, "TABREF5": { "content": "
", "type_str": "table", "text": "Confusion matrix for the results from the selected features model using random forests (F1 = 0.66)", "num": null, "html": null } } } }