{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:23.444387Z" }, "title": "Context-based Automated Scoring of Complex Mathematical Responses", "authors": [ { "first": "Aoife", "middle": [], "last": "Cahill", "suffix": "", "affiliation": { "laboratory": "", "institution": "ETS", "location": { "postCode": "08541", "settlement": "Princeton", "region": "NJ", "country": "USA" } }, "email": "acahill@ets.org" }, { "first": "James", "middle": [ "H" ], "last": "Fife", "suffix": "", "affiliation": { "laboratory": "", "institution": "ETS", "location": { "postCode": "08541", "settlement": "Princeton", "region": "NJ", "country": "USA" } }, "email": "" }, { "first": "Brian", "middle": [], "last": "Riordan", "suffix": "", "affiliation": { "laboratory": "", "institution": "ETS", "location": { "postCode": "08541", "settlement": "Princeton", "region": "NJ", "country": "USA" } }, "email": "" }, { "first": "Avijit", "middle": [], "last": "Vajpayee", "suffix": "", "affiliation": { "laboratory": "", "institution": "ETS", "location": { "postCode": "08541", "settlement": "Princeton", "region": "NJ", "country": "USA" } }, "email": "" }, { "first": "Dmytro", "middle": [], "last": "Galochkin", "suffix": "", "affiliation": { "laboratory": "", "institution": "ETS", "location": { "postCode": "08541", "settlement": "Princeton", "region": "NJ", "country": "USA" } }, "email": "dgalochkin@ets.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The tasks of automatically scoring either textual or algebraic responses to mathematical questions have both been well-studied, albeit separately. In this paper we propose a method for automatically scoring responses that contain both text and algebraic expressions. Our method not only achieves high agreement with human raters, but also links explicitly to the scoring rubric-essentially providing explainable models and a way to potentially provide feedback to students in the future.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The tasks of automatically scoring either textual or algebraic responses to mathematical questions have both been well-studied, albeit separately. In this paper we propose a method for automatically scoring responses that contain both text and algebraic expressions. Our method not only achieves high agreement with human raters, but also links explicitly to the scoring rubric-essentially providing explainable models and a way to potentially provide feedback to students in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper we present work on automatically scoring student responses to constructed-response mathematics items where the response should contain both text and mathematical equations or expressions. Existing work on automated scoring of mathematics items has largely focused on items where either only text is required (c.f. related work on automated short-answer-scoring (Galhardi and Brancher, 2018; Burrows et al., 2015) ) or only an expression or equation is required (Drijvers, 2018; Fife, 2017; Sangwin, 2004) . 
This is the first work, to our knowledge, that attempts to automatically score responses that contain both.", "cite_spans": [ { "start": 375, "end": 404, "text": "(Galhardi and Brancher, 2018;", "ref_id": "BIBREF6" }, { "start": 405, "end": 426, "text": "Burrows et al., 2015)", "ref_id": "BIBREF1" }, { "start": 475, "end": 491, "text": "(Drijvers, 2018;", "ref_id": "BIBREF3" }, { "start": 492, "end": 503, "text": "Fife, 2017;", "ref_id": "BIBREF5" }, { "start": 504, "end": 518, "text": "Sangwin, 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Items that elicit such responses could be algebra, trigonometry, or calculus items that ask the student to solve a problem and/or provide an argument. Items at levels much below algebra most likely would not require the student to include an equation -at least one that requires an equation editor for proper entry -in the text, and items at a higher level might require the student to include abstract mathematical expressions that would themselves present automated scoring difficulties. These kinds of items are quite common on paper-and-pencil algebra exams. However, they are less common on computer-delivered exams, primarily because the technology of calling up an equation editor to insert equations in text is new and not generally used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The challenge with automatically scoring these kinds of responses, in a construct-valid way, is that the system needs to be able to interpret the correctness of the equations and expressions in the context of the surrounding text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is not just to achieve accurate scoring but to also have explainable models. Explainable models have a number of advantages including (i) giving users evidence that the models are fair and unbiased; (ii) the ability to leverage the models for feedback; and (iii) compliance with new laws, e.g. the General Data Protection Regulation (EU) 2016/679 (GDPR) which requires transparency and accountability of any form of automated processing of personal data. In this paper we present an approach that not only achieves high agreement with human raters, but also links explicitly to the scoring rubric -essentially providing explainable models and a way to potentially provide feedback to students in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we use data from 3 pilot-study items that elicited responses containing both textual explanations as well as equations and expressions. An example item is given in Figure 1 , and a sample response (awarded 2 points on a 0-3 point scale) is given in Figure 2 . 1 The pilot was administered as part of a larger project in four high schools located in various regions of the United States. The items assumed one year of algebra and involved writing solutions to algebra problems, similar to what a student would be expected to write on a paper-based classroom test. 
Responses were collected digitally;", "cite_spans": [ { "start": 274, "end": 275, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 178, "end": 186, "text": "Figure 1", "ref_id": null }, { "start": 263, "end": 271, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Explain, using words and equations, how you would use the quadratic formula to find two values of x for which 195 = \u22122x 2 + 40x. You may also use the on-screen calculator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Figure 1: Sample item that elicits textual explanations as well as equations and mathematics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "x = \u221240+ \u221a 40 2 \u22124(\u22122)(\u2212195) 2(\u22122)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "To solve this you must first put your equation in standard form, which gives you y=-2x+40x-195. You then plug your a, b, and c values into the quadratic formula. To start finding your x value, you must first multiply all your values in parentheses. You must then simplify the square root you get from multiplying. With your new equation, you make two more equations, one adding your simplified square root and one subtracting it. The two answers you get from those equations are your two values of x. Figure 1 (2point response). The student has put the equation into standard form with a slight error. \u22122x 2 has become \u22122x; the student was not using the equation editor and could not type the exponent. The student does not explicitly give the values of a, b, and c, but correctly substitutes these values into the formula, so we may assume that the student has determined these values correctly. We may also assume that the student has corrected the missing exponent in the standard form. The student talks about \"two answers\" but only gives one root, however, so this response is worth 2 points. students used an interface that included an optional equation editor. The responses were captured as text, with the equations captured as MathML enclosed in tags. Two of the items involved quadratic functions, requiring the student to use the equation editor to properly format equations in their responses. Nonetheless, many students did not use the equation editor consistently. In fact only 60% of all students used the equation editor. Of all equations entered by the students, only 34% were entered via the equation editor since most of the students preferred to write simple equations as regular text. 2 There were over 1,000 responses collected for each item, however some responses were blank and therefore not included in this study. Table 1 gives some descriptive statistics for the final data used in this study. Items 2 and 3 were somewhat difficult for this pilot student population, with 71% and 78% of students receiving a score of 0 for those items. 
All responses were scored by two trained raters; the quadratic-weighted kappa values for the human-human agreement on the three items ranged from 0.91 to 0.95, indicating that humans were able to agree very well on the assignment of scores.", "cite_spans": [], "ref_spans": [ { "start": 501, "end": 509, "text": "Figure 1", "ref_id": null }, { "start": 1848, "end": 1855, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "3 Methods", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "We use m-rater, an automated scoring engine developed by Educational Testing Service (Fife, 2013 (Fife, , 2017 to automatically score the equations and mathematical expressions in our data. M-rater uses SymPy 3 , an open-source computer algebra system, to determine if the student's response is mathematically equivalent to the intended response. M-rater can process standard mathematical format, with exponents, radical signs, fractions, and so forth. M-rater is a deterministic system, and as such has 100% accuracy, given well-formed input. If, as in this study, the responses consist of a mixture of text and equations or mathematical expressions, m-rater can evaluate the correctness (or partial correctness) of the equations and expressions, but it cannot evaluate text.", "cite_spans": [ { "start": 85, "end": 96, "text": "(Fife, 2013", "ref_id": "BIBREF4" }, { "start": 97, "end": 110, "text": "(Fife, , 2017", "ref_id": "BIBREF5" }, { "start": 209, "end": 210, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Automatically scoring equations and expressions", "sec_num": "3.1" }, { "text": "While the students had access to an equation editor as part of the delivery platform, many did not use it consistently. This means that we cannot rely on the MathML encoding to identify all of the equations and mathematical expressions in the text. For example, a student may have wanted to enter the equation: 2x 2 \u2212 40x + 195 = 0. They may use the equation editor to enter the entire equation, or some of it (e.g. the piece after the = sign, or after the exponent expression), or none of it. This leads to construct-irrelevant variations in representations. Therefore, we develop a regular-expression based system for automatically identifying equations and expressions in responses where all data from the equation editor has been rendered as plain text. 
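To make the symbolic step concrete, the sketch below shows the kind of equivalence check that SymPy enables once an equation string has been isolated from the surrounding text. The helper names, the parsing transformations, and the tolerance for an overall constant factor are illustrative assumptions for this sketch, not m-rater's actual implementation.

```python
import sympy
from sympy.parsing.sympy_parser import (
    parse_expr, standard_transformations,
    implicit_multiplication_application, convert_xor,
)

# Allow "2x^2"-style input: implicit multiplication and ^ as exponentiation.
TRANSFORMS = standard_transformations + (implicit_multiplication_application, convert_xor)

def _one_sided(equation: str):
    """Rewrite 'lhs = rhs' as a single expression that should equal zero."""
    lhs, rhs = equation.split("=")
    return (parse_expr(lhs, transformations=TRANSFORMS)
            - parse_expr(rhs, transformations=TRANSFORMS))

def is_equivalent(student_eq: str, target_eq: str) -> bool:
    """True if the two equations describe the same solution set (up to a nonzero constant factor)."""
    ratio = sympy.simplify(_one_sided(student_eq) / _one_sided(target_eq))
    return ratio.is_constant() and ratio != 0

# The standard-form concept from the Item 2 rubric, in either sign convention.
print(is_equivalent("2x^2 - 40x + 195 = 0", "-2x^2 + 40x - 195 = 0"))  # True
print(is_equivalent("y = -2x + 40x - 195", "2x^2 - 40x + 195 = 0"))    # False
```

The identification step that feeds such a check is described next.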
Our processing includes the following assumptions which are appropriate for our dataset:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically identifying equations and expressions in text", "sec_num": "3.2" }, { "text": "\u2022 Variables can only consist of single letters;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically identifying equations and expressions in text", "sec_num": "3.2" }, { "text": "\u2022 We only detect simple functions (square root, absolute and very basic trigonometric functions);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically identifying equations and expressions in text", "sec_num": "3.2" }, { "text": "\u2022 Equations containing line breaks are treated as two different equations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically identifying equations and expressions in text", "sec_num": "3.2" }, { "text": "We processed all responses to the three pilot items with this script and all identified equations and expressions were manually checked by a content expert. In almost all cases, the system correctly identified the equations or expressions. There were 9 incorrectly identified equations in total (out of 2,672). Mis-identifications were usually due to incorrect spacing in the equation -either too much space between characters in the equation or no space between the equation and subsequent text. A few students used the letter x to denote multiplication, which was read by the system as the variable x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically identifying equations and expressions in text", "sec_num": "3.2" }, { "text": "It is possible to convert the m-rater evaluations of the individual equations and expressions contained in a response into features. This is done by automatically extracting the equations and expressions and using m-rater to match each one to an element in the scoring rubric (also called concepts). These features encode a binary indicator of whether a particular concept is present or not in a response. Note that some concepts represent known or expected misconceptions in student responses. For example, the set of six binary features instantiated for each response to Item 2 are as follows: (i) has the equation been correctly transformed into standard form (rubric element 1); (ii) did the student answer a=2 (rubric element 2); (iii) did the student answer b=40 (rubric element 2); (iv) did the student answer c=195 (rubric element 2); (v) did the student find solution 1 (rubric element 3); (vi) did the student find solution 2 (rubric element 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically identifying equations and expressions in text", "sec_num": "3.2" }, { "text": "We use 4 approaches for automatically scoring short texts with mathematical expressions. , 2018) . The output of the encoder is aggregated in a fully-connected feedforward layer with sigmoid activation to predict the score of the response. This architecture has achieved state-of-the-art performance on the ASAP-SAS benchmark dataset (Riordan et al., 2019). 
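For concreteness, a minimal sketch of this kind of recurrent scorer is given below, assuming a PyTorch implementation (the paper does not name the framework); the mean-pooling aggregation, the class name, and the layer sizes are illustrative stand-ins rather than the exact configuration.

```python
import torch
import torch.nn as nn

class WordRNNScorer(nn.Module):
    """Illustrative GRU encoder with a feedforward sigmoid head (not the exact system)."""

    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden_dim: int = 250):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, num_layers=1, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        states, _ = self.encoder(embedded)    # (batch, seq_len, hidden_dim)
        pooled = states.mean(dim=1)           # stand-in for the actual aggregation
        return self.head(pooled).squeeze(-1)  # score scaled to [0, 1]

# Training would minimize mean squared error against scores scaled to [0, 1],
# then map predictions back to the original score range for evaluation.
```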
Additional information about steps to replicate the system can be found in the Appendix.", "cite_spans": [ { "start": 89, "end": 96, "text": ", 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Automatically scoring short texts for correctness", "sec_num": "3.3" }, { "text": "We conduct a set of experiments to answer the following research questions: we use all 4 systems as described in Section 3.3. Subsequently, we perform 3 experiments where all expressions and equations (as identified by m-rater) are converted to pre-defined tokens with increasing degree of explainability:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Exp 1 All equations and expressions automatically identified and converted to a single token (@expression@)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Exp 2 All equations and expressions automatically identified and converted to one of @correct@ or @incorrect@. The correctness of an equation is determined automatically by matching against the scoring rubric using m-rater (see Section 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Exp 3 All equations and expressions automatically identified and converted to one of @correct N@ or @incorrect@, where N indicates the set of concept numbers from the scoring rubric and is automatically identified using m-rater.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For each pair of system and response variant, we conduct a 10-fold nested cross validation experiment. We split our data into 80% train, 10% dev and 10% test. For each fold, we train on the train+dev portions and make predictions on the held-out test portion, having tuned the hyperparameters on the dev set. There are no overlapping test folds. For evaluation, we pool predictions on test sets from all folds and compute agreement statistics between the rater 1 score and the machine predictions. Table 2 gives the results of all models used for the baseline experiment where all responses are converted to plain text. Even without pre-processing the mathematical expressions, textual context is very important, as we see by the poor performance of the Linear Regression model on purely mathematical features (LinReg m ). It can also be seen that character level features, while partially capturing mathematical expressions, do not perform as well as the SVR model with explicit math features (comparing SV R csw to SV R msw ). The difference, however, is not statistically significant for any item (details given in Appendix A.3). Another interesting result is that the RNN model without character level OR explicit math information performs well, being a close second to the SVR msw model and the differences between them are not statistically significant. Table 3 gives the results for the explainability experiments i.e. Exp 1 to 3 where mathematical expressions and equations were pre-identified and replaced in the response text. Comparing these with the results for the experiment on the original text responses (Table 2) , it can be seen that the replacement that includes the mappings to rubric concepts (Exp 3) not only increases explainability but is also competitive in performance to models with explicit math features but no expression replacement (outperforming them on Item 1). 
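All agreement figures in Tables 2 and 3 are quadratically-weighted kappa values computed over the pooled test-fold predictions; assuming scikit-learn (which the paper does not name), the computation amounts to the following sketch with hypothetical score lists.

```python
from sklearn.metrics import cohen_kappa_score

def pooled_qwk(human_scores, machine_scores):
    """Quadratically-weighted kappa over predictions pooled across all test folds."""
    return cohen_kappa_score(human_scores, machine_scores, weights="quadratic")

# Hypothetical pooled scores on the items' 0-3 scale.
print(pooled_qwk([0, 2, 3, 1, 0, 3], [0, 2, 2, 1, 0, 3]))
```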
Models SVR csw and WordRNN are not significantly different on any item for any of the 3 explainability experiments (Exp 1 to 3) .", "cite_spans": [], "ref_spans": [ { "start": 498, "end": 505, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1360, "end": 1367, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1620, "end": 1629, "text": "(Table 2)", "ref_id": "TABREF3" }, { "start": 2010, "end": 2022, "text": "(Exp 1 to 3)", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Coming back to our original research questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "1. How important is textual context for responses involving mathematical expressions with respect to automated scoring? Context is important for automatically scoring responses that integrate text and algebraic information. Evaluating the mathematical expressions alone does not perform well (Exp 0). Additionally, Exp 1 has no context for the mathematical expressions, and we see lower results for the system that still includes mathematical information as independent features, but out of context (SV R msw ), compared to systems that encode the mathematical information in some way in context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Character level features certainly do capture a large portion of mathematical expressions. We see that in the Exp 0 results, where there is no interpretation of the mathematical expressions, that systems perform almost as well as the systems that do explicit interpretation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do character level features capture mathematical expressions?", "sec_num": "2." }, { "text": "3. Can explainability be included in scoring models without severely compromising accuracy?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do character level features capture mathematical expressions?", "sec_num": "2." }, { "text": "Yes, we can include model interpretability without compromising scoring accuracy. The differences between the best models from Exp 0 and Exp3 ranged from -0.004 to +0.041). By explicitly linking aspects of the rubric to each response, we yield interpretable models that perform comparably to systems without this interpretative layer. Although the overall results are lower, they are not statistically significantly lower.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do character level features capture mathematical expressions?", "sec_num": "2." }, { "text": "To summarize, this work presented a hybrid scoring model using a deterministic system for evaluating the correctness (or partial correctness) of mathematical equations, in combination with text-based automated scoring systems for evaluating the appropriateness of the textual explanation of a response.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We contribute the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "1. Systems that produce extremely high agreement between an automated system and human raters for the task of automatically scoring items that elicit both textual and algebraic components 2. 
A method for linking rubric information to the automated scoring system, resulting in an more interpretable model than one based purely on the raw response to mean both solutions. Or students may write that the two values of x are x = 11.5811. . . and x = 8.4188. . . , correct to at least one decimal place, provided they arrive at these numbers through the quadratic formula and not by solving the equation numerically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 Max 2/3 for finding one correct solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 Max 2/3 for writing the two correct solutions with no explanation of where the values of a, b, and c come from.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "\u2022 1/3 if the student provides an outline of the solution without actually carrying out any of the steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The text is preprocessed with the spaCy tokenizer with some minor postprocessing to correct tokenization mistakes on noisy data. On conversion to tensors, responses are padded to the same length in a batch; these padding tokens are masked out during model training. Prior to training, responses are scaled to [0, 1] to form the input to the networks. The scaled scores are converted back to their original range for evaluation. Word tokens are embedded with GloVe 100 dimension vectors and fine-tuned during training. Word tokens not in the embeddings vocabulary are each assigned a unique randomly initialized vector. The GRUs were 1 layer with a hidden state of size 250. The network was trained with mean squared error loss. We optimized the network with RMSProp with hyperparameters set as follows: learning rate of 0.001, batch size of 32, and gradient clipping set to 10.0. An exponential moving average of the model's weights is used during training (Adhikari et al., 2019) .", "cite_spans": [ { "start": 957, "end": 980, "text": "(Adhikari et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Additional information for training the RNN model", "sec_num": null }, { "text": "Although nested cross-validation gives a fairly unbiased estimate of true error as shown by Varma and Simon (2006) , we performed statistical significance testing to pair-wise compare 4 models for Exp 0: no expression replacement and 2 models for Exp 3: expressions replaced with incorrect/correct along with concept numbers. Friedman's test as suggested by Dem\u0161ar (2006) is run to compare 6 models (corresponding to treatments) across multiple repeated measures (10 folds) for each item individually. Note that such a setup of comparing multiple models across 10 folds on a dataset has to be regarded as non-independent data as even though the test folds will be distinct, the training data for each fold may partially overlap. Hence Friedman's test is appropriate here to", "cite_spans": [ { "start": 92, "end": 114, "text": "Varma and Simon (2006)", "ref_id": null }, { "start": 358, "end": 371, "text": "Dem\u0161ar (2006)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "A.3 Additional details on significance testing of results", "sec_num": null }, { "text": "This item corresponds to Item 2 in our dataset. 
The scoring rubric is given in Appendix A.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This presents obvious challenges for automatically scoring the mathematical components of the responses, since the first step is to even identify them (see Section 3.2 for how we address this).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.sympy.org/en/index.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers for their valuable comments and suggestions. We would also like to thank Michael Flor, Swapna Somasundaran and Beata Beigman-Klebanov for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": " (Nemenyi, 1963) . Note that this testing is per-item and we report the fraction of times the differences were significant in table 4.", "cite_spans": [ { "start": 1, "end": 16, "text": "(Nemenyi, 1963)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Rethinking Complex Neural Network Architectures for Document Classification", "authors": [ { "first": "Ashutosh", "middle": [], "last": "Adhikari", "suffix": "" }, { "first": "Achyudh", "middle": [], "last": "Ram", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4046--4051", "other_ids": { "DOI": [ "10.18653/v1/N19-1408" ] }, "num": null, "urls": [], "raw_text": "Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Rethinking Complex Neural Net- work Architectures for Document Classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4046-4051, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The eras and trends of automatic short answer grading", "authors": [ { "first": "Steven", "middle": [], "last": "Burrows", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2015, "venue": "International Journal of Artificial Intelligence in Education", "volume": "25", "issue": "1", "pages": "60--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Burrows, Iryna Gurevych, and Benno Stein. 2015. The eras and trends of automatic short answer grading. International Journal of Artificial Intelli- gence in Education, 25(1):60-117.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical comparisons of classifiers over multiple data sets", "authors": [ { "first": "Janez", "middle": [], "last": "Dem\u0161ar", "suffix": "" } ], "year": 2006, "venue": "Journal of Machine learning research", "volume": "7", "issue": "", "pages": "1--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Janez Dem\u0161ar. 2006. Statistical comparisons of clas- sifiers over multiple data sets. 
Journal of Machine learning research, 7(Jan):1-30.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Digital assessment of mathematics: Opportunities, issues and criteria. Mesure et evaluation en\u00e9ducation", "authors": [ { "first": "Paul", "middle": [], "last": "Drijvers", "suffix": "" } ], "year": 2018, "venue": "", "volume": "41", "issue": "", "pages": "41--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Drijvers. 2018. Digital assessment of mathemat- ics: Opportunities, issues and criteria. Mesure et evaluation en\u00e9ducation, 41(1):41-66.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automated scoring of mathematics tasks in the Common Core era: Enhancements to m-rater in support of CBAL TM , mathematics and the Common Core assessments", "authors": [ { "first": "H", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Fife", "suffix": "" } ], "year": 2013, "venue": "ETS Research Report Series", "volume": "2013", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James H Fife. 2013. Automated scoring of mathemat- ics tasks in the Common Core era: Enhancements to m-rater in support of CBAL TM , mathematics and the Common Core assessments . ETS Research Report Series, 2013(2):i-35.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The m-rater Engine: Introduction to the Automated Scoring of Mathematics Items. Research Memorandum, ETS RM-17-02", "authors": [ { "first": "H", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Fife", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "10--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "James H Fife. 2017. The m-rater Engine: Introduction to the Automated Scoring of Mathematics Items. Re- search Memorandum, ETS RM-17-02, pages 10-24.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Machine learning approach for automatic short answer grading: A systematic review", "authors": [ { "first": "Lucas", "middle": [], "last": "Busatta Galhardi", "suffix": "" }, { "first": "Jacques Du\u00edlio", "middle": [], "last": "Brancher", "suffix": "" } ], "year": 2018, "venue": "Ibero-American Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "380--391", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucas Busatta Galhardi and Jacques Du\u00edlio Brancher. 2018. Machine learning approach for automatic short answer grading: A systematic review. In Ibero-American Conference on Artificial Intelli- gence, pages 380-391. Springer.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distribution-free multiple comparisons", "authors": [ { "first": "Peter", "middle": [], "last": "Nemenyi", "suffix": "" } ], "year": 1963, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Nemenyi. 1963. 
Distribution-free multiple com- parisons,(mimeographed).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models", "authors": [ { "first": "Brian", "middle": [], "last": "Riordan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Flor", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Pugh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 14th Workshop on Innovative Use of NLP for Building Educational Applications (BEA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Riordan, Michael Flor, and Robert Pugh. 2019. How to account for mispellings: Quantifying the benefit of character representations in neural content scoring models. In Proceedings of the 14th Work- shop on Innovative Use of NLP for Building Educa- tional Applications (BEA).", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Sample response to the item in", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "Rubric for Item 2 \u2022 1 pt. for writing the equation as 2x 2 \u2212 40x + 195 = 0 or \u22122x 2 + 40x \u2212 195 = 0. It's acceptable to just write the expression 2x 2 \u2212 40x+195 = 0 or \u22122x 2 +40x\u2212195 = 0. It's also acceptable to say something like \"Move 195 to the other side of the equation\" if they find the correct values for a, b, and c (with correct signs). \u2022 1 pt. for determining the values of a, b, and c. a = 2, b = 40, c = 195 OR a = 2, b = 40, c = 195 0 pts. if they mix the values up (e.g., a = 2, b = 40, c = 195). 1 pt. if they implicitly complete this step by correctly substituting the correct values for a, b, and c into the quadratic formula in the next step.", "uris": null, "type_str": "figure", "num": null }, "TABREF3": { "text": "Quadratically-weighted kappa results for Exp 0 (plain text, no expression replacement)", "type_str": "table", "num": null, "html": null, "content": "
System   | Exp 1 Item 1 | Exp 1 Item 2 | Exp 1 Item 3 | Exp 2 Item 1 | Exp 2 Item 2 | Exp 2 Item 3 | Exp 3 Item 1 | Exp 3 Item 2 | Exp 3 Item 3
SVR msw  | 0.888 | 0.783 | 0.897 | 0.891 | 0.776 | 0.889 | 0.894 | 0.781 | 0.894
SVR csw  | 0.788 | 0.593 | 0.664 | 0.827 | 0.689 | 0.867 | 0.882 | 0.776 | 0.891
Word RNN | 0.767 | 0.649 | 0.725 | 0.842 | 0.75  | 0.887 | 0.901 | 0.829 | 0.888
" }, "TABREF4": { "text": "Quadratically-weighted kappa results for explainability experiments", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF6": { "text": "\u2022 1 pt. for substituting the values of a, b, and c into the quadratic formula and obtaining two solutions. Students do not need to simplify the answers. Students can write any equivalent expressions for the two values of x, including x =", "type_str": "table", "num": null, "html": null, "content": "
(40 + \u221a(40^2 \u2212 4*2*195))/(2*2) and x = (40 \u2212 \u221a(40^2 \u2212 4*2*195))/(2*2), OR x = (\u221240 + \u221a(40^2 \u2212 4*(\u22122)*(\u2212195)))/(2*(\u22122)) and x = (\u221240 \u2212 \u221a(40^2 \u2212 4*(\u22122)*(\u2212195)))/(2*(\u22122)). It's also acceptable for students to write x = (40 \u00b1 \u221a(40^2 \u2212 4*2*195))/(2*2)
" } } } }