{ "paper_id": "Y14-1015", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:44:43.831972Z" }, "title": "Machine-Guided Solution to Mathematical Word Problems", "authors": [ { "first": "Bussaba", "middle": [], "last": "Amnueypornsakul", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois", "location": { "settlement": "Urbana-Champaign", "country": "USA" } }, "email": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois Urbana-Champaign", "location": { "country": "USA" } }, "email": "spbhat2@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Mathematical word problems (MWP) test critical aspects of reading comprehension in conjunction with generating a solution that agrees with the \"story\" in the problem. In this paper we design and construct an MWP solver in a systematic manner, as a step towards enabling comprehension in mathematics and teaching problem solving for children in the elementary grades. We do this by (a) identifying the discourse structure of MWPs that will enable comprehension in mathematics, and (b) utilizing the information in the discourse structure towards generating the solution in a systematic manner. We build a multistage software prototype that predicts the problem type, identifies the function of sentences in each problem, and extracts the necessary information from the question to generate the corresponding mathematical equation. Our prototype has an accuracy of 86% on a large corpus of MWPs of three problem types from elementary grade mathematics curriculum.", "pdf_parse": { "paper_id": "Y14-1015", "_pdf_hash": "", "abstract": [ { "text": "Mathematical word problems (MWP) test critical aspects of reading comprehension in conjunction with generating a solution that agrees with the \"story\" in the problem. In this paper we design and construct an MWP solver in a systematic manner, as a step towards enabling comprehension in mathematics and teaching problem solving for children in the elementary grades. We do this by (a) identifying the discourse structure of MWPs that will enable comprehension in mathematics, and (b) utilizing the information in the discourse structure towards generating the solution in a systematic manner. We build a multistage software prototype that predicts the problem type, identifies the function of sentences in each problem, and extracts the necessary information from the question to generate the corresponding mathematical equation. Our prototype has an accuracy of 86% on a large corpus of MWPs of three problem types from elementary grade mathematics curriculum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Mathematical word problems (MWP) constitute an integral part of a child's elementary schooling curriculum. Solving an MWP is a complex task involving critical aspects of reading comprehension (understanding the components of the problem), and generating a solution that agrees with the 'story' in the problem. Children are trained through the process of problem solving by the use of various strategies. In this study, we formulate solving an MWP as an NLP task involving text classification, discourse processing and information extraction. 
Our primary goal is to guide young learners through the important steps of mathematics comprehension and problem solving of arithmetic word problems commonly encountered in the elementary grades. We take a bottom-up approach, identifying the discourse structure of the MWP and then utilizing the semantic information contained in the components of the problem to generate a solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In an MWP, significant background information is presented in text format. The ability to solve an MWP critically depends on the ability to detect the problem type and identify the components of the word problem, as observed in studies in mathematics education and cognitive psychology (De Corte and Verschaffel, 1987; Cummins, 1991; Verschaffel et al., 2000) .", "cite_spans": [ { "start": 299, "end": 317, "text": "Verschaffel, 1987;", "ref_id": "BIBREF7" }, { "start": 318, "end": 332, "text": "Cummins, 1991;", "ref_id": "BIBREF6" }, { "start": 333, "end": 358, "text": "Verschaffel et al., 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Motivated by these studies, we divide the overall problem solving process into stages: predicting the problem type, identifying the function of sentences (or sentence type) in each problem, and extracting the necessary information from the question to generate the corresponding mathematical equation. Since classification of the problem and sentence types involves a decision based on the textual representation, the classification tasks can be viewed as automatic text categorization problems (Yang and Liu, 1999) with domain-specific feature engineering. More broadly, a knowledge of the discourse structure of an MWP provides the human solver with a critical first step for the information extraction and text summarization needed for mathematics problem comprehension and solving.", "cite_spans": [ { "start": 501, "end": 521, "text": "(Yang and Liu, 1999)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A text classification perspective on MWP solution calls for an approach different from routine text classification methods. Surface word statistics and a keyword-spotting approach, which convey topicality, for instance, are insufficient to derive the necessary information about problem type or document structure, owing to the short document lengths of MWPs. Stop word removal and stemming, two common preprocessing steps in text classification by topic, have been observed to negatively impact classification of problem types (Cetintas et al., 2009) . Thus, feature engineering that leverages the natural language properties of word problems not only at a sentence level but also at a problem level is an important novelty in this study as we explore the usefulness of a text classification approach to solving MWPs. In addition, our study is novel in adopting the multistage approach to solving word problems automatically.
Specifically, this paper makes the following contributions.", "cite_spans": [ { "start": 538, "end": 561, "text": "(Cetintas et al., 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "wards automatically identifying the information structure of MWPs, we show empirically that an ensemble classifier yields the best performance for identifying the problem type and for identifying the discourse structure of MWPs. Not only are the performance gains over the baseline substantial, but the performance gains of the solver when compared with state-of-the-art MWP solvers such as WolframAlpha (Barendse, 2012) are also substantial.", "cite_spans": [ { "start": 409, "end": 425, "text": "(Barendse, 2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Taking a text classification approach to-", "sec_num": "1." }, { "text": "2. We demonstrate the efficacy of our software prototype in solving MWPs automatically. The multistage approach can be construed as a careful combination of inductive inference (statistical methods) and deductive inference (a rule-based approach) to reflect the key aspects of mathematics comprehension in arithmetic problem solving as pointed out in psychology studies: the use of natural language to identify the discourse structure and a set of rules to derive the corresponding mathematical form (De Corte and Verschaffel, 1987; Cummins, 1991; Verschaffel et al., 2000) .", "cite_spans": [ { "start": 511, "end": 529, "text": "Verschaffel, 1987;", "ref_id": "BIBREF7" }, { "start": 530, "end": 544, "text": "Cummins, 1991;", "ref_id": "BIBREF6" }, { "start": 545, "end": 570, "text": "Verschaffel et al., 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Taking a text classification approach to-", "sec_num": "1." }, { "text": "Prior studies attempting to solve mathematical word problems in an automatic manner fall into two primary categories: those intended to understand the cognitive aspects of problem solving in children and those intended for intelligent tutoring systems. Prototypical systems such as WORDPRO (Fletcher, 1985) , SOLUTION (Dellarosa, 1985) , ARITHPRO (Dellarosa, 1986) and (LeBlanc and Weber-Russell, 1996) are representations of cognitive models of human processes of mathematical word problem solving.
With the exception of (LeBlanc and Weber-Russell, 1996) , these operate on propositional representations of the problem text that are later solved in a rule-based manner.", "cite_spans": [ { "start": 291, "end": 307, "text": "(Fletcher, 1985)", "ref_id": "BIBREF12" }, { "start": 319, "end": 336, "text": "(Dellarosa, 1985)", "ref_id": "BIBREF9" }, { "start": 348, "end": 365, "text": "(Dellarosa, 1986)", "ref_id": "BIBREF10" }, { "start": 370, "end": 403, "text": "(LeBlanc and Weber-Russell, 1996)", "ref_id": "BIBREF15" }, { "start": 536, "end": 556, "text": "Weber-Russell, 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the realm of intelligent tutoring systems, automatic MWP solvers were based on either using specific sentence structures and keywords (Bobrow, 1964) , or using templates (schema) limited in scope by variety and problem types - (Supap et al., 2013) for grade-level problems in Thai and (Liguda and Pfeiffer, 2011; Liguda and Pfeiffer, 2012) for grade-level problems in German.", "cite_spans": [ { "start": 136, "end": 150, "text": "(Bobrow, 1964)", "ref_id": "BIBREF1" }, { "start": 229, "end": 249, "text": "(Supap et al., 2013)", "ref_id": "BIBREF24" }, { "start": 287, "end": 314, "text": "(Liguda and Pfeiffer, 2011;", "ref_id": "BIBREF16" }, { "start": 315, "end": 341, "text": "Liguda and Pfeiffer, 2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "An early approach to automatic classification of MWPs using natural language processing methods was (Cetintas et al., 2009) . The study pointed out that certain problem types (such as the multiplicative compare and equal group) were characterized by their lexical content and that a blind text categorization approach via stop word removal and stemming failed to help the classification task for those problem types. Another related study (Cetintas et al., 2010) addresses sentence-level classification of sentences in MWPs into relevant and irrelevant sentences to identify the information-bearing components of the problem.", "cite_spans": [ { "start": 99, "end": 122, "text": "(Cetintas et al., 2009)", "ref_id": "BIBREF3" }, { "start": 438, "end": 461, "text": "(Cetintas et al., 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A more recent study in a related area is (Matsuzaki et al., 2013) , which aims at understanding the complexity of MWPs encountered by students appearing for a Japanese university entrance examination. It includes an end-to-end method of problem solving that transforms the question sentences into their logic representation, to be eventually solved by an automatic solver. The problems considered are significantly more complex than grade-level arithmetic problems. The use of a semantic parser on the related topic of learning to solve algebra word problems is the subject of (Kushman et al., 2014) .
In all these studies the goal was to arrive at a solution automatically, without paying attention to the step-by-step approach to assisted problem solving, which", "cite_spans": [ { "start": 41, "end": 65, "text": "(Matsuzaki et al., 2013)", "ref_id": "BIBREF20" }, { "start": 572, "end": 594, "text": "(Kushman et al., 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "is what we address in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "! 113", "sec_num": null }, { "text": "Taking a view different from that of prior studies, our focus here is two-fold: first, inspired by the approach to identifying the structure of scientific abstracts in (Guo et al., 2010) , we would like to gain a fundamental understanding of the discourse structure of an MWP, which serves as its information-bearing component; second, knowing the structure of an MWP, we would like to discover the interrelation between the available units of information and eventually solve the problem.", "cite_spans": [ { "start": 165, "end": 183, "text": "(Guo et al., 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "! 113", "sec_num": null }, { "text": "Our approach in this study is closely related to that in (Supap et al., 2013) in spirit, but instead of a top-down approach via a static template for each problem type, we resort to constructing dynamic templates in a bottom-up fashion using information on problem types and the associated discourse structure. The classification algorithm leverages natural language properties at the sentence level as well as across sentence boundaries.", "cite_spans": [ { "start": 57, "end": 77, "text": "(Supap et al., 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "! 113", "sec_num": null }, { "text": "For the classifiers we use a combination of a deductive learner driven by inductive learners, a combination that has been very successful in other domains such as electronic design automation tools (Chaganty et al., 2013; Liu et al., 2012) . The cognitive modeling perspective on solving MWPs in children renders the inductive-deductive learner combination a natural choice for our study.", "cite_spans": [ { "start": 184, "end": 207, "text": "(Chaganty et al., 2013;", "ref_id": "BIBREF5" }, { "start": 208, "end": 225, "text": "Liu et al., 2012)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "! 113", "sec_num": null }, { "text": "Our approach to solving an MWP is grounded in harnessing the information available in the discourse structure of the word problem. We hypothesize that classification of the problem type is a crucial first step. After knowing the problem type, we focus on the solution by identifying the components of the problem and their interrelation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "MWPs have the information to solve them embedded in text rather than in an equation. While recognizing that there are several categories of word problems, we consider for our study the set of word problems used in a cognitively guided instruction scheme (CGI).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "The CGI framework aims at developing a child's mathematical thinking via intuitive strategies for problem solving (Carpenter et al., 2000) .
Focusing on the curriculum of the cognitively guided instruction scheme, this study aims to solve all three problem types at the elementary grade level: problems of the types join and separate, compare, and part-part-whole, involving only one mathematical operation, that of addition or subtraction.", "cite_spans": [ { "start": 114, "end": 138, "text": "(Carpenter et al., 2000)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "The choice of these problem types is motivated by early developmental theories in children's arithmetic competencies that focus on word problems classified into natural classes based on their semantic structures, the relation between the sets in the problem statement. (LeBlanc and Weber-Russell, 1996) .", "cite_spans": [ { "start": 269, "end": 302, "text": "(LeBlanc and Weber-Russell, 1996)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "The word problems considered here constitute the major types proposed by the CGI curriculum. The problem types are general in that they do not call for a specific arithmetic operation, but we have restricted our approach to only those involving addition and subtraction. Although details of the exact proportion of these word problem types in the respective grade levels are not available, we expect word problems of the types considered here to be prevalent in grades Kindergarten to fourth grade (as evidenced from the collected corpus of sample practice problems).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "Join and separate (J-S) problems have three main functional types of sentences in a question: given, change and result. A Given sentence is a narrative sentence where a quantity is given; a Change sentence indicates that there are some changes to the quantity in the Given sentence, and the Result sentence states the result of the change applied to the given quantity. A sentence that is not of the above functional types is an Unknown sentence. When the change applied to the given quantity results in a decrease, the problem is of the separate kind (subtraction), and when the result is an increase in the given quantity, the problem is of the join kind (addition). Problems of this type are characterized by significant action language that describes changes in the possession or condition of objects. As an example, consider a problem of the type separate:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "Henry is walking dogs for money. There are 7 dogs to walk on Henry's street. Henry walked 4 of them. How many dogs does Henry have left to walk?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "Note: The yellow highlight is the given sentence. The blue highlight is the change sentence, and the pink highlight is the result sentence of the example problem. The remaining sentences are of the type unknown sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "Equation: 7 - x = 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 114", "sec_num": null }, { "text": "Part-part-whole (PPW) is the second problem type, which contains two main functional types of sentences: part and whole.
The part sentence indicates the quantity of a set, while the whole sentence indicates the total amount in a category that subsumes the set. Problems of this type involve static descriptions of the counts of two or more disjoint subsets and the union of those sets and do not contain significant actions. For example, Some kids are playing in a playground. 3 boys are playing on the slide. 4 girls are playing on the merry-goround. How many kids are there in the playground?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 114", "sec_num": null }, { "text": "Note : The yellow highlight is the part sentence. The blue highlight is the whole sentence. The rest of the question is the unknown sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 114", "sec_num": null }, { "text": "Equation: 3 + 4 = x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 114", "sec_num": null }, { "text": "The simplest of the three types, compare problems (C) involve a comparison of the counts of two sets. For example, Angela has 6 mittens. Jordan has 4 more mittens than Angela. How many mittens does Jordan have?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 114", "sec_num": null }, { "text": "It is important to note that in a given problem, the missing quantity could be in the Given, Change or Result sentence (likewise in the part or the whole sentence). It is also crucial to remember that although the equations corresponding to the problem types are similar, our focus is not just the solution but also the steps leading to the solution. The dataset used in our study is a set of sample problems from the South Dakota Counts (Olson et al., 2008) and teacherweb.com (Ebner, 2011) . A brief description of the problems of each type and their characteristics in the corpus is summarized in Table 1 : Corpus description of the set of problems studied. The problems were grouped by problem type at the source. However, their sentence type annotations were not available. The problems in the dataset were manually annotated for sentence functional type (Given, Change, Result, Part and Whole) and sign (join or separate) by the researchers. The annotators agreed on 99.4% of the sentence function types.", "cite_spans": [ { "start": 438, "end": 458, "text": "(Olson et al., 2008)", "ref_id": "BIBREF22" }, { "start": 478, "end": 491, "text": "(Ebner, 2011)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 600, "end": 607, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "PACLIC 28 ! 114", "sec_num": null }, { "text": "Notice from Table 1 that the J-S problems constitute a majority of the problem types and that these problems are also the longest in terms of average number of words per problem. Another significant feature is the number of sentences per problem. We notice that it is 3.42 for J-S problems suggesting that there are more than 3 sentences which would be the case when just the Given, Change and Result sentences are present. Again, in the case of PPW sentences, we notice that the sentences are not necessarily Part, Part and Whole, but the 'parts' may even be relegated to the same sentence.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "PACLIC 28 ! 114", "sec_num": null }, { "text": "The first stage is problem type classification. 
Problem type classification takes as input the entire problem divided into sentences and assigns it to one of Join-Separate, Part-Part-Whole or Compare type. Depending on the problem type, the necessary classifiers are cascaded. We divide the problem solution into a maximum of three stages depending on the problem type with a classifier for each stage, described as follows. A schematic representation of the solver is given in Figure 1 . ", "cite_spans": [], "ref_spans": [ { "start": 478, "end": 486, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Models", "sec_num": "3.2" }, { "text": "Join and separate problems are the most versatile of problems because the problem's discourse structure affords phrasing of its constituent sentences in many ways. The constituent sentences can either be separate, joined using a conjunction or could be formed as a complex sentence with the use of conditionals. Figure 2 shows a step-by-step approach to solving problems of this type. First, we classify the sentence functional type for each sentence (whether it is Given or Change or Result sentence). Then, we perform a sign prediction (whether the problem calls for addition or subtraction). The pivot sentence for this task is the Change sentence because it indicates the direction of change of the quantity in the Given sentence in terms of an effective increase or decrease.The last task is to combine the results of the first two stages and generate the corresponding equation.", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 320, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Join and separate problems (JS)", "sec_num": "3.2.1" }, { "text": "This problem focuses on the relationship between nouns in each sentence of the question. There are two steps to solve this problem. The first step is to identify whether the sentence is a part sentence or a whole sentence. We then use the information from this classification to generate the equation. The flowchart of the problem is displayed in figure 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Join and separate problems (JS)", "sec_num": "3.2.1" }, { "text": "Comparison problems focus on similarities or differences between sets. By nature of its type, the problem's discourse structure is limited. This means we can generate a set of rules to convert a question to its corresponding equation. Once a problem is classified as belonging to this type in the problem type identification stage, the problem is then processed by a rule-based classifier leading to its equation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compare problems", "sec_num": "3.2.2" }, { "text": "Once the component sentence types comprising the discourse structure of the problem are identified the information in each sentence is extracted. We note that the sentence type (and hence discourse structure) plays a crucial role in this stage of information extraction. We use the NLTK toolkit (Loper and Bird, 2002) to extract the numerical quantity from each sentence.", "cite_spans": [ { "start": 295, "end": 317, "text": "(Loper and Bird, 2002)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Equation generation", "sec_num": "3.2.3" }, { "text": "In the J-S equation generator, we construct an equation of the form (quantity in Given) + (quantity in Change) = Result. The quantity in the Change sentence bears the sign of the question (depending on whether it is addition or subtraction). 
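To make this equation-construction step concrete, the sketch below shows one way classified Given, Change and Result sentences could be turned into such an equation; the helper names and the simple digit-based number extraction are illustrative assumptions, not the prototype's actual code (the prototype extracts quantities with NLTK):

```python
def extract_number(sentence):
    # Return the first bare integer in the sentence, or None if there is none
    # (a simplification of the NLTK-based quantity extraction described below).
    for token in sentence.replace('?', ' ').replace('.', ' ').split():
        if token.isdigit():
            return int(token)
    return None

def fmt(value):
    # A missing quantity becomes the unknown x.
    return 'x' if value is None else str(value)

def js_equation(given, change, result, sign):
    # Build '(quantity in Given) sign (quantity in Change) = (quantity in Result)',
    # where sign is '+' for join problems and '-' for separate problems.
    return '{} {} {} = {}'.format(fmt(extract_number(given)), sign,
                                  fmt(extract_number(change)),
                                  fmt(extract_number(result)))

# For a join problem such as the Grandma example used later in the paper:
# js_equation('Grandma had 5 strawberries.',
#             'Grandpa gave her 8 more strawberries.',
#             'How many strawberries does Grandma have now?', '+')
# returns '5 + 8 = x'
```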
If a sentence with no numerical information is classified as Given, Change or Result, we assign an X to that sentence and the information is excluded from the equation (a potential source of error).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Equation generation", "sec_num": "3.2.3" }, { "text": "The analog holds for the PPW equation generator. With its sentences classified as Part or Whole we proceed to the equation generation as follows. When the Part sentence has more than one numerical quantity, we assign the first number as Part1 and the other numbers as Part2 (or into more buckets as the case may be). Then, we arrange them into the corresponding equation as: Part1 + Part2 = Whole.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Equation generation", "sec_num": "3.2.3" }, { "text": "In both these equation generators, when the equation has insufficient information owing to errors from previous stages (we will defer discussing some scenarios to Section 6), a solution is not generated. The generated equation is solved using Numpy (Oliphant, 2006) .", "cite_spans": [ { "start": 249, "end": 265, "text": "(Oliphant, 2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Equation generation", "sec_num": "3.2.3" }, { "text": "For the tasks of problem type classification, sentence type classification and sign prediction, we use the ensemble method of inductive classifier -Random Forest. The equation generation stage is a rule-based deductive learner that combines the result of sentence type classification (and sign prediction for the J-S problems) to derive the numerical quantities needed for the equation. We use the scikit implementation of Random Forest (Pedregosa et al., 2011) .", "cite_spans": [ { "start": 437, "end": 461, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation", "sec_num": "3.3" }, { "text": "We evaluate the performance of the classifiers on problem type classification, sentence type, sign prediction and overall solution generation by the level of accuracy (how exact the classification is) calculated using 5-fold cross validation. In addition to evaluating a classifier's performance on each task, we also evaluate the contribution of each feature class to the classification by noting the accuracy of the classifier when that feature class is excluded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.4" }, { "text": "We first consider the preprocessing steps and the features considered before delving into the models by type of mathematical word problem being solved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "We employed Python NLTK (Loper and Bird, 2002) to segment the problems into sentences, perform tokenization, convert words into lower case, tag the words with their Penn treebank part-of-speech tags and lemmatize all the verbs and nouns. 
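As an illustration, the NLTK portion of this preprocessing could be reproduced roughly as follows; the exact options used in the prototype are not given in the paper, so treat this as an approximate sketch (it requires the punkt, averaged_perceptron_tagger and wordnet NLTK data packages, and the Stanford dependency parsing step mentioned next is not shown):

```python
import nltk
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def preprocess(problem_text):
    # Sentence segmentation, tokenization, lowercasing, Penn Treebank POS
    # tagging, and lemmatization of verbs and nouns, per Section 4.1.
    processed = []
    for sentence in nltk.sent_tokenize(problem_text):
        tokens = [t.lower() for t in nltk.word_tokenize(sentence)]
        tagged = nltk.pos_tag(tokens)
        lemmas = []
        for word, tag in tagged:
            if tag.startswith('VB'):
                lemmas.append(lemmatizer.lemmatize(word, pos='v'))
            elif tag.startswith('NN'):
                lemmas.append(lemmatizer.lemmatize(word, pos='n'))
            else:
                lemmas.append(word)
        processed.append(list(zip(lemmas, [tag for _, tag in tagged])))
    return processed
```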
We also obtain the dependency parse of the sentences using the Stanford parser (De Marneffe et al., 2006) .", "cite_spans": [ { "start": 24, "end": 46, "text": "(Loper and Bird, 2002)", "ref_id": "BIBREF19" }, { "start": 321, "end": 343, "text": "Marneffe et al., 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.1" }, { "text": "We use four classes of features that we describe below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "4.2" }, { "text": "Problem-level features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 116", "sec_num": null }, { "text": "\u2022 The features in this class are length-related and document-related. The length of the problem in number of sentences is a feature that we consider at the problem level, noticing that on an average, J-S problems tend to have more sentences per problem than those of the C type, which in turn have more sentences than those of the PPW type (refer Table 1 ).", "cite_spans": [], "ref_spans": [ { "start": 347, "end": 354, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "PACLIC 28 ! 116", "sec_num": null }, { "text": "\u2022 Structure that is specific for problem of type C which is the binary valued feature indicating the presence of comparative adjective and \"than\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 116", "sec_num": null }, { "text": "\u2022 Keywords (with binary values) extracted using tfidf constitute another type of problem-level features. To avoid overfitting, we consider only those keywords that occur at least five times in the corpus of problems. We exclude verbs and prepositions from this list. The intuition here is that keywords such as altogether characterize PPW problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 116", "sec_num": null }, { "text": "Sentence-level features: Mainly used for sentencelevel classification into types, the features in this class are positional, structural or semantic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 116", "sec_num": null }, { "text": "\u2022 Sentence position in the problem tends to be an indicator of the sentence type for PPW and JS problems. For instance, a majority of the JS sentences have the first sentence of the type Given, as a manner of discourse structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 116", "sec_num": null }, { "text": "\u2022 Structural features essentially capture shared relationships between entities in a sentence, such as that between the subject and object in a sentence obtained in the form of dependency relations. Other structural features are verb phrase (binary valued) such as to start with, comparative structure such as more than (binary valued) and prepositions such as on (binary valued).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PACLIC 28 ! 116", "sec_num": null }, { "text": "We observe that problems of the J-S type are characterized by significant action language that describe changes in the possession or condition of objects. Thus, we posit that the count of unique verb lemmas will serve as a discriminating feature. Consider for instance a J-S problem, Grandma had 5 strawberries. Grandpa gave her 8 more strawberries. How many strawberries does Grandma have now? 
The verb from the Given sentence Grandma had 5 strawberries has changed in the Change sentence Grandpa gave her 8 more strawberries, and thus the problem has 2 verb lemmas (have and give).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action-related features:", "sec_num": null }, { "text": "Entity-related features: An example of this feature is the number of unique noun phrases. Since problems of type PPW involve static descriptions of two or more disjoint subsets in the Part sentence and the union of those sets (or the super category of the entities in the Part sentence) in the Whole sentence, a characteristic of problems of this type is the variety of noun phrases. For instance, Jarron has 5 red triangles and 10 blue squares. How many shapes does he have altogether? The first sentence, which corresponds to the Part sentence, contains two noun phrases: red triangles and blue squares. The other sentence is the Whole sentence; it has only one noun phrase, shapes. Here red triangles and blue squares are subcategories of shapes, and so the number of unique noun phrases is 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action-related features:", "sec_num": null }, { "text": "The hyperparameters of the Random Forest classifier were tuned as follows. The corpus of problem types and sentence types was split into a training and test set via a random 80-20 split. The parameters of the random forest classifiers at the problem type, sentence type and sign prediction stages were independently tuned by 5-fold cross validation on the training data set, choosing the parameter set that achieves the highest cross-validation accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter tuning", "sec_num": "4.3" }, { "text": "As a result, with n as the total number of available features, the problem type prediction classifier was set to consider a maximum of √n features and allowed to reach a maximum depth of 15 nodes. The sentence type classifier for J-S was set to have a maximum of n features and allowed to reach a depth of 25 nodes, whereas that for PPW had the parameters set to n and 10 respectively. The corresponding parameters for the sign prediction module were log₂ n and 50.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter tuning", "sec_num": "4.3" }, { "text": "We report results of using the inductive classification in the first few stages followed by the results of the deductive classification in the equation generation stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "The majority baseline is the proportion of the largest problem class in the corpus, which is about 44%. We observe that problem type classification using Random Forest yielded an accuracy of 93.47%. The performance of Random Forest is justified considering that many of our features are correlated. Additionally, our data falls in the realm of the 'small n, large p' scenario where Random Forest is known to perform best. We thus use only Random Forest for classification in the following stages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem type classification", "sec_num": "5.1" }, { "text": "For sentence type classification, the baseline is the majority class among sentence types since the sentences are classified independently. Thus, the baseline for J-S problems is 36.12% (majority class is Change sentence) and for PPW is 62.47% (majority class is Part sentence).
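For reference, a single classification stage of the kind described above could be set up as in the sketch below with scikit-learn; the feature matrix and labels are placeholder data, and only the problem-type hyperparameters reported in Section 4.3 are shown (an illustrative sketch, not the authors' exact implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix and labels: one row per problem, built from the
# problem-level, sentence-level, action- and entity-related features of
# Section 4.2; labels are the problem types (J-S, PPW, C).
X = np.random.rand(1000, 40)            # hypothetical data
y = np.random.randint(0, 3, size=1000)  # hypothetical labels

# Problem-type stage: max_features of sqrt(n) and a maximum depth of 15,
# as reported in Section 4.3.
clf = RandomForestClassifier(max_features='sqrt', max_depth=15, random_state=0)

# 5-fold cross-validated accuracy, the evaluation protocol of Section 3.4.
scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print('mean accuracy:', scores.mean())
```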
Table 2 : Performance of the Random Forest classifier for sentence type classification. The improvement over the baseline is significant.", "cite_spans": [], "ref_spans": [ { "start": 279, "end": 286, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Sentence-type classification", "sec_num": "5.2" }, { "text": "From Table 2 we notice that the ensemble classifier outperforms the baseline by a wide margin in both the J-S and PPW solvers. The performance of the classifier on sentence type prediction for both types seems comparable even though one involves a 3-way classification (for J-S) and the other only two-way (for PPW).", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Sentence-type classification", "sec_num": "5.2" }, { "text": "For sign prediction, we note that the module is used only to solve problems of type J-S. Hence, the baseline is the majority class, which in our case is 50% owing to the equal number of addition and subtraction problems. The accuracy of the classifier that performs sign prediction is 84.33%. This renders the sign-prediction stage a bottleneck for solving J-S problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence-type classification", "sec_num": "5.2" }, { "text": "JS: 78.67%; PPW: 87.33%; C: 94.92%; Overall: 85.64%. Table 3 : Comparison of the accuracy of the solvers for each problem type.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Overall Solution", "sec_num": null }, { "text": "The overall solution is obtained by combining the result of the individual stages as per problem type to generate the corresponding equation. The accuracies of the solvers for each problem type are compared in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 210, "end": 217, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Overall Solution", "sec_num": "5.3" }, { "text": "We prepare a simple rule-based baseline with which we compare the results of the equation generation. First, if there is more than one numerical quantity in a sentence, they are all summed up. Any sentence without a numerical quantity is ignored, and the question sentence is mapped to the variable. Second, if the number in the first sentence is larger than the number in the second sentence, the second number is subtracted from the first; otherwise the two numbers are added. With these two rules, we disregard the type of MWP and generate the equation. The baseline accuracy becomes 59.58% (J-S accuracy is 48%, C accuracy is 55.69%, and PPW accuracy is 87.5%). We would like to point out that a plausible reason that the baseline for PPW is higher than that of the stage-wise approach is that PPW problems' structure coincides with our rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Solution", "sec_num": "5.3" }, { "text": "This baseline is to be interpreted with some caution, however. Recalling that the purpose of the study is to guide the learner through the stages leading to generating the equation, a comparison of the results of the equation generation stage with the baseline alone is misguided. The final accuracy for solving problems of type Join-Separate is 78.67%. For problems of the PPW type, the accuracy of problem solution after the equation generation stage is 87.33%, and that for the class of Compare problems is 94.92%.
Based on this we remark that for the automatic solver, problems of the J-S type are the hardest to solve, and those of the Compare type are the easiest. This is justified by noting that the sign-prediction module is a bottleneck for the J-S solver, as well as being an additional classification stage compared to the other problem types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Solution", "sec_num": "5.3" }, { "text": "Pooling the results of each problem type together, we arrive at an overall accuracy of 85.64% for our solver.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Solution", "sec_num": "5.3" }, { "text": "A general-purpose MWP solver is available via the publicly accessible WolframAlpha engine. The details of its implementation were unavailable, but we believe it to be operational from its associated blog post that elaborates its functionality and the diagrammatic solution feature of this solver (Barendse, 2012) . We compare the accuracy of our solver with that of the solver provided by WolframAlpha 1 in the absence of other published MWP solvers for the arithmetic problems that we study. Since the details of the solution process employed by WolframAlpha are not available, we are only able to compare the respective performances at the level of equation generation.", "cite_spans": [ { "start": 296, "end": 312, "text": "(Barendse, 2012)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with the state-of-the-art", "sec_num": "5.4" }, { "text": "For the purpose of this comparison, we choose the test set (20% of our corpus) and compare the accuracy of the solutions produced by the solvers. While our MWP solver had an accuracy of 86% on the sample, the performance of WolframAlpha is remarkably poor. In particular, barely 9% of the problems were answered correctly, of which about 4% had an incorrect diagram associated with the solution. The vast majority of the MWPs are not solved and the results come back with the error \"WolframAlpha doesn't understand your query\". Without the details of the WolframAlpha approach, we are unable to point to the advantages of our approach over that of the state-of-the-art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with the state-of-the-art", "sec_num": "5.4" }, { "text": "The higher the accuracy of classification is, the better the outcome in generating equations will be. In this section, we consider some of the issues that negatively impact the classification process. The first issue involves the preprocessing steps that an MWP has to go through before passing through our analysis. This happens when the problem relates to time, money, or distance and needs quantity conversions before the arithmetic calculations (e.g., Josie has 7 pennies and 5 nickels. How much money does she have?). Another obvious class is when the problem requires world knowledge for its solution (e.g., Today is October 25th. How many days are there until Halloween?). The other case where our program fails is when a question has a complex sentence structure, e.g., How many Yodas flew away from the planet in the space shuttle if 23 Yodas stayed on the planet of 30 Yodas in all? Focusing on the errors of the J-S problem solver, the majority of errors result from incorrect sign prediction, explained by the fact that this module is the bottleneck in our J-S automatic solver.
The overall accuracy is slightly higher than we expect because the error from sign prediction and sentence type classification overlap. It is also the case that even though the classifier misclassifies Change and Given sentences, if the sign is correctly assigned as '+', the final equation is still correct i.e. 3 + x = 4 is the same as x + 3 = 4. Finally, the main source of error for problems of PPW type is that the problem type classifier misclassifies PPW to be JS, which leads to an incorrect solution. JS and PPW are very similar but they focus on different aspects. JS focuses on the dynamic action, while PPW captures the relationship between nouns in each sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "For problems of the Compare type, there are two sources of error. First, the rule-based classifier itself provides 94.92% because some questions need quantity conversion before being processed. For example, Joel started the paper route at 7:05. He worked for 25 minutes. When did he finish? The other is that the comparison problem is misclassified as J-S or PPW at the problem type clas-sification stage. Accounting for these errors would entail working with better classifiers that handle inter-sentence semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "To get a feel for the model's generalizability we tested on a set of problems not of the CGI type from Dadsworksheets.com 2 . On this set of 400 addition and subtraction word problems our model yielded an overall accuracy of 87%, suggesting that our method is not restricted to solving problems of the CGI type alone. Looking ahead, we are working to solve more complicated MWPs of upper elementary grades.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "It is conceivable that a multi-stage approach such as the one considered here can constitute one of the key design factors in applications involving intelligent tutoring systems for elementary mathematics education. The goal of guiding the learner to understand the steps involved in solving the problem can be met via our approach of identifying the problem types, highlighting the discourse elements (sentence types) while simultaneously helping arrive at the answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "We present a multi-stage text-classification approach to solve arithmetic problems of elementary level automatically. Our approach recognizes the problem type, identifies the discourse structure and generates the corresponding equation to eventually solve the problem. This is in line with results from cognitive psychology studies in children learning to solve MWPs. 
With accuracies substantially higher than the baseline, we also observe that the performance gains of our solver compared with the state-of-the-art MWP solvers such as WolframAlpha are also substantial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://www.dadsworksheets.com/ accessed on March 20th,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Solving word problems with wolfram-alpha@ONLINE", "authors": [ { "first": "Peter", "middle": [], "last": "Barendse", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Barendse. 2012. Solving word prob- lems with wolfram-alpha@ONLINE, October. http://blog.wolframalpha.com/2012/10/04/solving- word-problems-with-wolframalpha/.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A question-answering system for high school algebra word problems", "authors": [ { "first": "G", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "", "middle": [], "last": "Bobrow", "suffix": "" } ], "year": 1964, "venue": "Fall Joint Computer Conference, Part I, AFIPS '64 (Fall, part I)", "volume": "", "issue": "", "pages": "591--614", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel G. Bobrow. 1964. A question-answering system for high school algebra word problems. In Proceed- ings of the October 27-29, 1964, Fall Joint Computer Conference, Part I, AFIPS '64 (Fall, part I), pages 591-614, New York, NY, USA. ACM.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cognitively guided instruction: A research-based teacher professional development program for elementary school mathematics", "authors": [ { "first": "P", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Carpenter", "suffix": "" }, { "first": "Megan", "middle": [ "Loef" ], "last": "Fennema", "suffix": "" }, { "first": "Linda", "middle": [], "last": "Franke", "suffix": "" }, { "first": "Susan", "middle": [ "B" ], "last": "Levi", "suffix": "" }, { "first": "", "middle": [], "last": "Empson", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas P Carpenter, Elizabeth Fennema, Megan Loef Franke, Linda Levi, and Susan B Empson. 2000. Cognitively guided instruction: A research-based teacher professional development program for elemen- tary school mathematics. research report.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic text categorization of mathematical word problems", "authors": [ { "first": "Suleyman", "middle": [], "last": "Cetintas", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" }, { "first": "Yan", "middle": [ "Ping" ], "last": "Xin", "suffix": "" }, { "first": "Dake", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Joo Young", "middle": [], "last": "Park", "suffix": "" } ], "year": 2009, "venue": "FLAIRS Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suleyman Cetintas, Luo Si, Yan Ping Xin, Dake Zhang, and Joo Young Park. 2009. Automatic text catego- rization of mathematical word problems. 
In FLAIRS Conference.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A joint probabilistic classification model of relevant and irrelevant sentences in mathematical word problems", "authors": [ { "first": "Suleyman", "middle": [], "last": "Cetintas", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" }, { "first": "Yan", "middle": [ "Ping" ], "last": "Xin", "suffix": "" }, { "first": "Dake", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2010, "venue": "JEDM-Journal of Educational Data Mining", "volume": "2", "issue": "1", "pages": "83--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suleyman Cetintas, Luo Si, Yan Ping Xin, Dake Zhang, Joo Young Park, and Ron Tzur. 2010. A joint proba- bilistic classification model of relevant and irrelevant sentences in mathematical word problems. JEDM- Journal of Educational Data Mining, 2(1):83-101.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Combining relational learning with smt solvers using cegar", "authors": [ { "first": "Arun", "middle": [], "last": "Chaganty", "suffix": "" }, { "first": "Akash", "middle": [], "last": "Lal", "suffix": "" }, { "first": "Aditya", "middle": [ "V" ], "last": "Nori", "suffix": "" }, { "first": "K", "middle": [], "last": "Sriram", "suffix": "" }, { "first": "", "middle": [], "last": "Rajamani", "suffix": "" } ], "year": 2013, "venue": "Computer Aided Verification", "volume": "", "issue": "", "pages": "447--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arun Chaganty, Akash Lal, Aditya V Nori, and Sriram K Rajamani. 2013. Combining relational learning with smt solvers using cegar. In Computer Aided Verifica- tion, pages 447-462. Springer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Children's interpretations of arithmetic word problems", "authors": [ { "first": "Denise Dellarosa", "middle": [], "last": "Cummins", "suffix": "" } ], "year": 1991, "venue": "Cognition and Instruction", "volume": "8", "issue": "3", "pages": "261--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denise Dellarosa Cummins. 1991. Children's interpre- tations of arithmetic word problems. Cognition and Instruction, 8(3):pp. 261-289.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The effect of semantic structure on first graders' strategies for solving addition and subtraction word problems", "authors": [ { "first": "Erik", "middle": [], "last": "De Corte", "suffix": "" }, { "first": "Lieven", "middle": [], "last": "Verschaffel", "suffix": "" } ], "year": 1987, "venue": "Journal for Research in Mathematics Education", "volume": "", "issue": "", "pages": "363--381", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik De Corte and Lieven Verschaffel. 1987. The ef- fect of semantic structure on first graders' strategies for solving addition and subtraction word problems. 
Journal for Research in Mathematics Education, pages 363-381.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine De", "middle": [], "last": "Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Maccartney", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC", "volume": "6", "issue": "", "pages": "449--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marie-Catherine De Marneffe, Bill MacCartney, Christo- pher D Manning, et al. 2006. Generating typed de- pendency parses from phrase structure parses. In Pro- ceedings of LREC, volume 6, pages 449-454.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Solution: A computer simulation of childrens recall of arithmetic word problem solving", "authors": [ { "first": "Denise", "middle": [], "last": "Dellarosa", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "85--148", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denise Dellarosa. 1985. Solution: A computer sim- ulation of childrens recall of arithmetic word prob- lem solving. University of Colorado Technical Report, pages 85-148.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A computer simulation of children's arithmetic word-problem solving. Behavior Research Methods", "authors": [ { "first": "Denise", "middle": [], "last": "Dellarosa", "suffix": "" } ], "year": 1986, "venue": "", "volume": "18", "issue": "", "pages": "147--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denise Dellarosa. 1986. A computer simulation of chil- dren's arithmetic word-problem solving. Behavior Re- search Methods, 18:147-154, March.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Cognitively guided instruction (cgi) problem types @ONLINE", "authors": [ { "first": "Kerianne", "middle": [], "last": "Ebner", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kerianne Ebner. 2011. Cognitively guided instruction (cgi) problem types @ONLINE, July.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Understanding and solving arithmetic word problems: A computer simulation", "authors": [ { "first": "R", "middle": [], "last": "Charles", "suffix": "" }, { "first": "", "middle": [], "last": "Fletcher", "suffix": "" } ], "year": 1985, "venue": "Behavior Research Methods", "volume": "17", "issue": "5", "pages": "565--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles R. Fletcher. 1985. Understanding and solv- ing arithmetic word problems: A computer simulation. 
Behavior Research Methods, 17(5):565-571.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Identifying the information structure of scientific abstracts: An investigation of three different schemes", "authors": [ { "first": "Yufan", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Ilona", "middle": [], "last": "Silins Karolinska", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ulla", "middle": [], "last": "Stenius", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing, BioNLP '10", "volume": "", "issue": "", "pages": "99--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yufan Guo, Anna Korhonen, Maria Liakata, Ilona Silins Karolinska, Lin Sun, and Ulla Stenius. 2010. Identi- fying the information structure of scientific abstracts: An investigation of three different schemes. In Pro- ceedings of the 2010 Workshop on Biomedical Natu- ral Language Processing, BioNLP '10, pages 99-107, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning to automatically solve algebra word problems", "authors": [ { "first": "Nate", "middle": [], "last": "Kushman", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "271--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 271-281, Baltimore, Maryland, June. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Text integration and mathematical connections: a computer model of arithmetic word problem solving", "authors": [ { "first": "D", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Sylvia", "middle": [], "last": "Leblanc", "suffix": "" }, { "first": "", "middle": [], "last": "Weber-Russell", "suffix": "" } ], "year": 1996, "venue": "Cognitive Science", "volume": "20", "issue": "3", "pages": "357--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark D LeBlanc and Sylvia Weber-Russell. 1996. Text integration and mathematical connections: a computer model of arithmetic word problem solving. Cognitive Science, 20(3):357-407.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A question answer system for math word problems. First International Workshop on Algorithmic Intelligence", "authors": [ { "first": "Christian", "middle": [], "last": "Liguda", "suffix": "" }, { "first": "Thies", "middle": [], "last": "Pfeiffer", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Liguda and Thies Pfeiffer. 2011. A question answer system for math word problems. 
First International Workshop on Algorithmic Intelligence.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Modeling math word problems with augmented semantic networks", "authors": [ { "first": "Christian", "middle": [], "last": "Liguda", "suffix": "" }, { "first": "Thies", "middle": [], "last": "Pfeiffer", "suffix": "" } ], "year": 2012, "venue": "Gosse Bouma, Ashwin Ittoo, Elisabeth Métais, and Hans Wortmann", "volume": "7337", "issue": "", "pages": "247--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Liguda and Thies Pfeiffer. 2012. Modeling math word problems with augmented semantic networks. In Gosse Bouma, Ashwin Ittoo, Elisabeth Métais, and Hans Wortmann, editors, Natural Language Processing and Information Systems, volume 7337 of Lecture Notes in Computer Science, pages 247-252. Springer Berlin Heidelberg.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Word level feature discovery to enhance quality of assertion mining", "authors": [ { "first": "Lingyi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chen-Hsuan", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shobha", "middle": [], "last": "Vasudevan", "suffix": "" } ], "year": 2012, "venue": "ICCAD", "volume": "", "issue": "", "pages": "210--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lingyi Liu, Chen-Hsuan Lin, and Shobha Vasudevan. 2012. Word level feature discovery to enhance quality of assertion mining. In ICCAD, pages 210-217.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "NLTK: The natural language toolkit", "authors": [ { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", "volume": "1", "issue": "", "pages": "63--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, ETMTNLP '02, pages 63-70, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The complexity of math problems - linguistic, or computational?", "authors": [ { "first": "Takuya", "middle": [], "last": "Matsuzaki", "suffix": "" }, { "first": "Hidenao", "middle": [], "last": "Iwane", "suffix": "" }, { "first": "Hirokazu", "middle": [], "last": "Anai", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Arai", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "73--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takuya Matsuzaki, Hidenao Iwane, Hirokazu Anai, and Noriko Arai. 2013. The complexity of math problems - linguistic, or computational? In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 73-81, Nagoya, Japan, October. 
Asian Federation of Natural Language Processing.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Guide to NumPy", "authors": [ { "first": "Travis", "middle": [ "E" ], "last": "Oliphant", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Travis E. Oliphant. 2006. Guide to NumPy. Provo, UT, March.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "South Dakota Counts: CGI problems created by South Dakota math teacher leaders [Online]", "authors": [ { "first": "Shawn", "middle": [], "last": "Olson", "suffix": "" }, { "first": "Natalie", "middle": [], "last": "Musser", "suffix": "" }, { "first": "Sue", "middle": [], "last": "McAdaragh", "suffix": "" }, { "first": "Roxane", "middle": [], "last": "Dyk", "suffix": "" }, { "first": "Jonath", "middle": [], "last": "Weber", "suffix": "" }, { "first": "Tracy", "middle": [], "last": "Mittleider", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Atwood", "suffix": "" }, { "first": "Marcia", "middle": [], "last": "Torgrude", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shawn Olson, Natalie Musser, Sue McAdaragh, Roxane Dyk, Jonath Weber, Tracy Mittleider, Lucy Atwood, and Marcia Torgrude. 2008. South Dakota Counts: CGI problems created by South Dakota math teacher leaders [Online], January.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Mathmaster: an alternative math word problems translation. 
Computational Approaches to Assistive Technologies for People with Disabilities", "authors": [ { "first": "Wanintorn", "middle": [], "last": "Supap", "suffix": "" }, { "first": "Kanlaya", "middle": [], "last": "Naruedomkul", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Cercone", "suffix": "" } ], "year": 2013, "venue": "", "volume": "253", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wanintorn Supap, Kanlaya Naruedomkul, and Nick Cercone. 2013. Mathmaster: an alternative math word problems translation. Computational Approaches to Assistive Technologies for People with Disabilities, 253:109.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Making sense of word problems", "authors": [ { "first": "Lieven", "middle": [], "last": "Verschaffel", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Greer", "suffix": "" }, { "first": "Erik", "middle": [ "De" ], "last": "Corte", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lieven Verschaffel, Brian Greer, and Erik De Corte. 2000. Making sense of word problems. Lisse.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A re-examination of text categorization methods", "authors": [ { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Liu", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "42--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pages 42-49. ACM.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Flow chart for the system.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Top: Flow chart for Join and Separate Problem. Bottom: Part-Part-Whole Problem.", "type_str": "figure", "uris": null }, "TABREF0": { "num": null, "text": "", "type_str": "table", "content": "
Problem type                | J-S   | PPW   | C
No. of problems             | 330   | 164   | 257
No. of words/problem (mean) | 25.54 | 22.47 | 21.13
No. of sentences/problem    | 3.42  | 2.72  | 3.06
No. of verb types (total)   | 99    | 36    | 46
", "html": null }, "TABREF4": { "num": null, "text": "Comparison of the accuracy results (in %) with different feature classes ablated for each classification task with the accuracy where no features were excluded.Table 4summarizes the results of the ablation study 1 www.wolframalpha.com visited on June 01, 2014. for each task by removing each class of features. For problem type prediction, the action-related features constitute the most important set of features (most likely influenced by the predominance of J-S problems) followed by the problem-level features. The sentence level features seem to have little impact on the overall accuracy. Even though the entity-related features do not have an effect on PPW sentence type classification, it contributes substantially to question type classification (most likely by way of characterizing PPW). Sign prediction depends primarily on the sentence-level features but is about equally dependent on the other sets of features.", "type_str": "table", "content": "
", "html": null } } } }