|
{ |
|
"paper_id": "E17-1047", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:51:16.588876Z" |
|
}, |
|
"title": "Annotating Derivations: A New Evaluation Strategy and Dataset for Algebra Word Problems", |
|
"authors": [ |
|
{ |
|
"first": "Shyam", |
|
"middle": [], |
|
"last": "Upadhyay", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Illinois at Urbana-Champaign", |
|
"location": { |
|
"region": "IL", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Microsoft Research", |
|
"location": { |
|
"settlement": "Redmond", |
|
"region": "WA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose a new evaluation for automatic solvers for algebra word problems, which can identify mistakes that existing evaluations overlook. Our proposal is to evaluate such solvers using derivations, which reflect how an equation system was constructed from the word problem. To accomplish this, we develop an algorithm for checking the equivalence between two derivations, and show how derivation annotations can be semi-automatically added to existing datasets. To make our experiments more comprehensive, we include the derivation annotation for DRAW-1K, a new dataset containing 1000 general algebra word problems. In our experiments, we found that the annotated derivations enable a more accurate evaluation of automatic solvers than previously used metrics. We release derivation annotations for over 2300 algebra word problems for future evaluations. Am=Bn Cm + Dn = E Costs of apple and orange are in ratio 5 : 15 at the Acme Market. Mark wanted some fruits so he buys 5 apples and 5 oranges for 100 dollars. Find cost of each.", |
|
"pdf_parse": { |
|
"paper_id": "E17-1047", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose a new evaluation for automatic solvers for algebra word problems, which can identify mistakes that existing evaluations overlook. Our proposal is to evaluate such solvers using derivations, which reflect how an equation system was constructed from the word problem. To accomplish this, we develop an algorithm for checking the equivalence between two derivations, and show how derivation annotations can be semi-automatically added to existing datasets. To make our experiments more comprehensive, we include the derivation annotation for DRAW-1K, a new dataset containing 1000 general algebra word problems. In our experiments, we found that the annotated derivations enable a more accurate evaluation of automatic solvers than previously used metrics. We release derivation annotations for over 2300 algebra word problems for future evaluations. Am=Bn Cm + Dn = E Costs of apple and orange are in ratio 5 : 15 at the Acme Market. Mark wanted some fruits so he buys 5 apples and 5 oranges for 100 dollars. Find cost of each.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automatically solving math reasoning problems is a long-pursued goal of AI (Newell et al., 1959; Bobrow, 1964) . Recent work Shi et al., 2015; Koncel-Kedziorski et al., 2015) has focused on developing solvers for algebra word problems, such as the one shown in Figure 1. Developing a solver for word problems can open several new avenues, especially for online education and intelligent tutoring systems (Kang et al., 2016) . In addition, as solving word problems requires the ability to understand and analyze natural language, it serves as a good test-bed for evaluating progress towards goals of artificial intelligence (Clark and Etzioni, 2016) . An automatic solver finds the solution of a given word problem by constructing a derivation, consisting of an un-grounded equation system 1 ({Am = Bn, Cm + Dn = E} in Figure 1 ) and alignments of numbers in the text to its coefficients (blue edges).", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 96, |
|
"text": "(Newell et al., 1959;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 97, |
|
"end": 110, |
|
"text": "Bobrow, 1964)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 125, |
|
"end": 142, |
|
"text": "Shi et al., 2015;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 174, |
|
"text": "Koncel-Kedziorski et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 423, |
|
"text": "(Kang et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 648, |
|
"text": "(Clark and Etzioni, 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 267, |
|
"text": "Figure", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 818, |
|
"end": 826, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
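{

"text": "Concretely, a derivation is just a pair of a template and an alignment from textual-number occurrences to coefficient slots. As a minimal illustration (our own sketch, not the authors' code), the Figure 1 derivation can be represented and grounded as follows:\n\nfrom typing import NamedTuple\n\nclass Derivation(NamedTuple):\n    template: list    # un-grounded equation system, e.g. ['A*m = B*n', ...]\n    alignment: dict   # (value, occurrence) -> coefficient slot\n\n# the three 5s in the text are kept apart by their occurrence index\nderiv = Derivation(\n    template=['A*m = B*n', 'C*m + D*n = E'],\n    alignment={(5, 1): 'A', (15, 1): 'B', (5, 2): 'C', (5, 3): 'D', (100, 1): 'E'},\n)\n\ndef ground(d):\n    # substitute each aligned number into its template slot\n    eqs = d.template\n    for (value, _), slot in d.alignment.items():\n        eqs = [eq.replace(slot, str(value)) for eq in eqs]\n    return eqs\n\nprint(ground(deriv))  # ['5*m = 15*n', '5*m + 5*n = 100']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},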
|
{ |
|
"text": "The derivation identifies a grounded equation system {5m = 15n, 5m + 5n = 100}, whose solution can then be generated to answer the problem. A derivation precisely describes how the grounded equation system was constructed from the word problem by the automatic solver. On the other hand, the grounded equation systems and the solutions are less informative, as they do not explain which span of text aligns to the coefficients in the equations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While the derivation is clearly the most informative structure, surprisingly, no prior work evaluates automatic solvers using derivations directly. To the best of our knowledge, none of the current datasets contain human-annotated derivations, possibly due to the belief that the current evaluation metrics are sufficient and the benefit of evaluating on derivations is minor. Currently, the most popular evaluation strategy is to use solution accuracy Hosseini et al., 2014; Shi et al., 2015; Koncel-Kedziorski et al., 2015; Zhou et al., 2015; Huang et al., 2016) , which computes whether the solution was correct or not, as this is an easy-to-implement metric. Another evaluation strategy was proposed in , which finds an approximate derivation from the gold equation system and uses it to compare against a predicted derivation. We follow and call this evaluation strategy the equation accuracy. 2 In this work, we argue that evaluating solvers against human labeled derivation is important. Existing evaluation metrics, like solution accuracy are often quite generous -for example, an incorrect equation system, such as, {m + 5 = n + 15, m + n = 15 + 5}, (1) can generate the correct solution of the word problem in Figure 1 . While equation accuracy appears to be a stricter metric than solution accuracy, our experiments show that the approximation can mislead evaluation, by assigning higher scores to an inferior solver. Indeed, a correct equation system, (5m = 15n, 5m+5n = 100), can be generated by using a wrong template, Am = Bn, Am + An = C, and aligning numbers in the text to coefficients incorrectly. We show that without knowing the correct derivation at evaluation time, a solver can be awarded for the wrong reasons.", |
|
"cite_spans": [ |
|
{ |
|
"start": 453, |
|
"end": 475, |
|
"text": "Hosseini et al., 2014;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 493, |
|
"text": "Shi et al., 2015;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 525, |
|
"text": "Koncel-Kedziorski et al., 2015;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 544, |
|
"text": "Zhou et al., 2015;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 564, |
|
"text": "Huang et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 899, |
|
"end": 900, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1220, |
|
"end": 1228, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
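{

"text": "To make the generosity of solution accuracy concrete, the following quick check (our own, for illustration) verifies that the incorrect system (1) and the correct system for Figure 1 have identical solutions:\n\nimport numpy as np\n\n# correct system: 5m = 15n, 5m + 5n = 100  ->  5m - 15n = 0, 5m + 5n = 100\ncorrect = np.linalg.solve([[5, -15], [5, 5]], [0, 100])\n# incorrect system (1): m + 5 = n + 15, m + n = 15 + 5  ->  m - n = 10, m + n = 20\nwrong = np.linalg.solve([[1, -1], [1, 1]], [10, 20])\n\nprint(correct, wrong)  # both [15. 5.], so solution accuracy cannot tell them apart",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},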
|
{ |
|
"text": "The lack of annotated derivations for word problems and no clear definition for comparing derivations present technical difficulties in using derivation for evaluation. In this paper, we address these difficulties and for the first time propose to evaluate the solvers using derivation accuracy. To summarize, the contributions of this paper are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We point out that evaluating using derivations is more precise compared to existing metrics. Moreover, contrary to popular belief, there is a meaningful gap between the derivation accuracy and existing metrics, as it can discover crucial errors not captured previously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We formally define when two derivations are equivalent, and develop an algorithm that can determine the same. The algorithm is simple 2 Note that an approximation of the derivation is necessary, as there is no annotated derivation. From the brief description in their paper and the code released by , we found that their implementation assumes that the first derivation that matches the equations and generates the correct solution is the correct reference derivation against which predicted derivations are then evaluated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "x We are mixing a solution of 32% sodium and another solution of 12% sodium. How many liters of 32% and 12% solution will produce 50 liters of a 20% sodium solution? Textual Numbers Q(x) {321, 121, 322, 122, 50, 20} Equation System y 32m + 12n = 20 * 50, m + n = 50 Solution m = 20, n = 30 to implement, and can accurately detect the equivalence even if two derivations have very different syntactic forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Problem", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Template T Am + Bn = C * D, m + n = C Coefficients C(T ) A, B, C, D Alignments A {321 \u2192 A, 121 \u2192 B, 50 \u2192 C, 20 \u2192 D} EquivTNum {[321, 322], [121, 122]} Derivation z (T, A)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Problem", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We annotated over 2300 word algebra problems 3 with detailed derivation annotations, providing high quality labeled semantic parses for evaluating word problems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Problem", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We describe our notation and revisit the notion of derivation introduced in . We then formalize the notion of derivation equivalence and provide an algorithm to determine it. Table 1 shows our notation, where our proposed annotations are shown in bold. We denote a word problem by x and an equation system by y. An un-grounded equation system (or template) T is a family of equation systems parameterized by a set of coefficients C(T ) = {c i } k i=1 , where each coefficient c i aligns to a textual number (e.g., four) in the word problem. We also refer to the coefficients as slots of the template. We use (A, B, C, . . .) to represents coefficients and (m, n, . . .) to represent the unknown variables in the templates.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 182, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Derivations", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Let Q(x) be the set of all the textual numbers in the problem x, and C(T ) be the coefficients to be determined in the template T . An alignment is a set of tuples A = {(q, c) | q \u2208 Q(x), c \u2208 C(T ) \u222a { }} aligning textual numbers to coefficient slots, where a tuple (q, ) indicates that the number q is not relevant to the final equation system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure of Derivation The word problem in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that there may be multiple semantically equivalent textual numbers. e.g., in Figure 1 , either of the 32 can be aligned to coefficient slot A in the template. These equivalent textual numbers are marked in the EquivTNum field in the annotation. If two textual numbers q, q \u2208 EquivTNum, then we can align a coefficient slot to either q or q , and generate a equivalent alignment.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 90, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Structure of Derivation The word problem in", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An alignment A and a template T together identify a derivation z = (T, A) of an equation system. Note that there may be multiple valid derivations, using one of the equivalent alignments. We assume there exists a routine Solve(y) that find the solution of an equation system. We use a Gaussian elimination solver for our Solve routine. We use hand-written rules and the quantity normalizer in Stanford CoreNLP (Manning et al., 2014) to identify textual numbers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 410, |
|
"end": 432, |
|
"text": "(Manning et al., 2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure of Derivation The word problem in", |
|
"sec_num": null |
|
}, |
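{

"text": "As a concrete stand-in for the Solve routine (an assumption on our part, not the released implementation), a standard linear-algebra library suffices:\n\nimport numpy as np\n\ndef solve(coeffs, constants):\n    # solve the grounded linear system by Gaussian elimination; None if singular\n    try:\n        return np.linalg.solve(np.asarray(coeffs, float), np.asarray(constants, float))\n    except np.linalg.LinAlgError:\n        return None\n\n# the Table 1 system: 32m + 12n = 20 * 50, m + n = 50\nprint(solve([[32, 12], [1, 1]], [20 * 50, 50]))  # [20. 30.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Structure of Derivation",

"sec_num": null

},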
|
{ |
|
"text": "We define two derivations (T 1 , A 1 ) and (T 2 , A 2 ) to be equivalent iff the corresponding templates T 1 , T 2 and alignments A 1 , A 2 are equivalent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Intuitively, two templates T 1 , T 2 are equivalent if they can generate the same space of equation systems -i.e., for every assignment of values to slots of T 1 , there exists an assignment of values to slots of T 2 such that they generate the same equation systems. For instance, template (2) and (3) below are equivalent", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "m = A + Bn m = C \u2212 n (2) m + n = A m \u2212 Cn = B.", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "because after renaming (A, B, C) to (B, C, A) respectively in template (2), and algebraic manipulations, it is identical to template (3). We can see that any assignment of values to corresponding slots will result in the same equation system. Similarly, two alignments A 1 and A 2 are equivalent if corresponding slots from each template align to the same textual number. For the above example, the alignment {1 \u2192 A, 3 \u2192 B, 4 \u2192 C} in template (2), and alignment {1 \u2192 B, 3 \u2192 C, 4 \u2192 A} in template (3) are equivalent. Note that the alignment {1 \u2192 A, 3 \u2192 B, 4 \u2192 C} for (2) is not equivalent to {1 \u2192 A, 3 \u2192 B, 4 \u2192 C} in (3), because it does not respect variable renaming. Our definition also allows two alignments to be Algorithm 1 Evaluating Derivation Input: Predicted (Tp, Ap) and gold (Tg, Ag) derivation Output: 1 if predicted derivation is correct, 0 otherwise 1: if |C(Tp)| = |C(Tg)| then different # of coeff. slots 2:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "return 0 3: end if 4: \u0393 \u2190 TEMPLEQUIV(Tp,Tg) 5: if \u0393 = \u2205 then not equivalent templates 6: return 0 7: end if 8: if ALIGNEQUIV(\u0393, Ap, Ag) then Check alignments 9: return 1 10: end if 11: return 0 12: 13: procedure TEMPLEQUIV(T1, T2) 14:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that here |C(T1)| = |C(T2)| holds 15:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u0393 \u2190 \u2205 16:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for each 1-to-1 mapping \u03b3 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "C(T1) \u2192 C(T2) do 17: match \u2190 True 18: for t = 1 \u2022 \u2022 \u2022 R do R : Rounds 19:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Generate random vector v 20:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A1 \u2190 {(vi \u2192 ci)},A2 \u2190 {(vi \u2192 \u03b3(ci))} 21:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "if Solve(T1, A1) = Solve(T2, A2) then 22: match \u2190 False; break 23: end if 24:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "end for 25:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "if match then \u0393 \u2190 \u0393 \u222a {\u03b3} 26:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "end for 27:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "return \u0393 \u0393 = \u2205 iff the templates are equivalent 28: end procedure 29: 30: procedure ALIGNEQUIV(\u0393, A1, A2) 31:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for mapping \u03b3 \u2208 \u0393 do 32:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "if following holds true,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(q, c) \u2208 A1 \u21d0\u21d2 {(q, \u03b3(c)) or (q , \u03b3(c))} \u2208 A2 33:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where (q , q) \u2208 EquivTNum 34:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "then return 1 35: end if 36: end for 37: return 0 38: end procedure equivalent, if they use textual numbers in equivalent positions for corresponding slots (as described by EquivTNum field).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
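{

"text": "The following is a compact Python sketch of Algorithm 1 (our reading of the pseudocode above, not released code). Here a template is a function mapping a slot-to-value assignment to the (matrix, constants) pair of a linear system:\n\nimport itertools, random\nimport numpy as np\n\nR = 10  # rounds of random coefficient assignments\n\ndef solve_template(t, assignment):\n    A, b = t(assignment)\n    try:\n        return np.linalg.solve(A, b)\n    except np.linalg.LinAlgError:\n        return None\n\ndef agree_once(t1, t2, slots1, gamma):\n    # one round: give corresponding slots the same random values, compare solutions\n    v = {c: random.uniform(1.0, 10.0) for c in slots1}\n    s1 = solve_template(t1, v)\n    s2 = solve_template(t2, {gamma[c]: v[c] for c in slots1})\n    return s1 is not None and s2 is not None and np.allclose(s1, s2)\n\ndef templ_equiv(t1, slots1, t2, slots2):\n    # all 1-to-1 slot renamings gamma under which t1 and t2 agree (w.h.p.)\n    gammas = []\n    for perm in itertools.permutations(slots2):\n        gamma = dict(zip(slots1, perm))\n        if all(agree_once(t1, t2, slots1, gamma) for _ in range(R)):\n            gammas.append(gamma)\n    return gammas\n\ndef align_equiv(gammas, a1, a2, equiv_tnum):\n    # a1, a2 map textual numbers to slots; equiv_tnum maps q to its equivalent numbers\n    for gamma in gammas:\n        if all(a2.get(q) == gamma[c] or\n               any(a2.get(p) == gamma[c] for p in equiv_tnum.get(q, ()))\n               for q, c in a1.items()):\n            return True\n    return False",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Derivation Equivalence",

"sec_num": null

},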
|
{ |
|
"text": "In the following, we carefully explain how template and alignment equivalence are determined algorithmically. Algorithm 1 shows the complete algorithm for comparing two derivations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Derivation Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We propose an approximate procedure TEMPLEQUIV (line 13) that detects equivalence between two templates. The procedure relies on the fact that under appropriate renaming of coefficients, two equivalent templates will generate equations which have the same solutions, for all possible coefficient assignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For two templates T 1 and T 2 , with the same number of coefficients |C(T 1 )| = |C(T 2 )|, we represent a choice of renaming coefficients by \u03b3, a 1-to-1 mapping from C(T 1 ) to C(T 2 ). The two templates are equivalent if there exists a \u03b3 such that solutions of the equations identified by T 1 and T 2 are same, for all possible coefficient assignments. The TEMPLEQUIV procedure exhaustively tries all possible renaming of coefficients (line 16), checking if the solutions of the equation systems generated from a random assignment (line 19) match exactly. It declares equivalence if for a renaming \u03b3, the solutions match for R = 10 such random assignments. 4 The procedure returns all renamings \u0393 of coefficients between two templates under which they are equivalent (line 27). We discuss its effectiveness in \u00a73.", |
|
"cite_spans": [ |
|
{ |
|
"start": 659, |
|
"end": 660, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
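{

"text": "As an illustration, reusing templ_equiv from the sketch above on templates (2) and (3) recovers exactly the renaming (A, B, C) \u2192 (B, C, A) discussed earlier (the encoding of each template as a linear system is ours):\n\nimport numpy as np\n\n# (2): m = A + Bn, m = C - n   rewritten as   m - Bn = A, m + n = C\nt_two = lambda v: (np.array([[1.0, -v['B']], [1.0, 1.0]]), np.array([v['A'], v['C']]))\n# (3): m + n = A, m - Cn = B\nt_three = lambda v: (np.array([[1.0, 1.0], [1.0, -v['C']]]), np.array([v['A'], v['B']]))\n\nprint(templ_equiv(t_two, ['A', 'B', 'C'], t_three, ['A', 'B', 'C']))\n# [{'A': 'B', 'B': 'C', 'C': 'A'}]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Template Equivalence",

"sec_num": null

},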
|
{ |
|
"text": "Alignment Equivalence The TEMPLEQUIV procedure returns every mapping \u03b3 in \u0393 under which the templates were equivalent (line 4). Recall that \u03b3 identifies corresponding slots, c and \u03b3(c), in T 1 and T 2 respectively. We describe alignment equivalence using these mappings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Two alignments A 1 and A 2 are equivalent if corresponding slots (according to \u03b3) align to the same textual number. More formally, if we find a mapping \u03b3 such that for each tuple (q, c) in A 1 there is (q, \u03b3(c)) in A 2 , then the alignments are equivalent (line 33). We allow for equivalent textual numbers (as identified by EquivTNum field) to match when comparing tuples in alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The proof of correctness of Algorithm 1 is sketched in the appendix. Using Algorithm 1, we can define derivation accuracy, to be 1 if the predicted derivation (T p , A p ) and the reference derivation (T g , A g ) are equivalent, and 0 otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
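{

"text": "On top of the Algorithm 1 sketch, derivation accuracy is then a thin wrapper (again our own illustration):\n\ndef derivation_accuracy(pred, gold, equiv_tnum):\n    # pred/gold are ((template_fn, slots), alignment) pairs, as in the earlier sketch\n    (t_p, slots_p), a_p = pred\n    (t_g, slots_g), a_g = gold\n    if len(slots_p) != len(slots_g):\n        return 0\n    gammas = templ_equiv(t_g, slots_g, t_p, slots_p)\n    return int(bool(gammas) and align_equiv(gammas, a_g, a_p, equiv_tnum))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Template Equivalence",

"sec_num": null

},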
|
{ |
|
"text": "Properties of Derivation Accuracy By comparing derivations, we can ensure that the following errors are detected by the evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Firstly, correct solutions found using incorrect equations will be penalized, as the template used will not be equivalent to reference template. Secondly, correct equation system obtained by an incorrect template will also be penalized for the same reason. Lastly, if the solver uses the correct template to get the correct equation system, but aligns the wrong number to a slot, the alignment will not be equivalent to the reference alignment, and the solver will be penalized too.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We will see some illustrative examples of above errors in \u00a75.3. Note that the currently popular evaluation metric of solution accuracy will not detect any of these error types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Template Equivalence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As none of the existing benchmarks contain derivation annotations, we decided to augment existing datasets with these annotations. We also annotated DRAW-1K, a new dataset of 1000 general algebra word problems to make our study more comprehensive. Below, we describe how we reduced annotation effort by semi-automatic generated some annotations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Derivations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Annotating gold derivations from scratch for all problems is time consuming. However, not all word problems require manual annotation -sometimes all numbers appearing in the equation system can be uniquely aligned to a textual number without ambiguity. For such problems, the annotations are generated automatically. 5 We identify word problems which have at least one alignment ambiguity -multiple textual numbers with the same value, which appears in the equation system. A example of such a problem is shown in Figure 1 , where there are three textual numbers with value 5, which appears in the equation system. Statistics for the number of word problems with such ambiguity is shown in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 318, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 522, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 690, |
|
"end": 697, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotating Derivations", |
|
"sec_num": "3" |
|
}, |
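{

"text": "The ambiguity test itself is mechanical; a small illustrative helper (names are ours) is:\n\nfrom collections import Counter\n\ndef has_alignment_ambiguity(textual_numbers, equation_numbers):\n    # ambiguous iff some value used in the equations occurs more than once in the text\n    counts = Counter(textual_numbers)\n    return any(counts[v] > 1 for v in equation_numbers)\n\n# Figure 1: three textual 5s, and 5 appears in the equation system\nprint(has_alignment_ambiguity([5, 15, 5, 5, 100], {5, 15, 100}))  # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotating Derivations",

"sec_num": "3"

},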
|
{ |
|
"text": "We only ask annotators to resolve such alignment ambiguities, instead of annotating the entire derivation. If more than one alignments are genuinely correct (as in word problem of Table 1 ), we ask the annotators to mark both (using the Equiv-TNum field). This ensures our derivation annotations are exhaustive -all correct derivations are marked. With the correct alignment annotations, templates for all problems can be easily induced.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 187, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotating Derivations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Annotation Effort To estimate the effort required to annotate derivations, we timed our annotators when annotating 50 word problems (all involved alignment ambiguities). As a control, we also asked annotators to annotate the entire derivation from scratch (i.e., only provided with the word problem and equations), instead of only fixing alignment ambiguities. When annotating from scratch, annotators took an average of 4 minute per word problem, while when fixing alignment ambiguities this time dropped to average of 1 minute per word problem. We attained a inter-annotator agreement of 92% (raw percentage agreement), with most disagreements arising on EquivTNum field. 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotating Derivations", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Reconciling Equivalent Templates The number of templates has been used as a measure of dataset diversity (Shi et al., 2015; Huang et al., 2016) , however prior work did not reconcile the equivalent templates in the dataset. Indeed, if two templates are equivalent, we can replace one with the other and still generate the correct equations. Therefore, after getting human judgements on alignments, we reconcile all the templates using TEMPLEQUIV as the final step of annotation. TEMPLEQUIV is quite effective (despite being approximate), reducing the number of templates by at least 20% for all datasets (Table 2) . We did not find any false positives generated by the TEMPLEQUIV in our manual examination. The reduction in Table 2 clearly indicates that equivalent templates are quite common in all datasets, and number of templates (and hence, dataset diversity) can be significantly overestimated without proper reconciliation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 123, |
|
"text": "(Shi et al., 2015;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 143, |
|
"text": "Huang et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 604, |
|
"end": 613, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 731, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotating Derivations", |
|
"sec_num": "3" |
|
}, |
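{

"text": "Reconciliation can be sketched as greedy deduplication on top of templ_equiv from Section 2 (a simplification of the final annotation step, in our own code):\n\ndef reconcile(templates):\n    # templates: list of (template_fn, slots); keep one representative per equivalence class\n    reps = []\n    for t, slots in templates:\n        if not any(len(s) == len(slots) and templ_equiv(t, slots, r, s)\n                   for r, s in reps):\n            reps.append((t, slots))\n    return reps",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotating Derivations",

"sec_num": "3"

},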
|
{ |
|
"text": "We describe the three datasets used in our experiments. Statistics comparing the datasets is shown in Table 2 . In total, our experiments involve over 2300 word problems.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 109, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Alg-514 The dataset ALG-514 was introduced in . It consists of 514 general algebra word problems ranging over a variety of narrative scenarios (distance-speed, object counting, simple interest, etc.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Dolphin-L DOLPHIN-L is the linear-T2 subset of the DOLPHIN dataset (Shi et al., 2015) , which focuses on number word problems -algebra word problems which describe mathematical relationships directly in the text. All word problems in the linear-T2 subset of the DOLPHIN dataset can be solved using linear equations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 85, |
|
"text": "(Shi et al., 2015)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Diverse Algebra Word (DRAW-1K), consists of 1000 word problems crawled from algebra.com. Details on the dataset creation can be found in the appendix. As ALG-514 was also crawled from algebra.com, we ensured that there is little overlap between the datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DRAW-1K", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We randomly split DRAW-1K into train, development and test splits with 600, 200, 200 problems respectively. We use 5-fold cross validation splits provided by the authors for DOLPHIN-L and ALG-514.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DRAW-1K", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We compare derivation accuracy against the following evaluation metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We compute solution accuracy by checking if each number in the reference solution appears in the generated solution (disregarding order), following previous work Shi et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 179, |
|
"text": "Shi et al., 2015)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solution Accuracy", |
|
"sec_num": null |
|
}, |
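{

"text": "One way to implement this metric (the exact tolerance convention is our assumption) is to compare the two solutions as sorted lists:\n\ndef solution_accuracy(pred, gold, tol=1e-6):\n    if pred is None or len(pred) != len(gold):\n        return 0\n    return int(all(abs(p - g) <= tol\n                   for p, g in zip(sorted(pred), sorted(gold))))\n\nprint(solution_accuracy([30.0, 20.0], [20, 30]))  # 1, order is disregarded",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Solution Accuracy",

"sec_num": null

},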
|
{ |
|
"text": "Equation Accuracy An approximation of derivation accuracy that is similar to the one used in . We approximate the reference derivationz by randomly chosen from the (several possible) derivations which lead to the gold y from x. Derivation accuracy is computed against this (possibly incorrect) reference derivation. Note that in equation accuracy, the approximation is used instead of annotated derivation. We include the metric of equation accuracy in our evaluations to show that human annotated derivation is necessary, as approximation made by equation accuracy might be problematic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Solution Accuracy", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We train a solver using a simple modeling approach inspired by and Zhou et al. (2015) . The solver operates as follows. Given a word problem, the solver ranks all templates seen during training, \u0393 train , and selects the set of the top-k (we use k = 10) templates \u03a0 \u2282 \u0393 train . Next, all possible derivations D(\u03a0) that use a template from \u03a0 are generated Setting Soln. Acc. Eqn. Acc. Deriv. Acc. Table 3 : TE and TD compared using different evaluation metrics. Note that while TD is clearly superior to TE due to extra supervision using the annotations, only derivation accuracy is able to correctly reflect the differences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 85, |
|
"text": "Zhou et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 396, |
|
"end": 403, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Our Solver", |
|
"sec_num": "4.2" |
|
}, |
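{

"text": "Schematically, the inference loop reads as follows (our paraphrase; rank, score, and enumerate_alignments are hypothetical stand-ins for the learned template ranker, the derivation scorer, and alignment enumeration):\n\ndef predict(problem, train_templates, rank, score, enumerate_alignments, k=10):\n    # rank the templates seen in training and keep the top-k\n    top_k = sorted(train_templates, key=lambda t: rank(problem, t), reverse=True)[:k]\n    # enumerate and score every derivation over the shortlisted templates\n    derivations = [(t, a) for t in top_k for a in enumerate_alignments(problem, t)]\n    return max(derivations, key=lambda z: score(problem, z))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Our Solver",

"sec_num": "4.2"

},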
|
{ |
|
"text": "and scored. The equation system\u0177 identified by highest scoring derivation\u1e91 is output as the prediction. Following (Zhou et al., 2015) , we do not model the alignment of nouns phrases to variables, allowing for tractable inference when scoring the generated derivations. The solver is trained using a structured perceptron (Collins, 2002) . We extract the following features for a (x, z) pair, Alignment Tuple Features. For two alignment tuples, (q 1 , c 1 ), (q 2 , c 2 ), we add features indicating whether c 1 and c 2 belong to the same equation in the template or share the same variable. If they belong to the same sentence, we also add lemmas of the nouns and verbs between q 1 and q 2 in x.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 133, |
|
"text": "(Zhou et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 337, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Solver", |
|
"sec_num": "4.2" |
|
}, |
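{

"text": "An illustrative fragment of the alignment tuple features (the exact encoding is ours, not the actual implementation):\n\ndef alignment_pair_features(c1_eq, c2_eq, c1_var, c2_var, between_lemmas):\n    # c*_eq: index of the equation containing each slot; c*_var: variable each slot multiplies\n    feats = {\n        'same_equation': int(c1_eq == c2_eq),\n        'same_variable': int(c1_var == c2_var),\n    }\n    if between_lemmas is not None:  # q_1 and q_2 share a sentence\n        for lemma in between_lemmas:  # noun/verb lemmas between q_1 and q_2\n            feats['lemma=' + lemma] = 1\n    return feats\n\nprint(alignment_pair_features(0, 0, 'm', 'n', ['buy', 'apple']))\n# {'same_equation': 1, 'same_variable': 0, 'lemma=buy': 1, 'lemma=apple': 1}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Our Solver",

"sec_num": "4.2"

},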
|
{ |
|
"text": "Solution Features. Features indicating if the solution of the system identified by the derivation are integer, negative, non-negative or fractional. Zhou et al., 2015) , the solver finds a derivation which agrees with the equation system and the solution, and trains on it.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 167, |
|
"text": "Zhou et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Solver", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Note that the derivation found by the solver may be incorrect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "TD (TRAIN ON DERIVATION) (x, z) pairs obtained by the derivation annotation are used as supervision. This setting trains the solver on humanlabeled derivations. Clearly, the TD setting is a more informative supervision strategy than the TE setting. TD provides the correct template and correct alignment (i.e. labeled derivation) as supervision and is expected to perform better than TE, which only provides the question-equation pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We first present the main results comparing different evaluation metrics on solvers trained using the two settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We compare the evaluation metrics in Table 3 . We want to determine to what degree each evaluation metric reflects the superiority of TD over TE.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 44, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We note that solution accuracy always exceeds derivation accuracy, as a solver can sometimes get the right solutions even with the wrong derivation. Also, solution accuracy is not as sensitive as derivation accuracy to improvements in the solver. For instance, solution accuracy only changes by 2.4 on Dolphin-L when comparing TE and TD, whereas derivation accuracy changes by 10.7 points. We found that the large gap on Dolphin-L was due to several alignment errors in the predicted derivations, which were detected by derivation accuracy. Recall that over 35% of the problems in Dolphin-L have alignment ambiguities (Table 2 ). In the TD setting, many of these errors made by our solver were corrected as the gold alignment was part of supervision.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 618, |
|
"end": 626, |
|
"text": "(Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Equation accuracy too has several limitations. For DRAW-1K, it cannot determine which solver is better and assigns them the same score. Furthermore, it often (incorrectly) considers TD to be a worse setting than TE, as evident from decrease in the scores (for instance, on DOLPHIN-L). Recall that equation accuracy attempts to approximate derivation accuracy by choosing a random derivation agreeing with the equations, which might be incorrect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Study with Combining Datasets With several ongoing annotation efforts, it is a natural question to ask is whether we can leverage multiple datasets in training to generalize better. In Table 4, we combine DRAW-1K's train split with other datasets, and test on DRAW-1K's test split. DRAW-1K's test split was chosen as it is the largest test split with general algebra problems (recall Dolphin-L contains only number word problems).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We found that in this setting, it was important to reconcile the templates across datasets. Indeed, when we simply combine the two datasets in the TE setting, we notice a sharp drop in performance (compared to Table 3 ). However, if we reconciled all templates and then used the new equations for training (called TE * setting in Table 4 ), we were able to see improvements from training on more data. We suspect difference in annotation style led to several equivalent templates in the combined dataset, which got resolved in TE * . Therefore, in Table 4 , we compare TE * and TD settings. 7 In Table 4 , a trend similar to Table 3 can be observed -solution accuracy assigns a small improvement to TD over TE * . Derivation accuracy clearly reflects the fact that TD is superior to TE * , with a larger improvement compared to solution accuracy (eg., 5.5 vs 1.5). Equation accuracy, as before, considers TD to be worse than TE * .", |
|
"cite_spans": [ |
|
{ |
|
"start": 591, |
|
"end": 592, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 217, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 337, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 555, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 603, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 632, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Note that this experiment also shows that differences in annotation styles across different algebra problem datasets can lead to poor performance when combining these datasets naively. Our findings suggest that derivation annotation and template reconciliation are crucial for such multi-data supervision scenarios.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Results", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To ensure that the results in the previous section were not an artifact of any limitations of our solver, we show here that our solver is competitive to other state-of-the-art solvers, and therefore it is reasonable to assume that similar results can be obtained with other automatic solvers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Solvers", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In Table 5 , we compare our solver to KAZB, the system of , when trained under the existing supervision paradigm, TE (i.e., training on equations) and evaluated using solution accuracy. We also report the best scores on each dataset, using ZDC and SWLLR to denote the systems of Zhou et al. (2015) and Shi et al. (2015) respectively. Note that our system and KAZB are the only systems that can process all three datasets without significant modification, with our solver being clearly superior to KAZB.", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 297, |
|
"text": "Zhou et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 319, |
|
"text": "Shi et al. (2015)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Solvers", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We discuss some interesting examples from the datasets, to show the limitations of existing metrics, which derivation accuracy overcomes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Correct Solution, Incorrect Equation In the following example from the DOLPHIN-L dataset, by choosing the correct template and the wrong alignments, the solver arrived at the correct solutions, and gets rewarded by solution accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The sum of 2(q1) numbers is 25(q2). 12(q3) less than 4(q4) times one(q5) of the numbers is 16(q6) more than twice(q7) the other number.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Find the numbers. \u2021 SWLLR also had a solver which achieves 68.0, using over 9000 semi-automatically generated rules tailored to number word problems. We compare to their similarity based solver instead, which does not use any such rules, given that the rulebased system cannot be applied to general word problems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Note that there are seven textual numbers (q 1 , . . . , q 7 ) in the word problem. We can arrive at the correct equations ({m + n = 25, 4m \u2212 2n = 16 + 12}), by the correct derivation, m + n = q 2 q 4 m \u2212 q 7 n = q 6 + q 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "However, the solver found the following derivation, which produces the incorrect equations ({m + n = 25, 2m \u2212 n = 2 + 12}),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "m + n = q 2 q 1 m \u2212 q 5 n = q 7 + q 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Both the equations have the same solutions (m = 13, n = 12), but the second derivation is clearly using incorrect reasoning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
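{

"text": "A quick numeric check of the above (our own) confirms that both systems have the same solution, so solution accuracy cannot separate them:\n\nimport numpy as np\n\n# correct: m + n = 25, 4m - 2n = 16 + 12\nprint(np.linalg.solve([[1, 1], [4, -2]], [25, 28]))  # [13. 12.]\n# incorrect: m + n = 25, 2m - n = 2 + 12\nprint(np.linalg.solve([[1, 1], [2, -1]], [25, 14]))  # [13. 12.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Case Study",

"sec_num": "5.3"

},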
|
{ |
|
"text": "Correct Equation, Incorrect Alignment In such cases, the solver gets the right equation system, but derived it using wrong alignment. Solution accuracy still rewards the solver. Consider the problem from the DOLPHIN-L dataset,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The larger of two(q1) numbers is 2(q2) more than 4(q3) times the smaller. Their sum is 67(q4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case Study", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The correct derivation for this problem is, m \u2212 q 3 n = q 2 m + n = q 4 . However, our system generated the following derivation, which although results in the exact same equation system (and thus same solutions), is clearly incorrect due incorrect choice of \"two\", m \u2212 q 3 n = q 1 m + n = q 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Find the numbers.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that derivation accuracy will penalize the solver, as the alignment is not equivalent to the reference alignment (q 1 and q 2 are not semantically equivalent textual numbers). The correct derivation is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Find the numbers.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "q 1 m + q 2 n = q 3 q 4 m + q 5 n = q 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Find the numbers.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, we found that equation accuracy used the following incorrect derivation for evaluation,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Find the numbers.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "q 1 m + q 2 n = q 3 q 2 m + q 5 n = q 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Find the numbers.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note while this derivation does generate the correct equation system and solutions, the derivation utilizes the wrong numbers and misunderstood the word problem. This example demonstrates the needs to evaluate the quality of the word problem solvers using the annotated derivations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Find the numbers.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We discuss several aspects of previous work in the literature, and how it relates to our study.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Existing Solvers Current solvers for this task can be divided into two broad categories based on their inference approach -template-first and bottom-up. Template-first approaches like Zhou et al., 2015) infer the derivation z = (T, A) sequentially. They first predict the template T and then predict alignments A from textual numbers to coefficients. In contrast, bottom-up approaches (Hosseini et al., 2014; Shi et al., 2015; Koncel-Kedziorski et al., 2015) jointly infer the derivation z = (T, A). Inference proceeds by identifying parts of the template (eg. Am + Bn) and aligning numbers to it ({2 \u2192 A, 3 \u2192 B}). At any intermediate state during inference, we have a partial derivation, describing a fragment of the final equation system (2m + 3n).", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 202, |
|
"text": "Zhou et al., 2015)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 408, |
|
"text": "(Hosseini et al., 2014;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 426, |
|
"text": "Shi et al., 2015;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 458, |
|
"text": "Koncel-Kedziorski et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "While our experiments used a solver employing the template-first approach, it is evident that performing inference in all such solvers requires constructing a derivation z = (T, A). Therefore, annotated derivations will be useful for evaluating all such solvers, and may also aid in debugging errors. Other reconciliation procedures are also discussed (though briefly) in earlier work. reconciled templates by using a symbolic solver and removing pairs with the same canonicalized form. Zhou et al. (2015) also reconciled templates, but do not describe how it was performed. We showed that reconciliation is important for correct evaluation, for reporting dataset complexity, and also when combining multiple datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 487, |
|
"end": 505, |
|
"text": "Zhou et al. (2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Labeling Semantic Parses Similar to our work, efforts have been made to annotate semantic parses for other tasks, although primarily for providing supervision. Prior to the works of Liang et al. (2009) and Clarke et al. (2010) , semantic parsers were trained using annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007, inter alia) , which were expensive to annotate. Recently, Yih et al. (2016) showed that labeled semantic parses for the knowledge based question answering task can be obtained at a cost comparable to obtaining answers. They showed significant improvements in performance of a questionanswering system using the labeled parses instead of answers for training. More recently, by treating word problems as a semantic parsing task, Upadhyay et al. (2016) found that joint learning using both explicit (derivation as labeled semantic parses) and implicit supervision signals (solution as responses) can significantly outperform models trained using only one type of supervision signal.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 201, |
|
"text": "Liang et al. (2009)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 226, |
|
"text": "Clarke et al. (2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 313, |
|
"text": "(Zelle and Mooney, 1996;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 344, |
|
"text": "Zettlemoyer and Collins, 2005;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 379, |
|
"text": "Wong and Mooney, 2007, inter alia)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 443, |
|
"text": "Recently, Yih et al. (2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 818, |
|
"text": "Upadhyay et al. (2016)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Other Semantic Parsing Tasks We demonstrated that response-based evaluation, which is quite popular for most semantic parsing problems (Zelle and Mooney, 1996; Berant et al., 2013; Liang et al., 2011 , inter alia) can overlook reasoning errors for algebra problems. A reason for this is that in algebra word problems there can be several semantic parses (i.e., derivations, both correct and incorrect) that can lead to the correct solution using the input (i.e., textual number in word problem). This is not the case for semantic parsing problems like knowledge based question answering, as correct semantic parse can often be identified given the question and the answer. For instance, paths in the knowledge base (KB), that connect the answer and the entities in the question can be interpreted as legitimate semantic parses. The KB therefore acts as a constraint which helps prune out possible semantic parses, given only the problem and the answer. However, such KB-based constraints are unavailable for algebra word problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 159, |
|
"text": "(Zelle and Mooney, 1996;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 180, |
|
"text": "Berant et al., 2013;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 199, |
|
"text": "Liang et al., 2011", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We proposed an algorithm for evaluating derivations for word problems. We also showed how derivation annotations can be easily obtained by only involving annotators for ambiguous cases. We augmented several existing benchmarks with derivation annotations to facilitate future comparisons. Our experiments with multiple datasets also provided insights into the right approach to combine datasets -a natural step in future work. Our main finding indicates that derivation accuracy leads to a more accurate assessment of algebra word problem solvers, finding errors which other metrics overlook. While we should strive to build such solvers using as little supervision as possible for training, having high quality annotated data is essential for correct evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The value of such annotations for evaluation becomes more immediate for online education scenarios, where such word solvers are likely to be used. Indeed, in these cases, merely arriving at the correct solution, by using incorrect reasoning may prove detrimental for teaching purposes. We believe derivation based evaluation closely mirrors how humans are evaluated in schools (by forcing solvers to show \"their work\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Our datasets with the derivation annotations have applications beyond accurate evaluation. For instance, certain solvers, like the one in (Roy and Roth, 2015) , train a relevance classifier to identify which textual numbers are relevant to solving the word problem. As we only annotate relevant numbers in our annotations, our datasets can provide high quality supervision for such classifiers. The datasets can also be used in evaluation test-beds, like the one proposed in (Koncel-Kedziorski et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 158, |
|
"text": "(Roy and Roth, 2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 507, |
|
"text": "(Koncel-Kedziorski et al., 2016)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We hope our datasets will open new possibilities for the community to simulate new ideas and applications for automatic problem solvers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Lemma 1. The procedure TEMPLEQUIV returns \u0393 = \u2205 iff templates T 1 , T 2 are equivalent (w.h.p.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Proof First we prove that with high probability we are correct in claiming that a \u03b3 found by the algorithm leads to equivalence. Let probability of getting the same solution even when the template are not equivalent be (T 1 , T 2 , \u03b3) < 1. The probability that solution is same for R rounds for T 1 , T 2 which are not equivalent is \u2264 R , which can be made arbitrarily small by choosing large R. Therefore, with a large enough R, obtaining \u0393 = \u2205 from TEMPLEQUIV implies there is a \u03b3 under which templates generate equations with the same solution, and by definition, are equivalent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
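The argument above suggests a direct implementation. Below is a minimal Python sketch of the randomized equivalence test, using sympy as the solver backend. The template encoding (a list of sympy equations over unknown symbols and coefficient slots, with equal slot counts across templates) and the names `templ_equiv` and `_same_solution` are our own illustration under stated assumptions, not the authors' released code.

```python
import itertools
import random

import sympy


def templ_equiv(T1, T2, slots1, slots2, unknowns1, unknowns2, R=10):
    """Return Gamma: every 1-1 slot mapping gamma under which T1 and T2
    produce equation systems with identical solutions on R random
    coefficient assignments. Gamma != [] implies equivalence (w.h.p.)."""
    Gamma = []
    for perm in itertools.permutations(slots2):  # assumes equal slot counts
        gamma = dict(zip(slots1, perm))          # candidate 1-1 mapping
        if all(_same_solution(T1, T2, gamma, unknowns1, unknowns2)
               for _ in range(R)):               # R independent rounds
            Gamma.append(gamma)
    return Gamma


def _same_solution(T1, T2, gamma, u1, u2):
    # One round: draw random values for T1's slots, copy them to T2's
    # slots via gamma, solve both systems, and compare the solution sets.
    vals = {s: sympy.Rational(random.randint(1, 50)) for s in gamma}
    sol1 = sympy.solve([eq.subs(vals) for eq in T1], u1, dict=True)
    sol2 = sympy.solve([eq.subs({gamma[s]: v for s, v in vals.items()})
                        for eq in T2], u2, dict=True)
    value_set = lambda sols, u: {tuple(sorted(sol[x] for x in u)) for sol in sols}
    return value_set(sol1, u1) == value_set(sol2, u2)
```

For instance, for the template of Figure 1 one would pass T1 = [sympy.Eq(A*m, B*n), sympy.Eq(C*m + D*n, E)] with slots1 = (A, B, C, D, E) and unknowns1 = (m, n), where all symbols are created via sympy.symbols.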
|
{ |
|
"text": "Conversely, if templates are equivalent, it implies \u2203\u03b3 * such that under that mapping for any assignment, the generated equations have the same solution. As we iterate over all possible 1-1 mappings \u03b3 between the two templates, we will find \u03b3 * eventually.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Proposition Algorithm 1 returning 1 implies derivations (T p , A p ) and (T g , A g ) are equivalent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Proof Algorithm returns 1 only if TEMPLEQUIV found a \u0393 = \u2205, and \u2203\u03b3 \u2208 \u0393, following holds (q, c) \u2208 A g \u21d0\u21d2 (q, \u03b3(c)) \u2208 A p i.e., the corresponding slots aligned to the same textual number. TEMPLEQUIV found a \u0393 = \u2205 implies templates are equivalent (w.h.p). Therefore, \u2203\u03b3 \u2208 \u0393 such that the corresponding slots aligned to the same textual number implies the alignments are equivalent under mapping \u03b3. Together they imply that the derivation was equivalent (w.h.p.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Discussion", |
|
"sec_num": "7" |
|
}, |
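Continuing the sketch above, the full check of the proposition then reduces to one set comparison per candidate mapping returned by `templ_equiv`. Again, the encoding of an alignment as a set of (textual-number, slot) pairs and the name `derivation_equiv` are illustrative assumptions.

```python
def derivation_equiv(Tp, Ap, Tg, Ag, slots_p, slots_g, up, ug, R=10):
    """Sketch of the final step of Algorithm 1: the derivations are
    equivalent (w.h.p.) if some gamma in Gamma maps the gold alignment
    Ag onto the predicted alignment Ap, i.e.
    (q, c) in Ag  <=>  (q, gamma(c)) in Ap."""
    for gamma in templ_equiv(Tg, Tp, slots_g, slots_p, ug, up, R):
        if {(q, gamma[c]) for (q, c) in Ag} == set(Ap):
            return True
    return False
```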
|
{ |
|
"text": "Also referred to as a template. We use these two terms interchangeably.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "available at https://aka.ms/datadraw", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that this procedure is a Monte-Carlo algorithm, and can be made more precise by increasing R. We found making R larger than 10 did not have an impact on the empirical results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Annotations for all problems are manually verified later.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These were adjudicated on by the first author.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In TE * , the model still trains only using equations, without access to derivations. So TD is still better than TE * .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In some cases, some of the numbers in the text are rephrased (\"10ml\" to \"10 ml\") in order to allow NLP pipeline work properly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
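A one-line normalization along the lines of this footnote might look as follows; the unit pattern and the name `split_number_unit` are illustrative assumptions, not the authors' preprocessing code.

```python
import re


def split_number_unit(text):
    # e.g. "10ml" -> "10 ml", so the tokenizer and number detector in
    # the NLP pipeline can recognize the quantity.
    return re.sub(r"(\d+)([A-Za-z]+)", r"\1 \2", text)
```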
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The first author was supported on a grant sponsored by DARPA under agreement number FA8750-13-2-0008. We would also like to thank Subhro Roy, Stephen Mayhew and Christos Christodoulopoulos for useful discussions and comments on earlier versions of the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We crawl over 100k problems from http:// algebra.com. The 100k word problems include some problems which require solving nonlinear equations (e.g. finding roots of quadratic equations). We filter out these problems using keyword matching. We also filter problems whose explanation do not contain a variable named \"x\". This leaves us with 12k word problems.Extracting Equations A word problem on algebra.com is accompanied by a detailed explanation provided by instructors. In our crawler, we use simple pattern matching rules to extract all the equations in the explanation. The problems often have sentences which are irrelevant to solving the word problem (e.g. \"Please help me, I am stuck.\"). During cleaning, the annotator removes such sentences from the final word problem and performs some minor editing if necessary. 8 1000 problems were randomly chosen from these pool of 12k problems, which were then shown to annotators as described earlier to get the derivation annotations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 824, |
|
"end": 825, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Creating DRAW-1K", |
|
"sec_num": null |
|
}, |
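The filtering and equation-extraction steps above could be realized roughly as in the Python sketch below. The actual keyword lists and patterns used to build DRAW-1K are not published, so `NONLINEAR_KEYWORDS`, `keep_problem`, and `extract_equations` are assumptions for the sake of illustration.

```python
import re

# Illustrative keyword list; the real filter is unspecified in the paper.
NONLINEAR_KEYWORDS = ("quadratic", "square root", "parabola")


def keep_problem(explanation):
    text = explanation.lower()
    if any(k in text for k in NONLINEAR_KEYWORDS):
        return False                       # drop nonlinear problems
    if not re.search(r"\bx\b", explanation):
        return False                       # require a variable named "x"
    return True


def extract_equations(explanation):
    # Pull "<lhs> = <rhs>" spans out of the instructor's worked solution.
    return [m.group(0).strip()
            for m in re.finditer(r"[^\n=]+=[^\n=]+", explanation)]
```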
|
{ |
|
"text": "For simplicity, we will assume that EquivTNum is empty. The proof can easily be extended to handle the more general situation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Proof of Correctness (Sketch)", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Semantic parsing on Freebase from question-answer pairs", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Chou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Frostig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1533--1544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533-1544, Seattle, Wash- ington, USA, October. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A question-answering system for high school algebra word problems", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Bobrow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1964, |
|
"venue": "Proceedings of the", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "591--614", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel G. Bobrow. 1964. A question-answering sys- tem for high school algebra word problems. In Pro- ceedings of the October 27-29, 1964, fall joint com- puter conference, part I, pages 591-614. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "My computer is an honor student but how intelligent is it? standardized tests as a measure of ai", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "AI Magazine", |
|
"volume": "37", |
|
"issue": "1", |
|
"pages": "5--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Clark and Oren Etzioni. 2016. My computer is an honor student but how intelligent is it? standard- ized tests as a measure of ai. AI Magazine, 37(1):5- 12.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Driving semantic parsing from the world's response", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Clarke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Goldwasser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In Proceedings of the Four- teenth Conference on Computational Natural Lan- guage Learning, pages 18-27, Uppsala, Sweden, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1-8. Associ- ation for Computational Linguistics, July.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Learning to solve arithmetic word problems with verb categorization", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad Javad", |
|
"middle": [], |
|
"last": "Hosseini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nate", |
|
"middle": [], |
|
"last": "Kushman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "523--533", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb catego- rization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 523-533, Doha, Qatar, Octo- ber. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "How well do computers solve math word problems? large-scale dataset construction and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Danqing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuming", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ying", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "887--896", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do comput- ers solve math word problems? large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 887-896, Berlin, Germany, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Analyticalink: An interactive learning environment for math word problem solving", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arun", |
|
"middle": [], |
|
"last": "Kulshreshth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Laviola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 21st International Conference on Intelligent User Interfaces", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "419--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Kang, Arun Kulshreshth, and Joseph J. LaViola Jr. 2016. Analyticalink: An interactive learning envi- ronment for math word problem solving. In Pro- ceedings of the 21st International Conference on In- telligent User Interfaces, pages 419-430. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Parsing algebraic word problems into equations", |
|
"authors": [ |
|
{ |
|
"first": "Rik", |
|
"middle": [], |
|
"last": "Koncel-Kedziorski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siena", |
|
"middle": [], |
|
"last": "Ang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "585--597", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Ang. 2015. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585-597.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Mawps: A math word problem repository", |
|
"authors": [ |
|
{ |
|
"first": "Rik", |
|
"middle": [], |
|
"last": "Koncel-Kedziorski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhro", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aida", |
|
"middle": [], |
|
"last": "Amini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nate", |
|
"middle": [], |
|
"last": "Kushman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1152--1157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 1152-1157, San Diego, California, June. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning to automatically solve algebra word problems", |
|
"authors": [ |
|
{ |
|
"first": "Nate", |
|
"middle": [], |
|
"last": "Kushman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "271--281", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 271-281, Baltimore, Maryland, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Learning semantic correspondences with less supervision", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "91--99", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang, Michael Jordan, and Dan Klein. 2009. Learning semantic correspondences with less super- vision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th In- ternational Joint Conference on Natural Language Processing of the AFNLP, pages 91-99, Suntec, Sin- gapore, August. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Learning dependency-based compositional semantics", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "590--599", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional seman- tics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 590-599, Port- land, Oregon, USA, June. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mc-Closky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proc. of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Proc. of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Report on a general problem-solving program", |
|
"authors": [ |
|
{ |
|
"first": "Allen", |
|
"middle": [], |
|
"last": "Newell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Shaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herbert", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Simon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1959, |
|
"venue": "IFIP Congress", |
|
"volume": "256", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allen Newell, John C. Shaw, and Herbert A. Simon. 1959. Report on a general problem-solving pro- gram. In IFIP Congress, volume 256, page 64.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Solving general arithmetic word problems", |
|
"authors": [ |
|
{ |
|
"first": "Subhro", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1743--1752", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1743-1752, Lisbon, Portugal, September. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automatically solving number word problems by semantic parsing and reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Shuming", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuehui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojiang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Rui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1132--1142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving number word problems by semantic parsing and rea- soning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 1132-1142, Lisbon, Portugal, September. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning from Explicit and Implicit Supervision Jointly For Algebra Word Problems", |
|
"authors": [ |
|
{ |
|
"first": "Shyam", |
|
"middle": [], |
|
"last": "Upadhyay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "297--306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shyam Upadhyay, Ming-Wei Chang, Kai-Wei Chang, and Wen-tau Yih. 2016. Learning from Explicit and Implicit Supervision Jointly For Algebra Word Problems. In Proceedings of EMNLP, pages 297- 306.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Learning synchronous grammars for semantic parsing with lambda calculus", |
|
"authors": [ |
|
{ |
|
"first": "Yuk", |
|
"middle": [ |
|
"Wah" |
|
], |
|
"last": "Wong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuk Wah Wong and Raymond Mooney. 2007. Learn- ing synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Annual Meeting of the Association of Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "960--967", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association of Computational Linguistics, pages 960-967, Prague, Czech Repub- lic, June. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The value of semantic parse labeling for knowledge base question answering", |
|
"authors": [ |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Richardson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Meek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jina", |
|
"middle": [], |
|
"last": "Suh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "201--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wen-tau Yih, Matthew Richardson, Chris Meek, Ming- Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base ques- tion answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 201-206, Berlin, Germany, August. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Learning to Parse Database Queries using Inductive Logic Proramming", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Zelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. M. Zelle and R. J. Mooney. 1996. Learning to Parse Database Queries using Inductive Logic Pro- ramming. In AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", |
|
"authors": [ |
|
{ |
|
"first": "Luke", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "UAI '05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "658--666", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Struc- tured classification with probabilistic categorial grammars. In UAI '05, Proceedings of the 21st Con- ference in Uncertainty in Artificial Intelligence, Ed- inburgh, Scotland, July 26-29, 2005, pages 658-666.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learn to solve algebra word problems using quadratic programming", |
|
"authors": [ |
|
{ |
|
"first": "Lipu", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuaixiang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liwei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "817--822", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 817-822, Lisbon, Portugal, September. Association for Computational Linguis- tics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "An algebra word problem with its solution, equation system and derivation. Evaluating solvers on derivation is more reliable than evaluating on solution or equation system, as it reveals errors that other metric overlook.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Unigrams and bigrams of lemmas and POS tags from the word problem x, conjoined with |Q(x)| and |C(T )|.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Bad Approx. in Equation Accuracy The following word problem is from the ALG-514 dataset:Mrs. Martin bought 3(q1) cups of coffee and 2(q2) bagels and spent 12.75(q3) dollars. Mr. Martin bought 2(q4) cups of coffee and 5(q5) bagels and spent 14.00(q6) dollars. Find the cost of one(q7) cup of coffee and that of one(q8) bagel.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table/>", |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"text": "Statistics of the datasets. At least 20% of problems in each dataset had alignment ambiguities that required human annotations. The number of templates before and after annotation is also shown (reduction > 20%).", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"text": "When combining two datasets, it is essential to reconcile templates across datasets. Here TE * denotes training on equations after reconciling the templates, while TE simply combines datasets naively. As TE * represents a more appropriate setting, we compare TE * and TD in this experiment.", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"text": "Comparison of our solver and other state-of-the-art systems, when trained under TE setting. All numbers are solution accuracy. See footnote for details on the comparison to SWLLR.", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |