{
"paper_id": "D16-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:36:34.156642Z"
},
"title": "Learning from Explicit and Implicit Supervision Jointly For Algebra Word Problems",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {
"settlement": "Urbana",
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"settlement": "Redmond",
"region": "WA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Virginia",
"location": {
"settlement": "Charlottesville",
"region": "VA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"settlement": "Redmond",
"region": "WA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatically solving algebra word problems has raised considerable interest recently. Existing state-of-the-art approaches mainly rely on learning from human annotated equations. In this paper, we demonstrate that it is possible to efficiently mine algebra problems and their numerical solutions with little to no manual effort. To leverage the mined dataset, we propose a novel structured-output learning algorithm that aims to learn from both explicit (e.g., equations) and implicit (e.g., solutions) supervision signals jointly. Enabled by this new algorithm, our model gains 4.6% absolute improvement in accuracy on the ALG-514 benchmark compared to the one without using implicit supervision. The final model also outperforms the current state-of-the-art approach by 3%.",
"pdf_parse": {
"paper_id": "D16-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatically solving algebra word problems has raised considerable interest recently. Existing state-of-the-art approaches mainly rely on learning from human annotated equations. In this paper, we demonstrate that it is possible to efficiently mine algebra problems and their numerical solutions with little to no manual effort. To leverage the mined dataset, we propose a novel structured-output learning algorithm that aims to learn from both explicit (e.g., equations) and implicit (e.g., solutions) supervision signals jointly. Enabled by this new algorithm, our model gains 4.6% absolute improvement in accuracy on the ALG-514 benchmark compared to the one without using implicit supervision. The final model also outperforms the current state-of-the-art approach by 3%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Algebra word problems express mathematical relationships via narratives set in a real-world scenario, such as the one below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Maria is now four times as old as Kate. Four years ago, Maria was six times as old as Kate. Find their ages now.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The desired output is an equation system which expresses the mathematical relationship symbolically: m = 4 \u00d7 n and m \u2212 4 = 6 \u00d7 (n \u2212 4) where m and n represent the age of Maria and Kate, respectively. The solution (i.e., m = 40, n = 10) can be found by a mathematical engine given the equation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
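{
"text": "As an illustration, a minimal sketch (assuming SymPy, which Section 5 reports the authors used as their mathematical engine) of how such an engine maps the equation system above to its numerical solution:\n# Hedged illustration only: solve the running example with SymPy.\nfrom sympy import symbols, Eq, solve\nm, n = symbols('m n')\nsystem = [Eq(m, 4 * n), Eq(m - 4, 6 * (n - 4))]\nprint(solve(system, [m, n]))  # -> {m: 40, n: 10}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},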
{
"text": "Building efficient automatic algebra word problem solvers have clear values for online education scenarios. The challenge itself also provides a good test bed for evaluating an intelligent agent that understands natural languages, a direction advocated by artificial intelligence researchers (Clark and Etzioni, 2016) .",
"cite_spans": [
{
"start": 292,
"end": 317,
"text": "(Clark and Etzioni, 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One key challenge of solving algebra word problems is the lack of fully annotated data (i.e., the annotated equation system associated with each problem). In contrast to annotating problems with binary or categorical labels, manually solving algebra word problems to provide correct equations is time consuming. As a result, existing benchmark datasets are small, limiting the performance of supervised learning approaches. However, thousands of algebra word problems have been posted and discussed in online forums, where the solutions can be easily mined, despite the fact that some of them could be incorrect. It is thus interesting to ask whether a better algebra problem solver can be learned by leveraging these noisy and implicit supervision signals, namely the solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we address the technical difficulty of leveraging implicit supervision in learning an algebra word problem solver. We argue that the effective strategy is to learn from both explicit and implicit supervision signals jointly. In particular, we design a novel online learning algorithm based on structured-output Perceptron. By taking both kinds of training signals together as input, the algorithm iteratively improves the model, while at the same time it uses the intermediate model to find candidate equation systems for problems with only numerical solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are summarized as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a novel learning algorithm (Section 3 and 4) that jointly learns from both explicit and implicit supervision. Under different settings, the proposed algorithm outperforms the existing supervised and weakly supervised algorithms (Section 6) for algebra word problems. \u2022 We mine the problem-solution pairs for algebra word problems from an online forum and show that we can effectively obtain the implicit supervision with little to no manual effort (Section 5). 1 \u2022 By leveraging both implicit and explicit supervision signals, our final solver outperforms the state-of-the-art system by 3% on ALG-514, a popular benchmark data set proposed by .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatically solving mathematical reasoning problems expressed in natural language has been a long-studied problem (Bobrow, 1964; Newell et al., 1959; Mukherjee and Garain, 2008) . Recently, created a template-base search procedure to map word problems into equations. Then, several following papers studied different aspects of the task: Hosseini et al. (2014) focused on improving the generalization ability of the solvers by leveraging extra annotations; Roy and Roth (2015) focused on how to solve arithmetic problems without using any pre-defined template.",
"cite_spans": [
{
"start": 116,
"end": 130,
"text": "(Bobrow, 1964;",
"ref_id": "BIBREF3"
},
{
"start": 131,
"end": 151,
"text": "Newell et al., 1959;",
"ref_id": "BIBREF19"
},
{
"start": 152,
"end": 179,
"text": "Mukherjee and Garain, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 340,
"end": 362,
"text": "Hosseini et al. (2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In (Shi et al., 2015) , the authors focused on number word problems and proposed a system that is created using semi-automatically generated rules. In Zhou et al. (2015) , the authors simplified the inference procedure and pushed the state-of-the-art benchmark accuracy. The idea of learning from implicit supervision is discussed in Zhou et al., 2015; Koncel-Kedziorski et al., 2015) , where the authors train the algebra solvers using only the solutions with little or no annoated equation systems. We discuss this in detail in Section 4.",
"cite_spans": [
{
"start": 3,
"end": 21,
"text": "(Shi et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 151,
"end": 169,
"text": "Zhou et al. (2015)",
"ref_id": "BIBREF31"
},
{
"start": 334,
"end": 352,
"text": "Zhou et al., 2015;",
"ref_id": "BIBREF31"
},
{
"start": 353,
"end": 384,
"text": "Koncel-Kedziorski et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Solving automatic algebra word problems can be viewed as a semantic parsing task. In the semantic parsing community, the technique of learning from implicit supervision signals has been applied (under the name response-driven learning (Clarke et al., 2010) ) to knowledge base question answering tasks such as Geoquery (Zelle and Mooney, 1996) and WebQuestions (Berant et al., 2013) or mapping instructions to actions (Artzi and Zettlemoyer, 2013) . In these tasks, researchers have shown that it is possible to train a semantic parser only from questionanswer pairs, such as \"What is the largest state bordering Texas?\" and \"New Mexico\" (Clarke et al., 2010; Yih et al., 2015) .",
"cite_spans": [
{
"start": 235,
"end": 256,
"text": "(Clarke et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 319,
"end": 343,
"text": "(Zelle and Mooney, 1996)",
"ref_id": "BIBREF29"
},
{
"start": 361,
"end": 382,
"text": "(Berant et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 418,
"end": 447,
"text": "(Artzi and Zettlemoyer, 2013)",
"ref_id": "BIBREF0"
},
{
"start": 638,
"end": 659,
"text": "(Clarke et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 660,
"end": 677,
"text": "Yih et al., 2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One key reason that such implicit supervision is effective is because the correct semantic parses of the questions can often be found using the answers and the knowledge base alone, with the help of heuristics developed for the specific domain. For instance, when the question is relatively simple and does not have complex compositional structure, paths in the knowledge graph that connect the answers and the entities in the narrative can be interpreted as legitimate semantic parses. However, as we will show in our experiments, learning from implicit supervision alone is not a viable strategy for algebra word problems. Compared to the knowledge base question answering problems, one key difference is that a large number (potentially infinitely many) of different equation systems could end up having the same solutions. Without a database or special rules for combining variables and coefficients, the number of candidate equation systems cannot be trimmed effectively, given only the solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "From the algorithmic point of view, our proposed learning framework is related to several lines of work. Similar efforts have been made to develop latent structured prediction models (Yu and Joachims, 2009; Chang et al., 2013; Zettlemoyer and Collins, 2007) to find latent semantic structures which best explain the answer given the question. Our algorithm is also influenced by the discriminative reranking algorithms (Collins, 2000; Ge and Mooney, 2006; Charniak and Johnson, 2005) and models for learning from intractable supervision (Steinhardt and Liang, 2015) .",
"cite_spans": [
{
"start": 183,
"end": 206,
"text": "(Yu and Joachims, 2009;",
"ref_id": "BIBREF28"
},
{
"start": 207,
"end": 226,
"text": "Chang et al., 2013;",
"ref_id": "BIBREF4"
},
{
"start": 227,
"end": 257,
"text": "Zettlemoyer and Collins, 2007)",
"ref_id": "BIBREF30"
},
{
"start": 419,
"end": 434,
"text": "(Collins, 2000;",
"ref_id": "BIBREF8"
},
{
"start": 435,
"end": 455,
"text": "Ge and Mooney, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 456,
"end": 483,
"text": "Charniak and Johnson, 2005)",
"ref_id": "BIBREF5"
},
{
"start": 537,
"end": 565,
"text": "(Steinhardt and Liang, 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, Huang et al. (2016) collected a large number of noisily annotated word problems from online forums. While they collected a large-scale dataset, unlike our work, they did not demonstrate how to utilize the newly crawled dataset to improve existing systems. It will be interesting to see whether our proposed algorithm can make further improvements using their newly collected dataset, which had not been made public at the time of publication.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Huang et al. (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Table 1 lists all the symbols representing the components in the process. The input algebra word problem is denoted by x, and the output y = (T, A) is called a derivation, which consists of an equation system template T and an alignment A. A template T is a family of equation systems parameterized by a set of coefficients",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "C(T ) = {c i } k i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": ", where each coefficient c i aligns to a textual number (e.g., four) in a word problem. Let Q(x) be all the textual numbers in the problem x, and C(T ) be the coefficients to be determined in the template T . An alignment is a set of tuples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "A = {(q, c) | q \u2208 Q(x), c \u2208 C(T ) \u222a { }},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "where the tuple (q, ) indicates that the number q is not relevant to the final equation system. By specifying the value of each coefficient, it identifies an equation system belonging to the family represented by template T . Together, T and A generate a complete equation system, and the solution z can be derived by the mathematical engine E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "Following Zhou et al., 2015) , our strategy of mapping a word problem to an equation system is to first choose a template that consists of variables and coefficients, and then align each coefficient to a textual number mentioned in the problem. We formulate the mapping between an algebra word problem and an equation system as a structured learning problem. The output space is the set of all possible derivations using templates that are observed in the training data. Our model maps x to y = (T, A) by a linear scoring function w T \u03a6(x, y), where w is the model parameters and \u03a6 is the feature functions. At test time, our model scores all the derivation candidates and picks the best one according to the model score. We often refer to y as a semantic parse, as it represents the semantics of the algebra word problem. 2 The dataset has not been made public at the time of publication. to find the correct derivation, as multiple derivations may lead to the same solution. Therefore, the learning algorithm has to explore the output space to guide the model in order to match the annotated response.",
"cite_spans": [
{
"start": 10,
"end": 28,
"text": "Zhou et al., 2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "Properties of Implicit Supervision Signals We discuss some key properties of the implicit supervision signal to explain several design choices of our algorithm. Figure 1 illustrates the main differences between implicit and explicit supervision signals. Algorithms that learn from implicit supervision signals face the following challenges. First, the learning system usually does not model directly the correlations between the input x and the solution z. Instead, the mapping is handled by an external procedure such as a mathematical engine. Therefore, E(y) is effectively a one-directional function. As a result, finding semantic parses (derivations) from responses (solutions) E \u22121 (z) can sometimes be very slow or even intractable. Second, in many cases, even if we could find a semantic parse from responses, multiple combinations of templates and alignments could end up with the same solution set (e.g., the solutions of equations 2 + x = 4 and 2 \u00d7 x = 4 are the same). Therefore, the implicit supervision signals may be incomplete and noisy, and using the solutions alone to guide the training procedure might not be sufficient. Finally, since we need to have a complete derivation before we can observe the response of the mathematical engine E, we cannot design efficient inference methods such as dynamic programming algorithms based on partial feedback. As a result, we have to perform exploration during learning to search for fully constructed semantic parses that can generate the correct solution.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 169,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3"
},
{
"text": "Word Problem x Maria is now four times as old as Kate. Four years ago, Maria was six times as old as Kate. Find their ages now.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Example",
"sec_num": null
},
{
"text": "Derivation (Semantic Parse) y = (T, A) ({m \u2212 a \u00d7 n = \u22121 \u00d7 a \u00d7 b + b, m \u2212 c \u00d7 n = 0}, A) Solution z n = 10, m = 40",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Example",
"sec_num": null
},
{
"text": "Mathematical Engine E : y \u2192 z After determining the coefficients, the equation system is {m = 4 \u00d7 n, m \u2212 4 = 6 \u00d7 (n \u2212 4)}. The solution is thus n = 10, m = 40. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Example",
"sec_num": null
},
{
"text": "Variables v m, n Textual Number 3 Q(x) {four, Four, six} Equation System Template T {m \u2212 a \u00d7 n = \u22121 \u00d7 a \u00d7 b + b, m \u2212 c \u00d7 n = 0} Coefficients C(T ) a, b, c Alignment A six \u2192 a, Four \u2192 b, four \u2192 c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Example",
"sec_num": null
},
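{
"text": "As a concrete illustration of Table 1 (our own sketch; the representation and names are illustrative, not the authors' code), a derivation y = (T, A) can be instantiated and handed to SymPy in the role of the engine E:\n# Template T with coefficients a, b, c and the alignment A from Table 1.\nfrom sympy import symbols, Eq, solve\nm, n, a, b, c = symbols('m n a b c')\ntemplate = [Eq(m - a * n, -1 * a * b + b), Eq(m - c * n, 0)]\nalignment = {a: 6, b: 4, c: 4}  # six -> a, Four -> b, four -> c\nequations = [eq.subs(alignment) for eq in template]  # complete equation system\nprint(solve(equations, [m, n]))  # -> {m: 40, n: 10}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symbol Example",
"sec_num": null
},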
{
"text": "We assume that we have two sets: D e = {(x e , y e )} and D m = {(x m , z m )}. D e contains the fully annotated equation system y e for each algebra word problem x e , whereas in D m , we have access to the numerical solution z m to each problem, but not the equation system (y m = \u2205). We refer to D e as the explicit set and D m as the implicit set. For the sake of simplicity, we explain our approach by modifying the training procedure of the structured Perceptron algorithm (Collins, 2002) . 4 As discussed in Section 3, the key challenge of learning from implicit supervision is that the mapping E(y) is one-directional. Therefore, the correct equation system cannot be easily derived from the numerical solution. Intuitively, for data with only implicit supervision, we can explore the structure space Y and find the best possible derivation\u1ef9 \u2208 Y according to the current model. If E(\u1ef9) matches z, then we can update the model based on\u1ef9. Following this intuition, we propose MixedSP (Algorithm 1).",
"cite_spans": [
{
"start": 479,
"end": 494,
"text": "(Collins, 2002)",
"ref_id": "BIBREF9"
},
{
"start": 497,
"end": 498,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "For each example, we use an approximate search algorithm to collect top scoring candidate structures. The algorithm first ranks the top-K templates according to the model score, and forms a candidate set by expanding all possible derivations that use the K templates (Line 3). The final candidate set is \u2126 = {y 1 , y 2 , . . . , y K } \u2282 Y .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "When the explicit supervision is available (i.e., (x i , y i ) \u2208 D e ), our algorithm follows the standard structured prediction update procedure. We find the best scoring structure \u0177 in \u2126 and then update the model using the difference of the feature vectors between the gold output structure y i and the best scoring structure \u0177 (Line 6). When only implicit supervision is available (i.e., (x i , z i ) \u2208 D m ), our algorithm uses the current model to conduct a guided exploration, which iteratively finds structures that best explain the implicit supervision, and uses these explanatory structures for making updates. As mentioned in Section 3, we have to explore and examine each structure in the candidate set \u2126. This is due to the fact that partial structures cannot be used for finding the right response, as obtaining the response E(y) requires a complete derivation. In Line 9, we want to find the derivations y whose solutions E(y) match the implicit supervision z i . More specifically,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = arg min y\u2208\u2126 \u2206(E(y), z i ),",
"eq_num": "(1)"
}
],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "where \u2206 is a loss function to estimate the disagreement between E(y) and z i . In our experiments, we simply set \u2206(E(y), z i ) to be 0 if the solution partially matches, and 1 otherwise. 5 If more than one derivation achieves the minimal value of \u2206(E(y), z i ), we break ties by choosing the derivation with higher score w T \u03c6(x i , y). This tie-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "Algorithm 1 Structured Perceptron with Mixed Supervision (MixedSP). Input: D e , D m , L = |D e | + |D m |, T , K, \u03b3 \u2208 [0, 1). 1: for t = 1 . . . N do (training epochs); 2: for i = 1 . . . L do; 3: \u2126 \u2190 find top-K structures {y} approximately; 4: if y i \u2260 \u2205 then (explicit supervision); 5: \u0177 \u2190 arg max y\u2208\u2126 w T \u03c6(x i , y); 6: w \u2190 w + \u03b7 (\u03c6(x, y i ) \u2212 \u03c6(x,\u0177)); 7: else if t \u2265 \u03b3N then (implicit supervision); 8: \u0177 \u2190 arg max y\u2208\u2126 w T \u03c6(x i , y); 9: pick \u1ef9 from \u2126 by Eq. (1) (exploration); 10: w \u2190 w + \u03b7 (\u03c6(x,\u1ef9) \u2212 \u03c6(x,\u0177)); 11: return the average of w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "Return the average of w breaking strategy is important -in practice, several derivations may lead to the gold numerical solution; however, only few of them are correct. The tiebreaking strategy relies on the current model and the structured features \u03c6(x i , y) to filter out incorrect derivations during training. Finally, the model is updated using\u1ef9 in Line 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
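{
"text": "A minimal sketch (our own illustration, with hypothetical helper names) of the exploration step of Eq. (1) together with the tie-breaking rule: among the candidate derivations in \u2126, keep those whose solutions match the mined solution under the 0/1 loss \u2206, and break ties by the current model score.\ndef partial_match(solution, z_i, tol=1e-6):\n    # Delta = 0 when every mined value appears in the predicted solution (footnote 5).\n    return all(any(abs(v - z) < tol for v in solution) for z in z_i)\n\ndef pick_y_tilde(candidates, z_i, solve_fn, score_fn):\n    # candidates: complete derivations from the top-K set Omega; solve_fn plays the role of E(y);\n    # score_fn(y) returns the model score w . phi(x_i, y).\n    matching = [y for y in candidates if partial_match(solve_fn(y), z_i)]\n    if not matching:  # Delta = 1 everywhere: the conservative update is skipped\n        return None\n    return max(matching, key=score_fn)  # tie-break by the current model score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},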
{
"text": "Similar to curriculum learning (Bengio et al., 2009) , it is important to control when the algorithm starts exploring the output space using weak supervision. Exploring too early may mislead the model, as the structured feature weights w may not be able to help filter out incorrect derivations, while exploring too late may lead to under-utilization of the implicit supervision. We use the parameter \u03b3 to control when the model starts to learn from implicit supervision signals. The parameter \u03b3 denotes the fraction of the training time that the model uses purely explicit supervision.",
"cite_spans": [
{
"start": 31,
"end": 52,
"text": "(Bengio et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
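{
"text": "An illustrative sketch of the outer MixedSP loop under our own simplifying assumptions (top_k_fn, phi, and pick_fn are hypothetical stand-ins for candidate generation, feature extraction, and the Eq. (1) selection step sketched above):\nimport numpy as np\n\ndef mixed_sp(data, num_epochs, gamma, eta, dim, top_k_fn, phi, pick_fn):\n    # data: list of (x, y_gold_or_None, z_or_None); explicit examples carry y_gold,\n    # implicit ones carry only the mined solution z; phi(x, y) returns a feature vector.\n    w = np.zeros(dim)\n    for t in range(num_epochs):\n        for x, y_gold, z in data:\n            omega = top_k_fn(x, w)  # approximate top-K candidate derivations\n            score = lambda y: w.dot(phi(x, y))\n            y_hat = max(omega, key=score)\n            if y_gold is not None:  # explicit supervision\n                w = w + eta * (phi(x, y_gold) - phi(x, y_hat))\n            elif t >= gamma * num_epochs:  # implicit supervision, delayed by gamma\n                y_tilde = pick_fn(omega, z, score)  # Eq. (1) selection with tie-breaking\n                if y_tilde is not None:  # conservative update: skip when nothing matches z\n                    w = w + eta * (phi(x, y_tilde) - phi(x, y_hat))\n    return w  # the paper returns the average of w; averaging is omitted here for brevity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},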
{
"text": "Key Properties of Our Algorithm The idea of using solutions to train algebra word problem solvers has been discussed in and (Zhou et al., 2015) . However, their implicit supervision signals are created from clean, fully supervised data, and the experiments use little to no explicit supervision examples. 6 While their algorithms are interesting, the experimental setting is somewhat unrealistic as the implicit signals are simulated.",
"cite_spans": [
{
"start": 124,
"end": 143,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 305,
"end": 306,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "On the other hand, the goal of our algorithm is to significantly improve a strong solver with a large quantity of unlabeled data. Moreover, our implicit supervision signals are noisier given that we crawled the data automatically, and the clean labeled equation systems are not available to us. As a result, we have made several design choices to address issues of learning from noisy implicit supervision signals in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "First, the algorithm is designed to perform updates conservatively. Indeed, in Line 10, the algorithm will not perform an update if the model could not find any parses matching the implicit signals in Line 9. That is, if \u2206(E(y), z i ) = 1 for all y \u2208 \u2126, y =\u0177 due to the tie-breaking mechanism. This ensures that the algorithm drives the learning using only those structures which lead to the correct solution, avoiding undesirable effects of noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "Second, the algorithm does not use implicit supervision signals in the early stage of model training. Learning only on clean and explicit supervision helps derive a better intermediate model, which later allows exploring the output space more efficiently using the implicit supervision signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "Existing semantic parsing algorithms typically use either implicit or explicit supervision signals exclusively (Zettlemoyer and Collins, 2007; Berant et al., 2013; Artzi and Zettlemoyer, 2013) . In contrast, MixedSP makes use of both explicit and implicit supervised examples mixed at the training time.",
"cite_spans": [
{
"start": 111,
"end": 142,
"text": "(Zettlemoyer and Collins, 2007;",
"ref_id": "BIBREF30"
},
{
"start": 143,
"end": 163,
"text": "Berant et al., 2013;",
"ref_id": "BIBREF2"
},
{
"start": 164,
"end": 192,
"text": "Artzi and Zettlemoyer, 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning from Mixed Supervision",
"sec_num": "4"
},
{
"text": "In this section, we describe the process of collecting SOL-2K, a data set containing question-solution pairs of algebra word problems from a Web forum 7 , where students and tutors interact to solve math problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Implicit Supervision Signals",
"sec_num": "5"
},
{
"text": "A word problem posted on the forum is often accompanied by a detailed explanation provided by tutors, which includes a list of the relevant equations. However, these posted equations are not suitable for direct use as labeled data, as they are often imprecise or incomplete. For instance, tutors often omit many simplification steps when writing the equations. A commonly observed example is that (5-3) x+2y would be directly written as 2x+2y. Despite being mathematically equivalent, learning from the latter equation is not desirable as the model may learn that 5 and 3 appearing the text are irrelevant. An extreme case of this is when tutors directly post the solution (such as x=2 and y=5), without writing any equations. Another observation is that tutors often write two-variable equation systems with only one variable. For example, instead of writing x+y=10, x-y=2, many tutors pre-compute x=10-y using the first equation and substitute it in the second one, which results in 10-y-y=2. It is also possible that the tutor wrote the incorrect equation system, but while explaining the steps, made corrections to get the right answer. These practical issues make it difficult to use the crawled equations for explicit supervision directly.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 402,
"text": "(5-3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mining Implicit Supervision Signals",
"sec_num": "5"
},
{
"text": "On the other hand, it is relatively easy to obtain question-solution pairs with simple heuristics. We use a simple strategy to generate the solution from the extracted equations. We greedily select equations in a top-down manner, declaring success as soon as we find an equation system that can be solved by a mathematical engine (we used SymPy (Sympy Development Team, 2016)). Equations that cause an exception in the solver (due to improper extraction) are rejected. Note that the solution thus found may be incorrect (making the mined supervision noisy), as the equation system used by the solver may contain an incorrect equation. To ensure the quality of the mined supervision, we use several simple rules to further filter the problems. For example, we remove questions that have more than 15 numbers. We found that usually such questions were not a single word problem, but instead concatenations of several problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Implicit Supervision Signals",
"sec_num": "5"
},
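{
"text": "A rough sketch, under our own simplifying assumptions (the input format and helper names are hypothetical), of the greedy top-down selection described above: extracted equations are added one at a time, equations that raise exceptions are rejected, and success is declared as soon as SymPy returns a fully numeric solution.\nfrom sympy import solve\nfrom sympy.parsing.sympy_parser import parse_expr\n\ndef mine_solution(equation_strings):\n    system = []\n    for raw in equation_strings:  # equations in the order tutors posted them\n        try:\n            lhs, rhs = raw.split('=')\n            candidate = system + [parse_expr(lhs) - parse_expr(rhs)]\n            solutions = solve(candidate, dict=True)\n        except Exception:\n            continue  # improperly extracted equation: reject it\n        system = candidate\n        if solutions and all(v.is_number for v in solutions[0].values()):\n            return solutions[0]  # first fully solvable system wins\n    return None  # nothing usable could be mined from this post",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Implicit Supervision Signals",
"sec_num": "5"
},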
{
"text": "Note that our approach relies only on a few rules and a mathematical engine to generate (noisy) implicit supervision from crawled problems, with no human involvement. Once the solutions are generated, we discarded the equation systems used to obtain them. Using this procedure, we collected 2,039 question-solution pairs. For example, the solution to the following mined problem was \"6\" (The correct solutions are 6 and 12.):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Implicit Supervision Signals",
"sec_num": "5"
},
{
"text": "Roz is twice as old as Grace. In 5 years the sum of their ages will be 28. How old are they now? ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining Implicit Supervision Signals",
"sec_num": "5"
},
{
"text": "In this section, we demonstrate the effectiveness of the proposed approach and empirically verify the design choices of the algorithm. We show that our joint learning approach leverages mined implicit supervision effectively, improving system performance without using additional manual annotations (Section 6.1). We also compare our approach to existing methods under different supervision settings (Section 6.2). Table 2 shows the statistics of the datasets. The ALG-514 dataset consists of 514 algebra word problems, ranging over a variety of narrative scenarios (object counting, simple interest, etc.). Although it is a popular benchmark for evaluating algebra word solvers, ALG-514 has only 24 templates. To test the generality of different approaches, we thus conduct experiments on a newly released data set, DRAW-1K 8 (Upadhyay and Chang, 2016) , which covers more than 200 templates and contains 1,000 algebra word problems. The data is split into training, development, and test sets, with 600/200/200 examples, respectively. The SOL-2K dataset contains the word problemsolution pairs we mined from online forum (see Section 5). Unlike ALG-514 and DRAW-1K, there are no annotated equation systems in this dataset, and only the solutions are available. Also, no preprocessing or cleaning is performed, so the problem descriptions might contain some irrelevant phrases such as \"please help me\". Since all the datasets are generated from online forums, we carefully examined and removed problems from SOL-2K that are identical to problems in ALG-514 and DRAW-1K, to ensure fairness. We set the number of iterations to 15 and the learning rate \u03b7 to be 1.",
"cite_spans": [
{
"start": 827,
"end": 853,
"text": "(Upadhyay and Chang, 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 415,
"end": 422,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "For all experiments, we report solution accuracy (whether the solution was correct). Following Kushman et al. (2014), we ignore the ordering of answers when calculating the solution accuracy. We report the 5-fold cross validation accuracy on ALG-514 in order to have a fair comparison with previous work. For DRAW-1K, we report the results on the test set. In all the experiments, we only use the templates that appear in the corresponding explicit supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": null
},
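{
"text": "An illustrative sketch (our formulation, not the authors' evaluation script) of order-insensitive solution accuracy: the predicted and gold solution values are compared as multisets, so the ordering of the answers does not matter.\ndef solutions_match(predicted, gold, tol=1e-4):\n    # Greedily pair each predicted value with an unused gold value.\n    remaining = list(gold)\n    if len(predicted) != len(remaining):\n        return False\n    for p in predicted:\n        hit = next((g for g in remaining if abs(p - g) <= tol), None)\n        if hit is None:\n            return False\n        remaining.remove(hit)\n    return True\n\nprint(solutions_match([10, 40], [40, 10]))  # True, ordering is ignored",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": null
},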
{
"text": "Following (Zhou et al., 2015) , we do not model the alignments between noun phrases and variables. We use a similar set of features introduced in (Zhou et al., 2015) , except that our solver does not use rich NLP features from dependency parsing or coreference-resolution systems. We follow and set the beam-size K to 10, unless stated otherwise.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 146,
"end": 165,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": null
},
{
"text": "Supervision Protocols We compare the following training protocols:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "\u2022 Explicit (D = {(x e , y e )}): the standard setting, where fully annotated examples are used to train the model (we use the structured Perceptron algorithm as our training algorithm here).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "\u2022 Implicit (D = {(x m , z m ))}): the model is trained on SOL-2K dataset only (i.e., only implicit supervision). This setting is similar to the one in Clarke et al., 2010) .",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "Clarke et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "\u2022 Pseudo (D = {(x m ,Z \u22121 (z m , x m ))}):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "where we useZ \u22121 (z, x) to denote a pseudo derivation whose solutions match the mined solutions. Similar to the approach in (Yih et al., 2015) for question answering, here we attempts to recover (possibly incorrect) explicit supervision from the implicit supervision by finding parses whose solution matches the mined solution. For each word problem, we generated a pseudo derivationZ \u22121 (z, x) by finding the equation systems whose solutions that match the mined solutions. We conduct a brute force search to findZ \u22121 (z, x) by enumerating all possible derivations. Note that this process can be very slow for datasets like DRAW-1K because the brute-force search needs to examine more than 200 templates for each word problem. Ties are broken by random.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
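{
"text": "A sketch, with our own hypothetical helper names, of how such a pseudo derivation could be recovered by brute force: enumerate every alignment for every observed template, keep the derivations whose solutions match the mined solution, and break ties at random.\nimport random\n\ndef pseudo_derivation(x, z, templates, enumerate_alignments, solve_fn, matches):\n    # Brute force over all templates and alignments; slow when many templates exist.\n    hits = []\n    for template in templates:\n        for alignment in enumerate_alignments(x, template):\n            y = (template, alignment)\n            if matches(solve_fn(y), z):\n                hits.append(y)\n    return random.choice(hits) if hits else None  # ties broken at random",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},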
{
"text": "\u2022 E+P (D = {(x e , y e )}\u222a {(x m ,Z \u22121 (z m , x m ))}):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "a baseline approach that jointly learns by combining the dataset generated by Pseudo with the Explicit supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "\u2022 MixedSP (D = {(x e , y e )} \u222a {(x m , z m ))}): the setting used by our proposed algorithm. The algorithm trained the word problem solver using both explicit and implicit supervision jointly. We set the parameter \u03b3 to 0.5 unless otherwise stated. In other words, the first half of the training iterations use only explicit supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "Note that Explicit, E+P, and MixedSP use the same amount of labeled equations, although E+P and MixedSP use additional implicit supervised resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "Results Table 3 lists the main results. With implicit supervision from mined question-solution pairs, MixedSP outperforms Explicit by around 4.5% on both datasets. This verifies the claim that the joint learning approach can benefit from the noisy implicit supervision. Note that with the same amount of supervision signals, E+P performs poorly and even under-performs Explicit. The reason is that the derived derivations in SOL-2K can be noisy. Indeed, we found that about 70% of the problems in the implicit set have more than one template that can produce a derivation which matches the mined solutions. Therefore, the pseudo derivation selected by the system might be wrong, even if they generate the correct answers. As a result, E+P can commit to the possibly incorrect pseudo derivations before training, and suffer from error propagation. In contrast, MixedSP does not commit to a derivation and allows the model to choose the one best explaining the implicit signals as training progresses.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "As expected, using only the implicit set D m performs poorly. The reason is that in both Implicit and Pseudo settings, the algorithm needs to select one from many derivations that match the labeled solutions, and use the selected derivation to update the model. When there are no explicit supervision signals, the model can use incorrect derivations to update the model. As a result, models on both Implicit and Pseudo settings perform significantly worse than the Explicit baseline in both datasets, even if the size of SOL-2K is larger than the fully supervised data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning from Mixed Supervision",
"sec_num": "6.1"
},
{
"text": "We now compare to previous approaches for solving algebra word problems, both in fully supervised and weakly supervised settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons to Previous Work",
"sec_num": "6.2"
},
{
"text": "We first compare our systems to the systems that use the same level of explicit supervision (fully labeled examples). The comparison between our system and existing systems are in Fig 2a and 2b . Compared to previous systems that were trained only on explicit signals, our Explicit baseline is quite competitive. On ALG-514, the accuracy of our baseline system is 78.4%, which is 1.3% lower than the best reported accuracy achieved by the system ZDC15 (Zhou et al., 2015) . We suspect that this is due to the richer feature set used by ZDC15, which includes features based on POS tags, coreference and dependency parses, whereas our system only uses features based on POS tags. Our system is also the best system on DRAW-1K, and performs much better than the system KAZB14 . Note that we could not run the system ZDC15 on DRAW-1K because it can only handle limited types of equation systems. Although the Explicit baseline is strong, the MixedSP algorithm is still able to improve the solver significantly through noisy implicit supervision signals without using manual annotation of equation systems.",
"cite_spans": [
{
"start": 452,
"end": 471,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 180,
"end": 193,
"text": "Fig 2a and 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparisons of Overall Systems",
"sec_num": null
},
{
"text": "In the above comparisons, MixedSP benefits from the mined implicit supervision as well as using Algorithm 1. Since there are several practical limita-tions for us to run previously proposed weakly supervised algorithms in our settings, in the following, we perform a direct comparison between MixedSP and existing algorithms in their corresponding settings. Note that the implicit supervision in weak supervision settings proposed in earlier work is noisefree, as it was simulated by hiding equation systems of a manually annotated dataset. Zhou et al. (2015) proposed a weak supervision setting where the system was provided with the set of all templates, as well as the solutions of all problems during training. Under this setting, they reported 72.3% accuracy on ALG-514. Note that such high accuracy can be achieved mainly because that the complete and correct templates were supplied.",
"cite_spans": [
{
"start": 541,
"end": 559,
"text": "Zhou et al. (2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons of Weakly Supervised Algorithms",
"sec_num": null
},
{
"text": "In this setting, running the MixedSP algorithm is equivalent to using the Implicit setting with clean implicit supervision signals. Surprisingly, MixedSP can obtain 74.3% accuracy, surpassing the weakly supervised model in (Zhou et al., 2015) on ALG-514. Compared to the results in Table 3 , note that when using noisy implicit signals, it cannot obtain the same level of results, even though we had more training problems (2,000 mined problems instead of 514 problems). This shows that working with real, noisy weak supervision is much more challenging than working on simulated, noise-free, weak supervision. proposed another weak supervision setting (5EQ+ANS in the paper), in which explicit supervision is provided for only 5 problems in the training data. For the rest of problems, only their solutions are provided. The 5 problems are chosen such that their templates constitute the 5 most common templates in the dataset. This weak supervision setting is harder than that of (Zhou et al., 2015) , as the solver only has the templates for 5 problems, instead of the templates for all problems. Under this setting, our MixedSP algorithm achieves 53.8%, which is better than 46.1% reported in .",
"cite_spans": [
{
"start": 223,
"end": 242,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 982,
"end": 1001,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Comparisons of Weakly Supervised Algorithms",
"sec_num": null
},
{
"text": "In Figure 2c , we investigate the impact of tuning \u03b3 in MixedSP on the dataset ALG-514. Recall that \u03b3 controls the fraction of the training time that the model uses solely explicit supervision. At first glance, it may appear that we should utilize the im- Figure 2 : (a) Comparisons between our system to state-of-the-art systems on ALG-514. ZDC15 is the system proposed in (Zhou et al., 2015) , and KAZB14 is the system proposed in . (b) Comparisons between our system and other systems on DRAW-1K. Note that we are not able to run ZDC15 on DRAW-1K because it cannot handle some equation systems in the dataset. (c) Analysis of the impact of \u03b3 in MixedSP.",
"cite_spans": [
{
"start": 374,
"end": 393,
"text": "(Zhou et al., 2015)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 3,
"end": 12,
"text": "Figure 2c",
"ref_id": null
},
{
"start": 256,
"end": 264,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6.3"
},
{
"text": "plicit supervision throughout training (set \u03b3 = 0). But setting \u03b3 to 0 hurts overall performance, suggesting in this setting that the algorithm uses a weak model to guide the exploration for using implicit supervision. On the other hand, by delaying exploration (\u03b3 > 0.5) for too long, the model could not fully utilize the implicit supervision. We observe similar trend on DRAW-1K as well. We found \u03b3 = 0.5 works well across the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6.3"
},
{
"text": "We also analyze the impact of the parameter K, which controls the size of the candidate set \u2126 in MixedSP. Specifically, for DRAW-1K, when setting K to 5 and 10, the accuracy of MixedSP is at 59.5%. On setting K to 15, the accuracy of MixedSP improves to 61%. We suspect that enlarging K increases the chance to have good structures in the candidate set that can match the correct responses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6.3"
},
{
"text": "In this paper, we propose an algorithmic approach for training a word problem solver based on both explicit and implicit supervision signals. By extracting the question answer pairs from a Web-forum, we show that the algebra word problem solver can be improved significantly using our proposed technique, surpassing the current state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Recent advances in deep learning techniques demonstrate that the error rate of machine learning models can decrease dramatically when large quantities of labeled data are presented (Krizhevsky et al., 2012) . However, labeling natural language data has been shown to be expensive, and it has become one of the major bottleneck for advancing natural language understanding techniques (Clarke et al., 2010) . We hope the proposed approach can shed light on how to leverage data on the web, and eventually improves other semantic parsing tasks such as knowledge base question answering and mapping natural instructions to actions.",
"cite_spans": [
{
"start": 181,
"end": 206,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 383,
"end": 404,
"text": "(Clarke et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The new resource and the dataset we used for training is available soon on https://aka.ms/dataimplicit and https://aka.ms/datadraw",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our approach can be easily extended to other structured learning algorithms such as Structured SVM(Taskar et al., 2004;Tsochantaridis et al., 2004).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The mined solutions are often incomplete for some variables (e.g. solution y=6 but no value for x could be mined). We allow partial matches so that the model can learn from the incomplete implicit signals as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Prior work has used only 5 explicit supervision examples when training with solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.algebra.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://aka.ms/datadraw",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Weakly supervised learning of semantic parsers for mapping instructions to actions",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of TACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping in- structions to actions. In Proc. of TACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Curriculum learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Louradour",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proc. of ICML.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Semantic parsing on Freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proc. of EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A question-answering system for high school algebra word problems",
"authors": [
{
"first": "G",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bobrow",
"suffix": ""
}
],
"year": 1964,
"venue": "Fall Joint Computer Conference, Part I",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel G. Bobrow. 1964. A question-answering system for high school algebra word problems. In Proceed- ings of the October 27-29, 1964, Fall Joint Computer Conference, Part I.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A constrained latent variable model for coreference resolution",
"authors": [
{
"first": "K.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Samdani",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.-W. Chang, R. Samdani, and D. Roth. 2013. A con- strained latent variable model for coreference resolu- tion. In Proc. of EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Coarse-tofine n-best parsing and maxent discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and maxent discriminative rerank- ing. In Proc. of ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "My computer is an honor student-but how intelligent is it? Standardized tests as a measure of AI",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2016,
"venue": "AI Magazine",
"volume": "37",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark and Oren Etzioni. 2016. My computer is an honor student-but how intelligent is it? Standardized tests as a measure of AI. AI Magazine., 37(1).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Driving semantic parsing from the world's response",
"authors": [
{
"first": "J",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world's response. In Proc. of CoNLL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative reranking for natural language parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins. 2000. Discriminative reranking for natural language parsing. In Proc. of ICML.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. of EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Discriminative reranking for semantic parsing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Ge and R. Mooney. 2006. Discriminative reranking for semantic parsing. In Proc. of ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to solve arithmetic word problems with verb categorization",
"authors": [
{
"first": "Javad Mohammad",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Nate",
"middle": [],
"last": "Kushman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javad Mohammad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proc. of EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How well do computers solve math word problems? Large-scale dataset construction and evaluation",
"authors": [
{
"first": "Danqing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? Large-scale dataset con- struction and evaluation. In Proc. of ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Parsing algebraic word problems into equations",
"authors": [
{
"first": "Rik",
"middle": [],
"last": "Koncel-Kedziorski",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Siena",
"middle": [],
"last": "Dumas Ang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of TACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations. Proc. of TACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Proc. of NIPS.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to automatically solve algebra word problems",
"authors": [
{
"first": "Nate",
"middle": [],
"last": "Kushman",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proc. of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional semantics. Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proc. of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A review of methods for automatic understanding of natural language mathematical problems",
"authors": [
{
"first": "Anirban",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Utpal",
"middle": [],
"last": "Garain",
"suffix": ""
}
],
"year": 2008,
"venue": "Artif. Intell. Rev",
"volume": "29",
"issue": "2",
"pages": "93--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anirban Mukherjee and Utpal Garain. 2008. A re- view of methods for automatic understanding of nat- ural language mathematical problems. Artif. Intell. Rev., 29(2):93-122.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Report on a general problem-solving program",
"authors": [
{
"first": "Allen",
"middle": [],
"last": "Newell",
"suffix": ""
},
{
"first": "John",
"middle": [
"C"
],
"last": "Shaw",
"suffix": ""
},
{
"first": "Herbert",
"middle": [
"A"
],
"last": "Simon",
"suffix": ""
}
],
"year": 1959,
"venue": "IFIP Congress",
"volume": "",
"issue": "",
"pages": "256--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen Newell, John C Shaw, and Herbert A Simon. 1959. Report on a general problem-solving program. In IFIP Congress, pages 256-264.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Solving general arithmetic word problems",
"authors": [
{
"first": "Subhro",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Subhro Roy and Dan Roth. 2015. Solving general arith- metic word problems. In Proc. of EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatically solving number word problems by semantic parsing and reasoning",
"authors": [
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Yuehui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiaojiang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Rui",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically solving num- ber word problems by semantic parsing and reasoning. In Proc. of EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning with relaxed supervision",
"authors": [
{
"first": "J",
"middle": [],
"last": "Steinhardt",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Steinhardt and P. Liang. 2015. Learning with relaxed supervision. In Proc. of NIPS.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SymPy: Python library for symbolic mathematics",
"authors": [
{
"first": "",
"middle": [],
"last": "Sympy Development Team",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sympy Development Team, 2016. SymPy: Python li- brary for symbolic mathematics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Max-margin markov networks",
"authors": [
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Taskar, C. Guestrin, and D. Koller. 2004. Max-margin markov networks. In Proc. of NIPS.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Support vector machine learning for interdependent and structured output spaces",
"authors": [
{
"first": "I",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hofmann",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. 2004. Support vector machine learning for interdepen- dent and structured output spaces. In Proc. of ICML.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Annotating derivations: A new evaluation strategy and dataset for algebra word problems",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay and Ming-Wei Chang. 2016. Annotating derivations: A new evaluation strat- egy and dataset for algebra word problems. In https://aka.ms/derivationpaper.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Semantic parsing via staged query graph generation: Question answering with knowledge base",
"authors": [
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jian- feng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowl- edge base. In Proc. of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning structural SVMs with latent variables",
"authors": [
{
"first": "C",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Yu and T. Joachims. 2009. Learning structural SVMs with latent variables. In Proc. of ICML.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning to parse database queries using inductive logic proramming",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic proramming. In Proc. of AAAI.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Online learning of relaxed CCG grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to log- ical form. In EMNLP-CoNLL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learn to solve algebra word problems using quadratic programming",
"authors": [
{
"first": "Lipu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shuaixiang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lipu Zhou, Shuaixiang Dai, and Liwei Chen. 2015. Learn to solve algebra word problems using quadratic programming. In Proc. of EMNLP.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>Sematic</td><td>Derived</td><td/><td/><td>Sematic</td><td>Derived</td><td/></tr><tr><td/><td>Parses</td><td>Solutions</td><td/><td/><td>Parses</td><td>Solutions</td><td/></tr><tr><td/><td>y 1</td><td>z 1</td><td/><td/><td>y 1</td><td>z 1</td><td/></tr><tr><td>Input</td><td>y 2</td><td>z 2</td><td>Input</td><td/><td>y 2</td><td>z 2</td><td>Annotated Response</td></tr><tr><td>x1</td><td>y *</td><td>z *</td><td>x2</td><td>?</td><td>y 3</td><td>z 3</td><td>z2 *</td></tr><tr><td/><td>y 4</td><td>z 4</td><td/><td/><td>y 4</td><td>z 4</td><td/></tr><tr><td/><td>...</td><td>...</td><td/><td/><td>...</td><td>...</td><td/></tr><tr><td/><td>y 17650</td><td>z 17650</td><td/><td/><td>y 17650</td><td>z 17650</td><td/></tr><tr><td colspan=\"2\">Figure 1:</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"text": "Left: Explicit supervision signals. Note that the solution z can be derived by the semantic parses y."
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Notation used in this paper to formally describe the problem of mapping algebra word problems to equations."
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "The statistics of the data sets."
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "The solution accuracies of different protocols on ALG-514 and DRAW-1K."
}
}
}
}