{
"paper_id": "H05-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:33:56.145875Z"
},
"title": "Compiling Comp Ling: Practical Weighted Dynamic Programming and the Dyna Language *",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": ""
},
{
"first": "Eric",
"middle": [],
"last": "Goldlust",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives general agenda-based algorithms for computing weights and gradients; briefly discusses Dyna-to-Dyna program transformations; and shows that a first implementation of a Dyna-to-C++ compiler produces code that is efficient enough for real NLP research, though still several times slower than hand-crafted code. * We thank Joshua Goodman, David McAllester, and Paul Ruhlen for useful early discussions; pioneer users Markus Dreyer, David Smith, and Roy Tromble for their feedback and input; John Blatz for discussion of program transformations; and several reviewers for useful criticism.",
"pdf_parse": {
"paper_id": "H05-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives general agenda-based algorithms for computing weights and gradients; briefly discusses Dyna-to-Dyna program transformations; and shows that a first implementation of a Dyna-to-C++ compiler produces code that is efficient enough for real NLP research, though still several times slower than hand-crafted code. * We thank Joshua Goodman, David McAllester, and Paul Ruhlen for useful early discussions; pioneer users Markus Dreyer, David Smith, and Roy Tromble for their feedback and input; John Blatz for discussion of program transformations; and several reviewers for useful criticism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we generalize some modern probabilistic parsing techniques to a broader class of weighted deductive algorithms. Our implemented system encapsulates these implementation techniques behind a clean interface-a small high-level specification language, Dyna, which compiles into C++ classes. This system should help the HLT community to experiment more easily with new models and algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The \"parsing as deduction\" framework (Pereira and Warren, 1983 ) is now over 20 years old. It provides an elegant notation for specifying a variety of parsing algorithms (Shieber et al., 1995) , including algorithms for probabilistic or other semiring-weighted parsing (Goodman, 1999) . In the parsing community, new algorithms are often stated simply as a set of deductive inference rules (Sikkel, 1997; Eisner and Satta, 1999) .",
"cite_spans": [
{
"start": 37,
"end": 62,
"text": "(Pereira and Warren, 1983",
"ref_id": "BIBREF31"
},
{
"start": 170,
"end": 192,
"text": "(Shieber et al., 1995)",
"ref_id": "BIBREF37"
},
{
"start": 269,
"end": 284,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF16"
},
{
"start": 390,
"end": 404,
"text": "(Sikkel, 1997;",
"ref_id": "BIBREF38"
},
{
"start": 405,
"end": 428,
"text": "Eisner and Satta, 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic programming as deduction",
"sec_num": "1.1"
},
{
"text": "It is also straightforward to specify other NLP algorithms this way. Syntactic MT models, language models, and stack decoders can be easily described using deductive rules. So can operations on finitestate and infinite-state machines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic programming as deduction",
"sec_num": "1.1"
},
{
"text": "One might regard deductive inference as merely a helpful perspective for teaching old algorithms and thinking about new ones, linking NLP to logic and classical AI. Real implementations would then be carefully hand-coded in a traditional language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The role of toolkits",
"sec_num": "1.2"
},
{
"text": "That was the view ten years ago of finite-state machines-that FSMs were part of the theoretical backbone of CL, linking the field to the theory of computation. Starting in the mid-1990's, however, finite-state methods came to the center of applied NLP as researchers at Xerox, AT&T, Groningen and elsewhere improved the expressive power of FSMs by moving from automata to transducers, adding semiring weights, and developing powerful new regular-expression operators and algorithms for these cases. They also developed software. Karttunen et al. (1996) built an FSM toolkit that allowed construction of morphological analyzers for many languages. Mohri et al. (1998) built a weighted toolkit that implemented novel algorithms (e.g., weighted minimization, on-thefly composition) and scaled up to handle largevocabulary continuous ASR. At the same time, renewed community-wide interest in shallow methods for information extraction, chunking, MT, and dialogue processing meant that such off-the-shelf FS toolkits became the core of diverse systems used in cutting-edge research.",
"cite_spans": [
{
"start": 529,
"end": 552,
"text": "Karttunen et al. (1996)",
"ref_id": "BIBREF19"
},
{
"start": 647,
"end": 666,
"text": "Mohri et al. (1998)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The role of toolkits",
"sec_num": "1.2"
},
{
"text": "The weakness of FSMs, of course, is that they are only finite-state. One would like something like AT&T's FSM toolkit that also handles the various formalisms now under consideration for lexicalized grammars, non-context-free grammars, and syntaxbased MT-and hold the promise of extending to other formalisms and applications not yet imagined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The role of toolkits",
"sec_num": "1.2"
},
{
"text": "We believe that deductive inference should play the role of regular expressions and FSMs, providing the theoretical foundation for such an effort. Many engineering ideas in the field can be regarded, we 1. :-double item=0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The role of toolkits",
"sec_num": "1.2"
},
{
"text": "% declares that all item values are doubles, default is 0 2. constit(X,I,K) += rewrite(X,W) * word(W,I,K).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The role of toolkits",
"sec_num": "1.2"
},
{
"text": "% a constituent is either a word . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The role of toolkits",
"sec_num": "1.2"
},
{
"text": "3. constit(X,I,K) += rewrite(X,Y,Z) * constit(Y,I,J) * constit(Z,J,K). % . . . or a combination of two adjacent subconstituents 4. goal += constit(\"s\",0,N) whenever ?ends at(N). % a parse is any s constituent that covers the input string Figure 1 : A probabilistic CKY parser written in Dyna. Axioms are in boldface. believe, as ideas for how to specify, transform, or compile systems of inference rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The role of toolkits",
"sec_num": "1.2"
},
{
"text": "Any toolkit needs an interface. For example, FS toolkits offer a regular expression language. We propose a simple but Turing-complete language, Dyna, for specifying weighted deductive-inference algorithms. We illustrate it here by example; see http://dyna.org for more details and a tutorial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
{
"text": "The short Dyna program in Fig. 1 expresses the inside algorithm for PCFGs (i.e., the probabilistic generalization of CKY recognition). Its 3 inference rules schematically specify many equations, over an arbitrary number of unknowns. This is possible bcause the unknowns (items) have structured names (terms) such as constit(\"s\",0,3). They resemble typed variables in a C program, but we use variable instead to refer to the capitalized identifiers X, I, K, . . . in lines 2-4. Each rule gives a consequent on the left-hand side of the +=, which can be built by combining the antecedents on the right-hand side. 1 Lines 2-4 are equational schemas that specify how to compute the value of items such as constit (\"s\",0,3) from the values of other items. Using the summation operator +=, lines 2-3 say that for any X, I, and K, constit(X,I,K) is defined by summing over the remaining variables, as W rewrite(X,W)*word(W,I,K) + Y,Z,J rewrite(X,Y,Z)*constit(Y,I,J)*constit(Z,J,K). For example, constit(\"s\",0,3) is a sum of quantities such as rewrite(\"s\", \"np\", \"vp\")*constit(\"np\",0,1)*constit(\"vp\",1,3).",
"cite_spans": [
{
"start": 611,
"end": 612,
"text": "1",
"ref_id": null
},
{
"start": 709,
"end": 718,
"text": "(\"s\",0,3)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 26,
"end": 32,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
{
"text": "The whenever operator in line 4 specifies a side condition that restricts the set of expressions in the sum (i.e., only when N is the sentence length).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
{
"text": "To fully define the system of equations, nondefault values (in this case, non-zero values) should be asserted for some axioms at runtime. (Axioms, shown in bold in Fig. 1 , are items that never appear 1 Much of our notation and terminology comes from logic programming: term, variable, inference rule, antecedent/consequent, assert/retract, axiom/theorem. as a consequent.) If the PCFG contains a rewrite rule np \u2192 Mary with probability p(Mary | np)=0.005, the user should assert that rewrite(\"np\", \"Mary\") has value 0.005. If the input is John loves Mary, values of 1 should be asserted for word(\"John\",0,1), word(\"loves\",1,2), word(\"Mary\",2,3), and ends at(3).",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 170,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
{
"text": "Given the axioms as base cases, the equations in Fig. 1 enable deduction of values for other items. The value of the theorem constit(\"s\",0,3) will be the inside probability \u03b2 s (0, 3), 2 and the value of goal will be the total probability of all parses.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 55,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
{
"text": "If one replaces += by max= throughout, then constit(\"s\",0,3) will accumulate the maximum rather than the sum of these quantities, and goal will accumulate the probability of the best parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
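To make the schemas above concrete, here is a minimal Python sketch (our own illustration, not output of the Dyna-to-C++ compiler) that evaluates the equations of Fig. 1 bottom-up on the input John loves Mary. The toy grammar and every probability other than rewrite("np","Mary") = 0.005 are assumptions made for the example; replacing the running sums by max would give the max= variant just described.

```python
# A sketch of the inside (CKY) equations that Fig. 1 schematically specifies.
from collections import defaultdict
from itertools import product

value = defaultdict(float)                       # default item value is 0
# Axioms (grammar probabilities below are illustrative assumptions).
value[("rewrite", "s", "np", "vp")] = 1.0
value[("rewrite", "vp", "v", "np")] = 1.0
value[("rewrite", "np", "John")] = 0.005
value[("rewrite", "np", "Mary")] = 0.005
value[("rewrite", "v", "loves")] = 0.01
words = "John loves Mary".split()
for i, w in enumerate(words):
    value[("word", w, i, i + 1)] = 1.0           # word("John",0,1) = 1, etc.
n = len(words)                                   # plays the role of ends_at(3)
nonterms = ["s", "np", "vp", "v"]

# Lines 2-3 of Fig. 1: constit(X,I,K) is a sum over the remaining variables.
for width in range(1, n + 1):
    for i in range(n - width + 1):
        k = i + width
        for x in nonterms:
            total = sum(value[("rewrite", x, w)] * value[("word", w, i, k)]
                        for w in words)
            total += sum(value[("rewrite", x, y, z)]
                         * value[("constit", y, i, j)]
                         * value[("constit", z, j, k)]
                         for y, z in product(nonterms, repeat=2)
                         for j in range(i + 1, k))
            if total:
                value[("constit", x, i, k)] = total

# Line 4: goal += constit("s",0,N) whenever ?ends_at(N).
goal = value[("constit", "s", 0, n)]
print(goal)      # total probability of all parses (2.5e-07 for this toy grammar)
```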
{
"text": "With different input, the same program carries out lattice parsing. Simply assert axioms that correspond to (weighted) lattice arcs, such as word (\"John\", 17, 50) , where 17 and 50 are arbitrary terms denoting states in the lattice. It is also quite straightforward to lexicalize the nonterminals or extend to synchronous grammars.",
"cite_spans": [
{
"start": 146,
"end": 154,
"text": "(\"John\",",
"ref_id": null
},
{
"start": 155,
"end": 158,
"text": "17,",
"ref_id": null
},
{
"start": 159,
"end": 162,
"text": "50)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
{
"text": "A related context-free parsing strategy, shown in Fig. 2 , is Earley's algorithm. These equations illustrate nested terms such as lists. The side condition in line 2 prevents building any constituent until one has built a left context that calls for it.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 56,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Language for Deductive Systems",
"sec_num": "2"
},
{
"text": "There is a large relevant literature. Some of the wellknown CL papers, notably Goodman (1999) , were already mentioned in section 1.1. Our project has three main points of difference from these. First, we provide an efficient, scalable, opensource implementation, in the form of a compiler from Dyna to C++ classes. (Related work is in \u00a77.2.) The C++ classes are efficient and easy to use, with statements such as c[rewrite(\"np\",2,3)]=0.005 to assert axiom values into a chart named c (i.e., a deduc-1. need(''s'',0) = 1. % begin by looking for an s that starts at position 0 2. constit(Nonterm/Needed,I,I) += rewrite(Nonterm,Needed) whenever ?need(Nonterm, I).",
"cite_spans": [
{
"start": 79,
"end": 93,
"text": "Goodman (1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Previous Work",
"sec_num": "3"
},
{
"text": "% traditional predict step 3. constit(Nonterm/Needed,I,K) += constit(Nonterm/cons(W,Needed),I,J) * word(W,J,K). % traditional scan step 4. constit(Nonterm/Needed,I,K) += constit(Nonterm,cons(X,Needed),I,J) * constit(X/nil,J,K). % traditional complete step 5. goal += constit(\"s\"/nil,0,N) whenever ?ends at(N). % we want a complete s constituent covering the sentence 6. need(Nonterm,J) += constit( /cons(Nonterm, ), ,J). % Note: underscore matches anything (anonymous wildcard) Figure 2 : An Earley parser that recovers inside probabilities (Earley, 1970; Stolcke, 1995) . The rule np \u2192 det n should be encoded as the axiom rewrite(\"np\",cons(\"det\",cons(\"n\",nil))), a nested term. \"np\"/Needed is the label of a partial np constituent that is still missing the list of subconstituents in Needed. need(\"np\",3) is derived if some partial constituent seeks an np subconstituent starting at position 3. As in Fig. 1 , lattice parsing comes for free, as does training.",
"cite_spans": [
{
"start": 541,
"end": 555,
"text": "(Earley, 1970;",
"ref_id": "BIBREF8"
},
{
"start": 556,
"end": 570,
"text": "Stolcke, 1995)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 478,
"end": 486,
"text": "Figure 2",
"ref_id": null
},
{
"start": 903,
"end": 909,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation to Previous Work",
"sec_num": "3"
},
{
"text": "tive database) and expressions like c[goal] to extract the values of the resulting theorems, which are computed as needed. The C++ classes also give access to the proof forest (e.g., the forest of parse trees), and integrate with parameter optimization code. Second, we fully generalize the agenda-based strategy of Shieber et al. (1995) to the weighted case-in particular supporting a prioritized agenda. That allows probabilities to guide the search for the best parse(s), a crucial technique in state-of-theart context-free parsers. 3 We also give a \"reverse\" agenda algorithm to compute gradients or outside probabilities for parameter estimation.",
"cite_spans": [
{
"start": 316,
"end": 337,
"text": "Shieber et al. (1995)",
"ref_id": "BIBREF37"
},
{
"start": 536,
"end": 537,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Previous Work",
"sec_num": "3"
},
{
"text": "Third, regarding weights, the Dyna language is designed to express systems of arbitrary, heterogeneous equations over item values. In previous work such as (Goodman, 1999; Nederhof, 2003) , one only specifies the inference rules as unweighted Horn clauses, and then weights are added automatically in a standard way: all values have the same type W, and all rules transform to equations of the form c \u2295= a 1 \u2297 a 2 \u2297 \u2022 \u2022 \u2022 \u2297 a k , where \u2295 and \u2297 give W the structure of a semiring. 4 In Dyna one writes these equations explicitly in place of Horn clauses (Fig. 1) . Accordingly, heterogeneous Dyna programs, to be supported soon by our compiler, will allow items of different types to have values of different types, computed by different aggregation operations over arbitrary right-hand-side ex-3 Previous treatments of weighted deduction have used an agenda only for an unweighted parsing phase (Goodman, 1999) or for finding the single best parse (Nederhof, 2003) . Our algorithm works in arbitrary semirings, including non-idempotent ones, taking care to avoid double-counting of weights and to handle side conditions. 4 E.g., the inside algorithm in Fig. 1 falls into Goodman's framework, with W, \u2295, \u2297 = R \u22650 , +, * -the PLUSTIMES semiring. Because \u2297 distributes over \u2295 in a semiring, computing goal is equivalent to an aggregation over many separate parse trees. That is not the case for heterogeneous programs.",
"cite_spans": [
{
"start": 156,
"end": 171,
"text": "(Goodman, 1999;",
"ref_id": "BIBREF16"
},
{
"start": 172,
"end": 187,
"text": "Nederhof, 2003)",
"ref_id": "BIBREF28"
},
{
"start": 480,
"end": 481,
"text": "4",
"ref_id": null
},
{
"start": 895,
"end": 910,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF16"
},
{
"start": 948,
"end": 964,
"text": "(Nederhof, 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 553,
"end": 561,
"text": "(Fig. 1)",
"ref_id": null
},
{
"start": 1153,
"end": 1159,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation to Previous Work",
"sec_num": "3"
},
{
"text": "pressions. This allows specification of a wider class of algorithms from NLP and elsewhere (e.g., minimum expected loss decoding, smoothing formulas, neural networks, game tree analysis, and constraint programming). Although \u00a74 and \u00a75 have space to present only techniques for the semiring case, these can be generalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Previous Work",
"sec_num": "3"
},
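As a concrete illustration of the semiring view described above, here is a small sketch of pluggable ⟨W, ⊕, ⊗⟩ choices. Only the name PLUSTIMES comes from the text; the other names and the encoding as a Python namedtuple are our own assumptions.

```python
# A sketch of semirings as swappable (zero, one, oplus, otimes) bundles.
from collections import namedtuple

Semiring = namedtuple("Semiring", "zero one oplus otimes")

# The paper's PLUSTIMES semiring: inside probabilities (the += program of Fig. 1).
PLUSTIMES = Semiring(0.0, 1.0, lambda x, y: x + y, lambda x, y: x * y)
# Illustrative names for two other standard choices mentioned in the text.
VITERBI = Semiring(0.0, 1.0, max, lambda x, y: x * y)                        # max= : best parse
BOOLEAN = Semiring(False, True, lambda x, y: x or y, lambda x, y: x and y)   # unweighted deduction
```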
{
"text": "Our approach may be most closely related to deductive databases, which even in their heyday were apparently ignored by the CL community (except for Minnen, 1996) . Deductive database systems permit inference rules that can derive new database facts from old ones. 5 They are essentially declarative logic programming languages (with restrictions or extensions) that are-or could be-implemented using efficient database techniques. Some implemented deductive databases such as CORAL (Ramakrishnan et al., 1994) and LOLA (Zukowski and Freitag, 1997 ) support aggregation (as in Dyna's +=, log+=, max=, . . . ), although only \"stratified\" forms of it that exclude unary CFG rule cycles. 6 Ross and Sagiv (1992) (and in a more restricted way, Kifer and Subrahmanian, 1992) come closest to our notion of attaching aggregable values to terms.",
"cite_spans": [
{
"start": 148,
"end": 161,
"text": "Minnen, 1996)",
"ref_id": "BIBREF26"
},
{
"start": 482,
"end": 509,
"text": "(Ramakrishnan et al., 1994)",
"ref_id": "BIBREF34"
},
{
"start": 519,
"end": 546,
"text": "(Zukowski and Freitag, 1997",
"ref_id": "BIBREF49"
},
{
"start": 684,
"end": 685,
"text": "6",
"ref_id": null
},
{
"start": 739,
"end": 768,
"text": "Kifer and Subrahmanian, 1992)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to Previous Work",
"sec_num": "3"
},
{
"text": "Among deductive or other database systems, Dyna is perhaps unusual in that its goal is not to support transactional databases or ad hoc queries, but rather to serve as an abstract layer for specifying an algorithm, such as a dynamic programming (DP) algorithm. Thus, the Dyna program already implicitly or explicitly specifies all queries that will be needed. This allows compilation into a hard-coded C++ implementation. The compiler's job is to support these queries by laying out and indexing the database re-lations in memory 7 in a way that resembles handdesigned data structures for the algorithm in question. The compiler has many choices to make here; we ultimately hope to implement feedback-directed optimization, using profiled sample runs on typical data. For example, a sparse grammar should lead to different strategies than a dense one. Fig. 1 specifies a set of equations but not how to solve them. Any declarative specification language must be backed up by a solver for the class of specifiable problems. In our continuing work to develop a range of compiler strategies for arbitrary Dyna programs, we have been inspired by the CL community's experience in building efficient parsers.",
"cite_spans": [],
"ref_spans": [
{
"start": 852,
"end": 858,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation to Previous Work",
"sec_num": "3"
},
{
"text": "In this paper and in our current implementation, we give only the algorithms for what we call weighted dynamic programs, in which all axioms and theorems are variable-free. This means that a consequent may only contain variables that already appear elsewhere in the rule. We further restrict to semiring-weighted programs as in (Goodman, 1999) . But with a few more tricks not given here, the algorithms can be generalized to a wider class of heterogeneous weighted logic programs. 8",
"cite_spans": [
{
"start": 328,
"end": 343,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Theorem Values",
"sec_num": "4"
},
{
"text": "Computation is triggered when the user requests the value of one or more particular items, such as goal. Our algorithm must have several properties in order to substitute for manually written code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Desired properties",
"sec_num": "4.1"
},
{
"text": "Soundness. The algorithm cannot be guaranteed to terminate (since it is possible to write arbitrary Turing machines in Dyna). However, if it does terminate, it should return values from a valid model of the program, i.e., values that simultaneously satisfy all the equations expressed by the program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Desired properties",
"sec_num": "4.1"
},
{
"text": "Reasonable completeness. The computation should indeed terminate for programs of interest to the NLP community, such as parsing under a probabilistic grammar-even if the grammar has 7 Some relations might be left unmaterialized and computed on demand, with optional memoization and flushing of memos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Desired properties",
"sec_num": "4.1"
},
{
"text": "8 Heterogeneous programs may propagate non-additive updates, which arbitrarily modify one of the inputs to an aggregation. Non-dynamic programs require non-ground items in the chart, complicating both storage and queries against the chart. left recursion, unary rule cycles, or -productions. This appears to rule out pure top-down (\"backwardchaining\") approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Desired properties",
"sec_num": "4.1"
},
{
"text": "Efficiency. Returning the value of goal should do only as much computation as necessary. To return goal, one may not need to compute the values of all items. 9 In particular, finding the best parse should not require finding all parses (in contrast to Goodman (1999) and Zhou and Sato (2003) ). Approximation techniques such as pruning and bestfirst search must also be supported for practicality.",
"cite_spans": [
{
"start": 252,
"end": 266,
"text": "Goodman (1999)",
"ref_id": "BIBREF16"
},
{
"start": 271,
"end": 291,
"text": "Zhou and Sato (2003)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Desired properties",
"sec_num": "4.1"
},
{
"text": "Our basic algorithm (Fig. 3) is a weighted agendabased algorithm that works only with rules of the form c \u2295= a1 \u2297 a2 \u2297 \u2022 \u2022 \u2022 \u2297 a k . \u2297 must distribute over \u2295. Further, the default value for items (line 1 of Fig. 1 ) must be the semiring's zero element, denoted 0. 10 Agenda-based deduction maintains two indexed data structures: the agenda and the chart. chart [a] stores the current value of item a. The agenda holds future work that arises from assertions or from previous changes to the chart: agenda[a] stores an incremental update to be added (using \u2295) to chart [a] in future. If chart [a] or agenda [a] is not stored, it is taken to be the default 0.",
"cite_spans": [
{
"start": 361,
"end": 364,
"text": "[a]",
"ref_id": null
},
{
"start": 567,
"end": 570,
"text": "[a]",
"ref_id": null
},
{
"start": 591,
"end": 594,
"text": "[a]",
"ref_id": null
},
{
"start": 605,
"end": 608,
"text": "[a]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "(Fig. 3)",
"ref_id": "FIGREF0"
},
{
"start": 207,
"end": 213,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The agenda algorithm",
"sec_num": "4.2"
},
{
"text": "When item a is removed from the agenda, its chart weight is updated by the increment value. This change is then propagated to other items c, via rules of the form c \u2295= \u2022 \u2022 \u2022 with a on the right-hand-side. The resulting changes to c are placed back on the agenda and carried out only later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The agenda algorithm",
"sec_num": "4.2"
},
{
"text": "The unweighted agenda-based algorithm (Shieber et al., 1995) may be regarded as the case where W, \u2295, \u2297 = {T, F }, \u2228, \u2227 . It has previously been generalized (Nederhof, 2003) to the case W, \u2295, \u2297 = R \u22650 , max, + . In Fig. 3 , we make the natural further generalization to any semiring.",
"cite_spans": [
{
"start": 38,
"end": 60,
"text": "(Shieber et al., 1995)",
"ref_id": "BIBREF37"
},
{
"start": 156,
"end": 172,
"text": "(Nederhof, 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 214,
"end": 220,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The agenda algorithm",
"sec_num": "4.2"
},
{
"text": "How is this a further generalization? Since \u2295 (unlike \u2228 and max) might not be idempotent, we must take care to avoid erroneous double-counting if the antecedent a combines with, or produces, another copy of itself. 11 For instance, if the input contains words, line 2 of Fig. 1 may get instantiated as constit(\"np\",5,5) += rewrite(\"np\",\"np\",\"np\") * constit(\"np\",5,5) * constit (\"np\",5,5) . This is why we save the old values of agenda [a] and chart [a] as \u2206 and old, and why line 12 is complex.",
"cite_spans": [
{
"start": 377,
"end": 387,
"text": "(\"np\",5,5)",
"ref_id": null
},
{
"start": 435,
"end": 438,
"text": "[a]",
"ref_id": null
},
{
"start": 449,
"end": 452,
"text": "[a]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 271,
"end": 277,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The agenda algorithm",
"sec_num": "4.2"
},
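The following Python sketch (a simplification of ours, not the paper's Fig. 3) shows the weighted agenda loop for grounded rules c ⊕= a1 ⊗ ... ⊗ ak, instantiated here for the (+, *) semiring. For clarity it assumes that no grounded rule mentions the same item twice; that repeated-antecedent case is precisely what the paper's line 12 must additionally handle.

```python
def solve(rules, axioms,
          oplus=lambda x, y: x + y, otimes=lambda x, y: x * y, zero=0.0):
    """rules: list of (consequent, [antecedent items]), all items ground.
    axioms: {item: asserted value}.  Returns the chart of item values."""
    chart = {}
    agenda = dict(axioms)                     # item -> pending increment
    feeds = {}                                # index: antecedent -> rules that use it
    for c, ants in rules:
        for a in set(ants):
            feeds.setdefault(a, []).append((c, ants))
    while agenda:
        a, delta = agenda.popitem()           # any pop order is sound; priority (Sec. 4.5) affects speed
        chart[a] = oplus(chart.get(a, zero), delta)
        for c, ants in feeds.get(a, ()):      # propagate the change through each rule that mentions a
            update = delta
            for other in ants:
                if other != a:                # other antecedents contribute their current chart values
                    update = otimes(update, chart.get(other, zero))
            if update != zero:                # for cyclic programs one would also stop below a tolerance (Sec. 4.4)
                agenda[c] = oplus(agenda.get(c, zero), update)
    return chart

# e.g. goal += a*b and goal += a*c:
print(solve([("goal", ["a", "b"]), ("goal", ["a", "c"])],
            {"a": 0.5, "b": 0.2, "c": 0.3})["goal"])      # 0.25
```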
{
"text": "We now extend Fig. 3 to handle Dyna's side conditions, i.e., rules of the form c \u2295= expression whenever boolean-expression.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 20,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "We discuss only the simple side conditions treated in previous literature, which we write as c \u2295= a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "1 \u2297a 2 \u2297\u2022 \u2022 \u2022\u2297a k whenever ?b k +1 & \u2022 \u2022 \u2022 & ?b k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "Here, ?b j is true or false according to whether there exists an unweighted proof of b j . Again, what is new here? Nederhof (2003) considers only max= with a uniform-cost agenda discipline (see \u00a74.5), which guarantees that no item will be removed more than once from the agenda. We wish to support other cases, so we must take care that a second update to a i will not retrigger rules of which a i is a side condition.",
"cite_spans": [
{
"start": 116,
"end": 131,
"text": "Nederhof (2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "For simplicity, let us reformulate the above rule as c \u2295= a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "1 \u2297 a 2 \u2297 \u2022 \u2022 \u2022 \u2297 a k \u2297 ?b k +1 \u2297 \u2022 \u2022 \u2022 \u2297 ?b k ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "where ?b i is now treated as having value 0 or 1 (the identity for \u2297) rather than false or true respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "We may now use Fig. 3 , but now any a j might have the form ?b j . Then in line 12, chart[a j ] will be chart[?b j ], which is defined as 1 or 0 according to whether chart[b j ] is stored (i.e., whether b j has been derived). Also, if a i = ?a at line 11 (rather than a i = a), then \u2206 in line 12 is replaced by \u2206?, where we have set \u2206? := chart[?a] at line 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 21,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Side conditions",
"sec_num": "4.3"
},
{
"text": "Whether the agenda algorithm halts depends on the Dyna program and the input. Like any other Turingcomplete language, Dyna gives you enough freedom to write undesirable programs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence",
"sec_num": "4.4"
},
{
"text": "Most NLP algorithms do terminate, of course, and this remains true under the agenda algorithm. For typical algorithms, only finitely many different items (theorems) can be derived from a given finite input (set of axioms). 12 This ensures termination if one is doing unweighted deduction with W, \u2295, \u2297 = {T, F }, \u2228, \u2227 , since the test at line 7 ensures that no item is processed more than once. 13 The same test ensures termination if one is searching for the best proof or parse with (say) W, \u2295, \u2297 = R \u22650 , min, + , where values are negated log probabilities. Positive-weight cycles will not affect the min. (Negative-weight cycles, however, would correctly cause the computation to diverge; these do not arise with probabilities.)",
"cite_spans": [
{
"start": 394,
"end": 396,
"text": "13",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence",
"sec_num": "4.4"
},
{
"text": "If one is using W, \u2295, \u2297 = R \u22650 , +, * to compute the total weight of all proofs or parses, as in the inside algorithm, then Dyna must solve a system of nonlinear equations. The agenda algorithm does this by iterative approximation (propagating updates around any cycles in the proof graph until numerical convergence), essentially as suggested by Stolcke (1995) for the case of Earley's algorithm. 14 Again, the computation may diverge.",
"cite_spans": [
{
"start": 347,
"end": 361,
"text": "Stolcke (1995)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence",
"sec_num": "4.4"
},
{
"text": "One can declare the conditions under which items of a particular type (constit or goal) should be treated as having converged. Then asking for the value of goal will run the agenda algorithm not until the agenda is empty, but only until chart [goal] has converged by this criterion.",
"cite_spans": [
{
"start": 243,
"end": 249,
"text": "[goal]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convergence",
"sec_num": "4.4"
},
{
"text": "The order in which items are chosen at line 4 does not affect the soundness of the agenda algorithm, but can greatly affect its speed. We implement the agenda as a priority queue whose priority function may be specified by the user. 15 Charniak et al. (1998) and Caraballo and Charniak (1998) showed that, when seeking the best parse (using min= or max=), best-first parsing can be extremely effective. Klein and Manning (2003a) went on to describe admissible heuristics and an A* framework for parsing. For A* in our general framework, the priority of item a should be an estimate of the value of the best proof of goal that uses a. (This non-standard formulation is carefully chosen. 16 ) If so, goal is guaranteed to converge the very first time it is selected from the priority-queue agenda.",
"cite_spans": [
{
"start": 233,
"end": 235,
"text": "15",
"ref_id": null
},
{
"start": 245,
"end": 258,
"text": "et al. (1998)",
"ref_id": null
},
{
"start": 263,
"end": 292,
"text": "Caraballo and Charniak (1998)",
"ref_id": "BIBREF3"
},
{
"start": 403,
"end": 428,
"text": "Klein and Manning (2003a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prioritization",
"sec_num": "4.5"
},
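A tiny sketch of one plausible priority discipline (our own assumption, not the compiler's): pop the item whose pending increment is largest, so that heavier updates are processed first. Substituting it for the arbitrary pop in the agenda loop sketched earlier changes only the order of exploration, not soundness.

```python
def pop_best(agenda):
    """agenda: {item: pending increment}.  Pop the item with the largest pending
    increment; a real implementation would keep a heap rather than rescanning."""
    best = max(agenda, key=lambda item: agenda[item])
    return best, agenda.pop(best)
```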
{
"text": "Prioritizing \"good\" items first can also be useful in other circumstances. The inside-outside training algorithm requires one to find all parses, but finding the high-probability parses first allows one to ignore the rest by \"early stopping.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prioritization",
"sec_num": "4.5"
},
{
"text": "In all these schemes (even A*), processing promising items as soon as possible risks having to reprocess them if their values change later. Thus, this strategy should be balanced against the \"topological sort\" strategy of waiting to process an item until its value has (probably) converged. 17 Ulti- 15 At present by writing a C++ function; ultimately within Dyna, by defining items such as priority(constit(\"s\",0,3)).",
"cite_spans": [
{
"start": 300,
"end": 302,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prioritization",
"sec_num": "4.5"
},
{
"text": "16 It is correct for proofs that incorporate two copies of a's value, or-more important-no copies of a's value because a is a side condition. Thus, it recognizes that a low-probability item must have high priority if it could be used as a side condition in a higher-probability parse (though this cannot happen for the side conditions derived by the magic templates transformation ( \u00a76)). Note also that a's own value (Nederhof, 2003) might not be an optimistic estimate, if negative weights are present.",
"cite_spans": [
{
"start": 418,
"end": 434,
"text": "(Nederhof, 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prioritization",
"sec_num": "4.5"
},
{
"text": "17 In parsing, for example, one often processes narrower constituents before wider ones. But such strategies do not always exist, or break down in the presence of unary rule cycles, or cannot be automatically found. Goodman's (1999) strategy of building all items and sorting them before computing any weights is wise only if one genuinely wants to build all items. mately we hope to learn priority functions that effectively balance these two strategies (especially in the context of early stopping).",
"cite_spans": [
{
"start": 216,
"end": 232,
"text": "Goodman's (1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prioritization",
"sec_num": "4.5"
},
{
"text": "The crucial work in Fig. 3 occurs in the iteration over instantiated rules at lines 9-11. In practice, we restructure this triply nested loop as follows, where each line retains the variable bindings that result from the unification in the previous line:",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 26,
"text": "Fig. 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Matching, indexing, and interning",
"sec_num": "4.6"
},
{
"text": "9. for each antecedent pattern a_i that appears in some program rule r and unifies with a . . . Our implementation of line 9 tests a against all of the antecedent patterns at once, using a tree of simple \"if\" tests (generated by the Dyna-to-C++ compiler) to share work across patterns. As an example, a = constit(\"np\",3,8) will match two antecedents at line 3 of Fig. 1 , but will fail to match in line 4. Because a is variable-free (for DPs), a full unification algorithm is not necessary, even though an antecedent pattern can contain repeated variables and nested subterms.",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 359,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Matching, indexing, and interning",
"sec_num": "4.6"
},
{
"text": "Line 10 rapidly looks up the rule's other antecedents using indices that are automatically maintained on the chart. For example, once constit(\"np\",4,8) has matched antecedent 2 of line 3 of Fig. 1 , the compiled code consults a maintained list of the chart constituents that start at position 8 (i.e., items of the form constit(Z,8,K) that have already been derived). Suppose one of these is constit (\"vp\",8,15) : then the code finds the rule's remaining antecedent by consulting a list of items of the form rewrite(X,\"np\",\"vp\"). That leads it to construct consequents such as constit(\"s\",4,15) at line 11.",
"cite_spans": [
{
"start": 400,
"end": 411,
"text": "(\"vp\",8,15)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 190,
"end": 196,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Matching, indexing, and interning",
"sec_num": "4.6"
},
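A small sketch of the kind of chart index this paragraph describes (assumed Python data structures, not the generated C++): once constit("np",4,8) matches constit(Y,I,J) in line 3 of Fig. 1, the remaining antecedents are found by two dictionary lookups rather than a scan of the chart.

```python
from collections import defaultdict

constits_by_start = defaultdict(list)      # J -> chart items constit(Z,J,K)
rewrites_by_children = defaultdict(list)   # (Y,Z) -> chart items rewrite(X,Y,Z)

def index_chart_item(item):
    """Maintain the indices as each item enters the chart."""
    if item[0] == "constit":
        _, label, i, k = item
        constits_by_start[i].append(item)
    elif item[0] == "rewrite" and len(item) == 4:
        _, parent, y, z = item
        rewrites_by_children[(y, z)].append(item)

def partners_for(left_constit):
    """Given e.g. ("constit","np",4,8), yield (right sibling, rewrite rule) pairs
    that instantiate the remaining antecedents of line 3 of Fig. 1."""
    _, y, i, j = left_constit
    for sibling in constits_by_start[j]:           # e.g. ("constit","vp",8,15)
        _, z, _, k = sibling
        for rule in rewrites_by_children[(y, z)]:  # e.g. ("rewrite","s","np","vp")
            yield sibling, rule                    # consequent would be ("constit", rule[1], i, k)
```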
{
"text": "By default, equal terms are represented by equal pointers. While this means terms must be \"interned\" when constructed (requiring hash lookup), it enforces structure-sharing and allows any term to be rapidly copied, hashed, or equality-tested without dereferencing the pointer. 18 Each of the above paragraphs conceals many decisions that affect runtime. This presents future opportunities for feedback-directed optimization, where profiled runs on typical data influence the compiler.",
"cite_spans": [
{
"start": 277,
"end": 279,
"text": "18",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Matching, indexing, and interning",
"sec_num": "4.6"
},
{
"text": "The value of goal is a function of the axioms' values. If the function is differentiable, we may want to get its gradient with respect to its parameters (the axiom values), to aid in numerically optimizing it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Gradients",
"sec_num": "5"
},
{
"text": "The gradient computation can be derived from the original by a program transformation. For each item a in the original program-in particular, for each axiom-the new program will also compute a new item g(a), whose value is \u2202goal/\u2202a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradients by symbolic differentiation",
"sec_num": "5.1"
},
{
"text": "Thus, given weighted axioms, the new program computes both goal and \u2207goal. An optimization algorithm such as conjugate gradient can use this information to tune the axiom weights to maximize goal. An alternative is the EM algorithm (Dempster et al., 1977) for probabilistic generative models such as PCFGs. Luckily the same program serves, since for such models, the E count (expected count) of an item a can be found as a \u2022 g(a)/goal. In other words, the inside-outside algorithm has the same structure as computing the function and its gradient.",
"cite_spans": [
{
"start": 232,
"end": 255,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradients by symbolic differentiation",
"sec_num": "5.1"
},
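Restating the expected-count formula above in symbols (our own restatement, with a denoting the axiom's value):

$$
\mathrm{Ecount}(a) \;=\; \frac{a \cdot g(a)}{\mathrm{goal}} \;=\; \frac{a}{\mathrm{goal}} \cdot \frac{\partial\, \mathrm{goal}}{\partial a},
$$

which, when a is a PCFG rule probability and goal is the sentence's inside probability, is the usual inside-outside expected rule count used by EM.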
{
"text": "The GRADIENT transformation is simple. For example, 19 given a rule",
"cite_spans": [
{
"start": 52,
"end": 54,
"text": "19",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradients by symbolic differentiation",
"sec_num": "5.1"
},
{
"text": "c += a_1 * a_2 * \u2022\u2022\u2022 * a_{k'} whenever ?b_{k'+1} & \u2022\u2022\u2022 & ?b_k, we add a new rule g(a_i) += g(c) * a_1 * \u2022\u2022\u2022 * a_{i\u22121} * a_{i+1} * \u2022\u2022\u2022 * a_{k'} whenever ?a_i, for each i = 1, 2, ..., k'. (The original rule remains, since we need inside values to compute outside values.) This strategy for computing the gradient \u2202goal/\u2202a via the chain rule is an example of automatic differentiation in the reverse mode (Griewank and Corliss, 1991), known in the neural network community as back-propagation.",
"cite_spans": [
{
"start": 260,
"end": 288,
"text": "(Griewank and Corliss, 1991)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradients by symbolic differentiation",
"sec_num": "5.1"
},
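A short Python sketch (an illustrative assumption, not the paper's Fig. 4) of the GRADIENT idea for an acyclic sum-product program: record each grounded rule c += a1*...*ak on a tape during the forward pass, then sweep the tape in reverse, accumulating g(item) = ∂goal/∂item exactly as in reverse-mode automatic differentiation.

```python
from collections import defaultdict

def forward(tape, axioms):
    """tape: grounded rules (consequent, [antecedents]) in a valid forward order."""
    value = defaultdict(float, axioms)
    for c, ants in tape:
        prod = 1.0
        for a in ants:
            prod *= value[a]
        value[c] += prod
    return value

def backward(tape, value, goal="goal"):
    """Adjoint sweep: g(a_i) += g(c) * product of the other antecedents' values."""
    g = defaultdict(float)
    g[goal] = 1.0
    for c, ants in reversed(tape):
        for i, a in enumerate(ants):
            prod = g[c]
            for j, other in enumerate(ants):
                if j != i:
                    prod *= value[other]
            g[a] += prod
    return g

# e.g. goal += x*y and goal += x*z gives d(goal)/dx = y + z:
tape = [("goal", ["x", "y"]), ("goal", ["x", "z"])]
val = forward(tape, {"x": 0.5, "y": 0.2, "z": 0.3})
print(val["goal"], backward(tape, val)["x"])     # 0.25 0.5
```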
{
"text": "However, what if goal might be computed only approximately, by early stopping before convergence ( \u00a74.5)? To avoid confusing the optimizer, we want the exact gradient of the approximate function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradients by back-propagation",
"sec_num": "5.2"
},
{
"text": "To do this, we \"unwind\" the computation of goal, undoing the value updates while building up the gradient values. The idea is to differentiate an \"unrolled\" version of the original computation (Williams and Zipser, 1989) , in which an item at 19 More generally, g(ai) = \u2202goal/\u2202ai = P c \u2202goal/\u2202c \u2022 \u2202c/\u2202ai = P c g(c) \u2022 \u2202c/\u2202ai by the chain rule. Figure 4 : An efficient algorithm for computing \u2207goal (even when goal is an early-stopping approximation), specialized to the case W, \u2295, \u2297 = R, +, * . The proof is suppressed for lack of space. time t is considered to be a different variable (possibly with different value) than the same item at time t + 1. The reverse pass must recover earlier values. Our somewhat tricky algorithm is shown in Fig. 4 .",
"cite_spans": [
{
"start": 193,
"end": 220,
"text": "(Williams and Zipser, 1989)",
"ref_id": "BIBREF46"
},
{
"start": 243,
"end": 245,
"text": "19",
"ref_id": null
}
],
"ref_spans": [
{
"start": 343,
"end": 351,
"text": "Figure 4",
"ref_id": null
},
{
"start": 739,
"end": 745,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gradients by back-propagation",
"sec_num": "5.2"
},
{
"text": "At line 3, a stack is needed to remember the sequence of a, old, \u2206 triples from the original computation. 20 It is a more efficient version of the \"tape\" usually used in automatic differentiation. For example, it uses O(n 2 ) rather than O(n 3 ) space for the CKY algorithm. The trick is that Fig. 3 does not record all its computations, but only its sequence of items. Fig. 4 then re-runs the inference rules to reconstruct the computations in an acceptable order.",
"cite_spans": [
{
"start": 106,
"end": 108,
"text": "20",
"ref_id": null
}
],
"ref_spans": [
{
"start": 293,
"end": 299,
"text": "Fig. 3",
"ref_id": "FIGREF0"
},
{
"start": 370,
"end": 376,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gradients by back-propagation",
"sec_num": "5.2"
},
{
"text": "This method is a generalization of Eisner's (2001) prioritized forward-backward algorithm for infinitestate machines. As Eisner (2001) pointed out, the tape created on the first forward pass can also be used to speed up later passes (i.e., after the numerical optimizer has adjusted the axiom weights). 21",
"cite_spans": [
{
"start": 121,
"end": 134,
"text": "Eisner (2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradients by back-propagation",
"sec_num": "5.2"
},
{
"text": "To support parameter training using these gradients, our implementation of Dyna includes a training module, DynaMITE. DynaMITE supports the EM algorithm (and many variants), supervised and unsupervised training of log-linear (\"maximum entropy\") models using quasi-Newton methods, and smoothing-parameter tuning on development data. As an object-oriented C++ library, it also facilitates rapid implementation of new estimation techniques Smith and Eisner, 2005) .",
"cite_spans": [
{
"start": 437,
"end": 460,
"text": "Smith and Eisner, 2005)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "5.3"
},
{
"text": "Another interest of Dyna is that its high-level specifications can be manipulated by mechanical sourceto-source program transformations. This makes it possible to derive new algorithms from old ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Program Transformations",
"sec_num": "6"
},
{
"text": "\u00a75.1 already sketched the gradient transformation for finding \u2207goal. We note a few other examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Program Transformations",
"sec_num": "6"
},
{
"text": "Bounding transformations generate a new program that computes upper or lower bounds on goal, via generic bounding techniques (Prieditis, 1993; Culberson and Schaeffer, 1998) . The A* heuristics explored by Klein and Manning (2003a) can be seen as resulting from bounding transformations.",
"cite_spans": [
{
"start": 125,
"end": 142,
"text": "(Prieditis, 1993;",
"ref_id": "BIBREF33"
},
{
"start": 143,
"end": 173,
"text": "Culberson and Schaeffer, 1998)",
"ref_id": "BIBREF6"
},
{
"start": 206,
"end": 231,
"text": "Klein and Manning (2003a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Program Transformations",
"sec_num": "6"
},
{
"text": "With John Blatz, we are also exploring transformations that can result in asymptotically more efficient computations of goal. Their unweighted versions are well-known in the logic programming community (Tamaki and Sato, 1984; Ramakrishnan, 1991) . Folding introduces new intermediate items, perhaps exploiting the distributive law; applications include parsing speedups such as (Eisner and Satta, 1999) , as well as well-known techniques for speeding up multi-way database joins, constraint programming, or marginalization of graphical models. Unfolding eliminates items; it can be used to specialize a parser to a particular grammar and then to eliminate unary rules. Magic templates introduce top-down filtering into the search strategy and can be used to derive Earley's algorithm (Minnen, 1996) , to introduce left-corner filters, and to restrict FSM constructions to build only accessible states.",
"cite_spans": [
{
"start": 202,
"end": 225,
"text": "(Tamaki and Sato, 1984;",
"ref_id": "BIBREF45"
},
{
"start": 226,
"end": 245,
"text": "Ramakrishnan, 1991)",
"ref_id": "BIBREF35"
},
{
"start": 378,
"end": 402,
"text": "(Eisner and Satta, 1999)",
"ref_id": "BIBREF9"
},
{
"start": 784,
"end": 798,
"text": "(Minnen, 1996)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Program Transformations",
"sec_num": "6"
},
{
"text": "Finally, there are low-level optimizations. Term constituents not in any good parse) by consulting gagenda [a] values that the previous backward pass can have written onto the tape (overwriting \u2206 or old).",
"cite_spans": [
{
"start": 107,
"end": 110,
"text": "[a]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Program Transformations",
"sec_num": "6"
},
{
"text": "transformations restructure terms to change their layout in memory. We are also exploring the introduction of declarations that control which items use the agenda or are memoized in the chart. This can be used to support lazy or \"on-the-fly\" computation (Mohri et al., 1998) and asymptotic space-saving tricks (Binder et al., 1997) .",
"cite_spans": [
{
"start": 254,
"end": 274,
"text": "(Mohri et al., 1998)",
"ref_id": "BIBREF27"
},
{
"start": 310,
"end": 331,
"text": "(Binder et al., 1997)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Program Transformations",
"sec_num": "6"
},
{
"text": "7 Usefulness of the Implementation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Program Transformations",
"sec_num": "6"
},
{
"text": "The current Dyna compiler has proved indispensable in our own recent projects, in the sense that we would not have attempted many of them without it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "7.1"
},
{
"text": "In some cases, we were experimenting with genuinely new algorithms not supported by any existing tool, as in our work on dependency-lengthlimited parsing (Eisner and Smith, 2005b) and loosely syntax-based machine translation (Eisner and D. Smith, 2005) . (Dyna would have been equally helpful in the first author's earlier work on new algorithms for lexicalized and CCG parsing, syntactic MT, transformational syntax, trainable parameterized FSMs, and finite-state phonology.)",
"cite_spans": [
{
"start": 154,
"end": 179,
"text": "(Eisner and Smith, 2005b)",
"ref_id": "BIBREF11"
},
{
"start": 225,
"end": 252,
"text": "(Eisner and D. Smith, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "7.1"
},
{
"text": "In other cases , Dyna let us quickly replicate, tweak, and combine useful techniques from the literature. These techniques included unweighted FS morphology, conditional random fields (Lafferty et al., 2001) , synchronous parsers (Wu, 1997; Melamed, 2003) , lexicalized parsers (Eisner and Satta, 1999) , 22 partially supervised training\u00e0 la (Pereira and Schabes, 1992) , 23 and grammar induction (Klein and Manning, 2002) . These replications were easy to write and extend, and to train via \u00a75.2.",
"cite_spans": [
{
"start": 184,
"end": 207,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF24"
},
{
"start": 230,
"end": 240,
"text": "(Wu, 1997;",
"ref_id": "BIBREF47"
},
{
"start": 241,
"end": 255,
"text": "Melamed, 2003)",
"ref_id": "BIBREF25"
},
{
"start": 278,
"end": 302,
"text": "(Eisner and Satta, 1999)",
"ref_id": "BIBREF9"
},
{
"start": 342,
"end": 369,
"text": "(Pereira and Schabes, 1992)",
"ref_id": "BIBREF30"
},
{
"start": 372,
"end": 374,
"text": "23",
"ref_id": null
},
{
"start": 397,
"end": 422,
"text": "(Klein and Manning, 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applications",
"sec_num": "7.1"
},
{
"text": "We compared the current Dyna compiler to handbuilt systems on a variety of parsing tasks. These problems were chosen not for their novelty or interesting structure, but for the availability of existing well-tuned implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7.2"
},
{
"text": "Best parse. We compared a Dyna CFG parser to the Java parser of Klein and Manning (2003b), 24 22 Markus Dreyer's reimplementation of the complex Collins (1999) parser uses under 30 lines of Dyna.",
"cite_spans": [
{
"start": 64,
"end": 93,
"text": "Klein and Manning (2003b), 24",
"ref_id": null
},
{
"start": 145,
"end": 159,
"text": "Collins (1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7.2"
},
{
"text": "23 For example, lines 2-3 of Fig. 1 can be extended with whenever permitted(X,I,K).",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 35,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7.2"
},
{
"text": "24 Neither uses heuristics from Klein and Manning (2003a) . on the same grammar. Fig. 5 shows the results. Dyna's disadvantage is greater on longer sentences-probably because its greater memory consumption results in worse cache behavior. 25 We also compared a Dyna CKY parser to our own hand-built implementation, C++PARSE. C++PARSE is designed like the Dyna parser but includes a few storage and indexing optimizations that Dyna does not yet have. Fig. 6 shows the 5fold speedup from these optimizations on binarized-Treebank parsing with a large 119K-rule grammar. The sharp diagonal indicates that C++PARSE is simply a better-tuned version of the Dyna parser.",
"cite_spans": [
{
"start": 32,
"end": 57,
"text": "Klein and Manning (2003a)",
"ref_id": "BIBREF22"
},
{
"start": 239,
"end": 241,
"text": "25",
"ref_id": null
}
],
"ref_spans": [
{
"start": 81,
"end": 87,
"text": "Fig. 5",
"ref_id": "FIGREF2"
},
{
"start": 450,
"end": 456,
"text": "Fig. 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7.2"
},
{
"text": "These optimizations and others are now being incorporated into the Dyna compiler, and are expected 25 Unlike Java, Dyna does not yet decide automatically when to perform garbage collection. In our experiment, garbage collection was called explicitly after each sentence and counted as part of the runtime (typically 0.25 seconds for 10-word sentences, 5 seconds for 40-word sentences).",
"cite_spans": [
{
"start": 99,
"end": 101,
"text": "25",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7.2"
},
{
"text": "99.99% uniform 89.3 (4.5) 90.3 (4.6) after 1 EM iteration 82.9 (6.8) 85.2 (6.9) after 2 EM iterations 77.1 (8.4) 79.1 (8.3) after 3 EM iterations 71.6 (9.4) 73.7 (9.5) after 4 EM iterations 66.8 (10.0) 68.8 (10.2) after 5 iterations 62.9 (10.3) 65.0 (10.5) to provide similar speedups, putting Dyna's parser in the ballpark of the Klein & Manning parser. Importantly, these improvements will speed up existing Dyna programs through recompilation.",
"cite_spans": [
{
"start": 107,
"end": 112,
"text": "(8.4)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "99%",
"sec_num": null
},
{
"text": "Inside parsing. Johnson (2000) provides a C implementation of the inside-outside algorithm for EM training of PCFGs. We ran five iterations of EM on the WSJ10 corpus 26 using the Treebank grammar from that corpus. Dyna took 4.1 times longer.",
"cite_spans": [
{
"start": 16,
"end": 30,
"text": "Johnson (2000)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "99%",
"sec_num": null
},
{
"text": "Early stopping. An advantage of the weighted agenda discipline ( \u00a74.2) is that, with a reasonable priority function such as an item's inside probability, the inside algorithm can be stopped early with an estimate of goal's value. To measure the goodness of this early estimate, we tracked the progression of goal's value as each sentence was being parsed. In most instances, and especially after more EM iterations, the estimate was very tight long before all the weight had been accumulated (Table 1) . This suggests that early stopping is a useful training speedup.",
"cite_spans": [],
"ref_spans": [
{
"start": 492,
"end": 501,
"text": "(Table 1)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "99%",
"sec_num": null
},
{
"text": "PRISM. The implemented tool most similar to Dyna that we have found is PRISM (Zhou and Sato, 2003) , a probabilistic Prolog with efficient tabling and compilation. PRISM inherits expressive power from Prolog but handles only probabilities, not general semirings (or even side conditions). 27 In CKY parsing tests, PRISM was able to handle only a small fraction of the Penn Treebank ruleset (2,400 highprobability rules) and tended to crash on long sentences. Dyna is designed for real-world use: it consistently parses over 10\u00d7 faster than PRISM and scales to full-sized problems.",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(Zhou and Sato, 2003)",
"ref_id": "BIBREF48"
},
{
"start": 289,
"end": 291,
"text": "27",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "99%",
"sec_num": null
},
{
"text": "IBAL (Pfeffer, 2001 ) is an elegant and powerful language for probabilistic modeling; it generalizes Bayesian networks in interesting ways. 28 Since PCFGs and marginalization can be succinctly expressed in IBAL, we attempted a performance comparison on the task of the inside algorithm (Fig. 1) . Unfortunately, IBAL's algorithm appears not to terminate if the PCFG contains any kind of recursion reachable from the start symbol.",
"cite_spans": [
{
"start": 5,
"end": 19,
"text": "(Pfeffer, 2001",
"ref_id": "BIBREF32"
},
{
"start": 140,
"end": 142,
"text": "28",
"ref_id": null
}
],
"ref_spans": [
{
"start": 286,
"end": 294,
"text": "(Fig. 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "99%",
"sec_num": null
},
{
"text": "Weighted deduction is a powerful theoretical formalism that encompasses many NLP algorithms (Goodman, 1999) . We have given a bottom-up \"inside\" algorithm for general semiring-weighted deduction, based on a prioritized agenda, and a general \"outside\" algorithm that correctly computes weight gradients even when the inside algorithm is pruned.",
"cite_spans": [
{
"start": 92,
"end": 107,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
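{
"text": "To make the algorithm concrete, here is a minimal Python sketch (ours, not the compiler's output or the paper's implementation) of the agenda loop of Fig. 3, specialized to ground rules so that unification of rule variables, side conditions, and the prioritized agenda of \u00a74.2 are all omitted; every identifier below is illustrative.\n\nfrom collections import defaultdict\n\ndef agenda_inside(axioms, rules, plus=lambda x, y: x + y,\n                  times=lambda x, y: x * y, zero=0.0):\n    # axioms: dict mapping ground item -> axiom value.\n    # rules: list of (consequent, [antecedent items]), meaning\n    #        consequent (+)= antecedent_1 (*) ... (*) antecedent_k.\n    by_antecedent = defaultdict(list)          # antecedent -> rules it can feed\n    for c, ants in rules:\n        for i, a in enumerate(ants):\n            by_antecedent[a].append((c, ants, i))\n    chart = defaultdict(lambda: zero)\n    agenda = dict(axioms)                      # line 1: pending updates\n    while agenda:                              # line 2\n        a, delta = agenda.popitem()            # lines 4-5 (no priority queue here)\n        old = chart[a]\n        chart[a] = plus(old, delta)            # line 6\n        if chart[a] == old:                    # line 7: propagate real changes only\n            continue\n        for c, ants, i in by_antecedent[a]:    # lines 9-11 (ground rules only)\n            prod = None                        # line 12: build the product of factors\n            for j, aj in enumerate(ants):\n                factor = old if (j < i and aj == a) else (delta if j == i else chart[aj])\n                prod = factor if prod is None else times(prod, factor)\n            if prod != zero:\n                agenda[c] = plus(agenda.get(c, zero), prod)\n    return chart\n\n# Toy usage (strings stand for ground terms; values are probabilities):\naxioms = {'rewrite(s,np,vp)': 0.9, 'constit(np,0,1)': 1.0, 'constit(vp,1,2)': 1.0}\nrules = [('constit(s,0,2)', ['rewrite(s,np,vp)', 'constit(np,0,1)', 'constit(vp,1,2)']),\n         ('goal', ['constit(s,0,2)'])]\nprint(agenda_inside(axioms, rules)['goal'])    # -> 0.9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},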
{
"text": "We have also proposed a declarative language, Dyna, that replaces Prolog's Horn clauses with \"Horn equations\" over terms with values. Dyna can express more than the semiring-weighted dynamic programs treated in this paper. Our ongoing work concerns the full Dyna language, program transformations, and feedback-directed optimization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
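{
"text": "As an illustration (our rendering, in the spirit of the paper's CKY example; the exact surface syntax may differ): a Horn equation such as constit(X,I,J) += rewrite(X,Y,Z) * constit(Y,I,Mid) * constit(Z,Mid,J) plays the role of a Horn clause, but instead of merely deriving the consequent it also specifies how the antecedents' values combine, here multiplying rule and subconstituent probabilities and summing (via +=) over all ways of proving the same item.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},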
{
"text": "Finally, we evaluated our first implementation of a Dyna-to-C++ compiler (download and documentation at http://dyna.org). We hope it will facilitate EMNLP research, just as FS toolkits have done for the FS case. It produces code that is slower than hand-crafted code but acceptably fast for our NLP research, where it has been extremely helpful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "That is, the probability that s would stochastically rewrite to the first three words of the input. If this can happen in more than one way, the probability sums over multiple derivations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Often they use some variant of the unweighted agendabased algorithm, which is known in that community as \"seminaive bottom-up evaluation.\"6 An unweighted parser was implemented in an earlier version of LOLA(Specht and Freitag, 1995).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This also affects completeness, as it sometimes enables the computation of goal to terminate even if the program as a whole contains some irrelevant non-terminating computation. Even in practical cases, the runtime of computing all items is often prohibitive, e.g., proportional to n 6 or worse for a dense treeadjoining grammar or synchronous grammar.10 It satisfies x \u2295 0 = x, x \u2297 0 = 0 for all x. Also, this algorithm requires \u2297 to distribute over \u2295. Dyna's semantics requires \u2295 to be associative and commutative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
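{
"text": "Familiar instances satisfying these requirements include the inside semiring (\u2295, \u2297) = (+, \u00d7) with 0 = 0, the Viterbi semiring (max, \u00d7), the tropical semiring (min, +) with 0 = \u221e, and the boolean semiring (\u2228, \u2227) with 0 = false; for example, in the Python sketch given earlier one would pass plus=max to obtain Viterbi (best-derivation) weights instead of inside weights (an illustrative use of our sketch, not of the compiler).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},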
{
"text": "An agenda update that increases x by 0.3 will increase r * x * x by r * (0.6x + 0.09). Hence, the rule x += r * x * x must propagate a new increase of that size to x, via the agenda.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
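{
"text": "In general, if x increases by \u2206 then r * x * x increases by r * ((x + \u2206)\u00b2 \u2212 x\u00b2) = r * (2x\u2206 + \u2206\u00b2); with \u2206 = 0.3 this is r * (0.6x + 0.09), and it is exactly this quantity that the rule must propagate to x via the agenda.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},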
{
"text": "This holds for all Datalog programs, for instance.13 This argument does not hold if Dyna is used to express programs outside the semiring. In particular, one can write instances of SAT and other NP-hard constraint satisfaction problems by using cyclic rules with negation over finitely many boolean-valued items(Niemel\u00e4, 1998). Here the agenda algorithm can end up flipping values forever between false and true; a more general solver would have to be called in order to find a stable model of a SAT problem's equations.14 Still assuming the number of items is finite, one could in principle materialize the system of equations and call a dedicated numerical solver. In some special cases only a linear solver is needed: e.g., for unary rule cycles(Stolcke, 1995), or -cycles in FSMs(Eisner, 2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The compiled code provides garbage collection on the terms; this is important when running over large datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If one is willing to risk floating-point error, then one can store only a, old on the stack and recover \u2206 as chart[a] \u2212 old. Also, agenda[a] and gagenda[a] can be stored in the same location, as they are only used during the forward and the backward pass, respectively.21 In brief, a later forward pass that chooses a atFig. 3, line 4 according to the recorded tape order (1) is faster than using a priority queue, (2) avoids ordering-related discontinuities in the objective function as the axiom weights change, (3) can prune by skipping useless updates a that scarcely affected goal (e.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
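,
{
"text": "A small sketch of this bookkeeping (ours, and valid only when \u2295 is ordinary addition): the forward pass pushes (a, old) onto a tape, and the backward pass recovers each \u2206 by subtraction and restores the chart as in the gradient algorithm's final step.\n\ntape = []  # records (item, old chart value) in forward order\n\ndef forward_update(chart, a, delta):\n    tape.append((a, chart.get(a, 0.0)))   # store only (a, old)\n    chart[a] = chart.get(a, 0.0) + delta\n\ndef replay_backward(chart):\n    while tape:\n        a, old = tape.pop()\n        delta = chart[a] - old            # recover \u2206, risking floating-point error\n        chart[a] = old                    # restore the pre-update value\n        yield a, delta, old",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}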
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentences with \u226410 words, stripping punctuation",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sentences with \u226410 words, stripping punctuation.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Thus it can handle a subset of the cases described by Goodman (1999), again by building the whole parse forest. 28 It might be possible to implement IBAL in Dyna",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thus it can handle a subset of the cases described by Goodman (1999), again by building the whole parse forest. 28 It might be possible to implement IBAL in Dyna (Pfeffer,",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Space-efficient inference in dynamic probabilistic networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "Binder",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Russell",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Binder, K. Murphy, and S. Russell. 1997. Space-efficient inference in dynamic probabilistic networks. In Proc. of IJCAI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "New figures of merit for best-first probabilistic chart parsing",
"authors": [
{
"first": "S",
"middle": [
"A"
],
"last": "Caraballo",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1998,
"venue": "CL",
"volume": "24",
"issue": "2",
"pages": "275--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. A. Caraballo and E. Charniak. 1998. New figures of merit for best-first proba- bilistic chart parsing. CL, 24(2):275-298.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Edge-based best-first chart parsing",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak, S. Goldwater, and M. Johnson. 1998. Edge-based best-first chart parsing. In Proc. of COLING-ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "M",
"middle": [
"J"
],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. J. Collins. 1999. Head-Driven Statistical Models for Natural Language Pars- ing. Ph.D. thesis, U. of Pennsylvania.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pattern databases. Computational Intelligence",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Culberson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schaeffer",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. C. Culberson and J. Schaeffer. 1998. Pattern databases. Computational Intelli- gence.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Maximum likelihood estimation from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society B",
"volume": "39",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An efficient context-free parsing algorithm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Earley",
"suffix": ""
}
],
"year": 1970,
"venue": "Communications of the ACM",
"volume": "13",
"issue": "2",
"pages": "94--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94-102.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient parsing for bilexical CFGs and headautomaton grammars",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner and G. Satta. 1999. Efficient parsing for bilexical CFGs and head- automaton grammars. In Proc. of ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner and D. A. Smith. 2005a. Quasi-synchronous grammars: Alignment by soft projection of syntactic dependencies. Technical report, Johns Hopkins U.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parsing with soft and hard constraints on dependency length",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner and N. A. Smith. 2005b. Parsing with soft and hard constraints on dependency length. In Proc. of IWPT.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dyna is a lower-level language that itself knows nothing about the semantics of probability models, but whose inference rules could be used to implement any kind of message passing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "p.c.). Dyna is a lower-level language that itself knows nothing about the semantics of probability models, but whose inference rules could be used to implement any kind of message passing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dyna: A declarative language for implementing dynamic programs",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Goldlust",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner, E. Goldlust, and N. A. Smith. 2004. Dyna: A declarative language for implementing dynamic programs. In Proc. of ACL (companion vol.).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Smoothing a Probabilistic Lexicon via Syntactic Transformations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 2001. Smoothing a Probabilistic Lexicon via Syntactic Transforma- tions. Ph.D. thesis, U. of Pennsylvania.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parameter estimation for probabilistic FS transducers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 2002. Parameter estimation for probabilistic FS transducers. In Proc. of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semiring parsing. CL",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "25",
"issue": "",
"pages": "573--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goodman. 1999. Semiring parsing. CL, 25(4):573-605.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic Differentiation of Algorithms",
"authors": [
{
"first": "A",
"middle": [],
"last": "Griewank",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corliss",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Griewank and G. Corliss, editors. 1991. Automatic Differentiation of Algo- rithms. SIAM.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Inside-outside",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnson. 2000. Inside-outside (computer program).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Regular expressions for language engineering",
"authors": [
{
"first": "L",
"middle": [],
"last": "Karttunen",
"suffix": ""
},
{
"first": "J.-P",
"middle": [],
"last": "Chanod",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Schiller",
"suffix": ""
}
],
"year": 1996,
"venue": "JNLE",
"volume": "2",
"issue": "4",
"pages": "305--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Karttunen, J.-P. Chanod, G. Grefenstette, and A. Schiller. 1996. Regular ex- pressions for language engineering. JNLE, 2(4):305-328.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Theory of generalized annotated logic programming and its applications",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kifer",
"suffix": ""
},
{
"first": "V",
"middle": [
"S"
],
"last": "Subrahmanian",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of Logic Programming",
"volume": "12",
"issue": "4",
"pages": "335--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Kifer and V. S. Subrahmanian. 1992. Theory of generalized annotated logic programming and its applications. Journal of Logic Programming, 12(4):335-368.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A generative constituent-context model for grammar induction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. 2002. A generative constituent-context model for grammar induction. In Proc. of ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A * parsing: Fast exact Viterbi parse selection",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. 2003a. A * parsing: Fast exact Viterbi parse selec- tion. In Proc. of HLT-NAACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. 2003b. Accurate unlexicalized parsing. In Proc. of ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling sequence data. In Proc. of ICML.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multitext grammars and synchronous parsers",
"authors": [
{
"first": "I",
"middle": [
"D"
],
"last": "Melamed",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. D. Melamed. 2003. Multitext grammars and synchronous parsers. In Proc. HLT-NAACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Magic for filter optimization in dynamic bottom-up processing",
"authors": [
{
"first": "G",
"middle": [],
"last": "Minnen",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Minnen. 1996. Magic for filter optimization in dynamic bottom-up processing. In Proc. of ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A rational design for a weighted FST library",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 1998,
"venue": "LNCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Mohri, F. Pereira, and M. Riley. 1998. A rational design for a weighted FST library. LNCS, 1436.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Weighted deductive parsing and Knuth's algorithm. CL",
"authors": [
{
"first": "M.-J",
"middle": [],
"last": "Nederhof",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "29",
"issue": "",
"pages": "135--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.-J. Nederhof. 2003. Weighted deductive parsing and Knuth's algorithm. CL, 29(1):135-143.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Logic programs with stable model semantics as a constraint programming paradigm",
"authors": [
{
"first": "I",
"middle": [],
"last": "Niemel\u00e4",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. Workshop on Computational Aspects of Nonmonotonic Reasoning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Niemel\u00e4. 1998. Logic programs with stable model semantics as a constraint programming paradigm. In Proc. Workshop on Computational Aspects of Nonmonotonic Reasoning.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Inside-outside reestimation from partially bracketed corpora",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pereira and Y. Schabes. 1992. Inside-outside reestimation from partially brack- eted corpora. In Proc. of ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Parsing as deduction",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "D",
"middle": [
"H D"
],
"last": "Warren",
"suffix": ""
}
],
"year": 1983,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pereira and D. H. D. Warren. 1983. Parsing as deduction. In Proc. of ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "IBAL: An integrated Bayesian agent language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pfeffer",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Pfeffer. 2001. IBAL: An integrated Bayesian agent language. In Proc. of IJCAI.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Machine discovery of effective admissible heuristics. Machine Learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Prieditis",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "12",
"issue": "",
"pages": "117--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Prieditis. 1993. Machine discovery of effective admissible heuristics. Ma- chine Learning, 12:117-41.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The CORAL deductive system",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sudarshan",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Seshadri",
"suffix": ""
}
],
"year": 1994,
"venue": "The VLDB Journal",
"volume": "3",
"issue": "2",
"pages": "161--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Ramakrishnan, D. Srivastava, S. Sudarshan, and P. Seshadri. 1994. The CORAL deductive system. The VLDB Journal, 3(2):161-210.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Magic templates: a spellbinding approach to logic programs",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
}
],
"year": 1991,
"venue": "J. Log. Program",
"volume": "11",
"issue": "3-4",
"pages": "189--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Ramakrishnan. 1991. Magic templates: a spellbinding approach to logic pro- grams. J. Log. Program., 11(3-4):189-216.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Monotonic aggregation in deductive databases",
"authors": [
{
"first": "K",
"middle": [
"A"
],
"last": "Ross",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagiv",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. A. Ross and Y. Sagiv. 1992. Monotonic aggregation in deductive databases. In Proc. of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Principles and implementation of deductive parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Logic Programming",
"volume": "24",
"issue": "1-2",
"pages": "3--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Shieber, Y. Schabes, and F. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24(1-2):3-36.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Parsing Schemata: A Framework for Specification and Analysis of Parsing Algorithms. Texts in Theoretical Computer Science",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sikkel",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Sikkel. 1997. Parsing Schemata: A Framework for Specification and Analysis of Parsing Algorithms. Texts in Theoretical Computer Science. Springer.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Annealing techniques for unsupervised statistical language learning",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. A. Smith and J. Eisner. 2004. Annealing techniques for unsupervised statisti- cal language learning. In Proc. of ACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Contrastive estimation: Training log-linear models on unlabeled data",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. of ACL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Bilingual parsing with factored estimation: Using English to parse Korean",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. A. Smith and N. A. Smith. 2004. Bilingual parsing with factored estimation: Using English to parse Korean. In Proc. of EMNLP.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Context-based morphological disambiguation with random fields",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "Tromble",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. A. Smith, D. A. Smith, and R. W. Tromble. 2005. Context-based morphologi- cal disambiguation with random fields. In Proc. of HLT-EMNLP.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "AMOS: A NL parser implemented as a deductive database in LOLA",
"authors": [
{
"first": "G",
"middle": [],
"last": "Specht",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Freitag",
"suffix": ""
}
],
"year": 1995,
"venue": "Applications of Logic Databases",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Specht and B. Freitag. 1995. AMOS: A NL parser implemented as a deductive database in LOLA. In Applications of Logic Databases. Kluwer.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "An efficient probabilistic CF parsing algorithm that computes prefix probabilities",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1995,
"venue": "CL",
"volume": "21",
"issue": "2",
"pages": "165--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 1995. An efficient probabilistic CF parsing algorithm that computes prefix probabilities. CL, 21(2):165-201.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Unfold/fold transformation of logic programs",
"authors": [
{
"first": "H",
"middle": [],
"last": "Tamaki",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 1984,
"venue": "Proceedings Second International Conference on Logic Programming",
"volume": "",
"issue": "",
"pages": "127--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Tamaki and T. Sato. 1984. Unfold/fold transformation of logic programs. In S.\u00c5. T\u00e4rnlund, editor, Proceedings Second International Conference on Logic Programming, pages 127-138, Uppsala University.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A learning algorithm for continually running fully recurrent neural networks",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zipser",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural Computation",
"volume": "1",
"issue": "2",
"pages": "270--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. J. Williams and D. Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270-280.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "CL",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. CL, 23(3):377-404.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Toward a high-performance system for symbolic and statistical modeling",
"authors": [
{
"first": "N.-F",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of Workshop on Learning Statistical Models from Relational Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N.-F. Zhou and T. Sato. 2003. Toward a high-performance system for symbolic and statistical modeling. In Proc. of Workshop on Learning Statistical Models from Relational Data.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "The deductive database system LOLA",
"authors": [
{
"first": "U",
"middle": [],
"last": "Zukowski",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Freitag",
"suffix": ""
}
],
"year": 1997,
"venue": "Logic Programming and Nonmonotonic Reasoning, LNAI 1265",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "U. Zukowski and B. Freitag. 1997. The deductive database system LOLA. In Logic Programming and Nonmonotonic Reasoning, LNAI 1265. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Weighted agenda-based deduction in a semiring, without side conditions (see text).",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "of simultaneously unifying r's remaining antecedent patterns a1, . . . ai\u22121, ai+1, . . . a k with items that may have non-0 value in the chart 11. construct r's consequent c (* all vars are bound *)",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Dyna CKY parser vs. Klein & Manning hand-built parser, comparing runtime.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Dyna CKY parser vs. C++PARSE, a similar handbuilt parser. The implementation differences amount to storage and indexing and give a consistent 5-fold speedup.",
"num": null
},
"TABREF0": {
"content": "<table><tr><td>4.</td><td>choose such an a</td><td/><td/><td/></tr><tr><td>5.</td><td colspan=\"3\">\u2206 := agenda[a]; agenda[a] := 0</td><td/></tr><tr><td>6.</td><td/><td/><td/><td/></tr><tr><td>8.</td><td colspan=\"4\">(* compute new resulting updates and place them on the agenda *)</td></tr><tr><td>9.</td><td colspan=\"4\">for each inference rule \"c \u2295= a1 \u2297 a2 \u2297 \u2022 \u2022 \u2022 \u2297 a k \"</td></tr><tr><td>10.</td><td>for i from 1 to k</td><td/><td/><td/></tr><tr><td>11.</td><td colspan=\"4\">for each way of instantiating the rule's variables</td></tr><tr><td/><td>such that ai = a</td><td/><td/><td/></tr><tr><td>12.</td><td>agenda[c] \u2295=</td><td>k O j=1</td><td>8 &gt; &lt; &gt; :</td><td>old \u2206 chart[aj] otherwise if j &lt; i and aj = a if j = i</td></tr><tr><td/><td colspan=\"4\">(* can skip this line if any multiplicand is 0 *)</td></tr></table>",
"text": "1. for each axiom a, set agenda[a] := value of axiom a 2. while there is an item a with agenda[a] = 0 3. (* remove an item from the agenda and move its value to the chart *) := chart[a]; chart[a] := chart[a] \u2295 \u2206 7. if chart[a] = old (* only propagate actual changes *)",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"5\">1. for each a, gchart[a] := 0 and gagenda[a] := 0</td></tr><tr><td/><td/><td/><td/><td>(* \u2206 is agenda[a] *)</td></tr><tr><td>4.</td><td>\u0393 := gchart[a]</td><td/><td/><td>(* will accumulate gagenda[a] here *)</td></tr><tr><td>5.</td><td colspan=\"4\">for each inference rule \"c += a1 * a2 * \u2022 \u2022 \u2022 * a k \"</td></tr><tr><td>6.</td><td>for i from 1 to k</td><td/><td/><td/></tr><tr><td>7.</td><td colspan=\"4\">for each way of instantiating the rule's variables</td></tr><tr><td/><td colspan=\"2\">such that ai = a</td><td/><td/></tr><tr><td>8.</td><td colspan=\"4\">for h from 1 to k such that a h is not a side cond.</td></tr><tr><td/><td colspan=\"4\">(* find \u2202goal/\u2202agenda[c] \u2022 \u2202agenda[c]/\u2202(a h factor) *)</td></tr><tr><td>9.</td><td>\u03b3 :=</td><td>k Y</td><td>8 &gt; &gt; &gt; &lt;</td><td>gagenda[c] if j = h old if j = h and j &lt; i and aj = a</td></tr><tr><td/><td/><td>j=1</td><td>&gt; &gt; &gt; :</td><td>\u2206 chart[aj] otherwise if j = h and j = i</td></tr><tr><td>10.</td><td colspan=\"4\">if h = i then gchart[a h ] += \u03b3</td></tr><tr><td>11.</td><td colspan=\"4\">if h \u2264 i and a h = a then \u0393 += \u03b3</td></tr><tr><td>12.</td><td>gagenda[a] := \u0393</td><td/><td/><td/></tr><tr><td>13.</td><td>chart[a] := old</td><td/><td/><td/></tr><tr><td colspan=\"5\">14. return gagenda[a] for each axiom a</td></tr></table>",
"text": "(* respectively hold \u2202goal/\u2202chart[a] and \u2202goal/\u2202agenda[a] *) 2. gchart[goal] := 1 3. for each a, \u2206, old triple that was considered at line 8 ofFig. 3, but in the reverse order",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table/>",
"text": "Early stopping. Each row describes a PCFG at a different stage of training; later PCFGs are sharper. The table shows the percentage of agenda runtime (mean across 1409 sentences, and standard deviation) required to get within 99% or 99.99% of the true value of goal.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}