{
"paper_id": "N04-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:44:48.777758Z"
},
"title": "Training Tree Transducers",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way Marina del Rey",
"postCode": "90292",
"region": "CA"
}
},
"email": "[email protected]"
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many probabilistic models for natural language are now written in terms of hierarchical tree structure. Tree-based modeling still lacks many of the standard tools taken for granted in (finitestate) string-based modeling. The theory of tree transducer automata provides a possible framework to draw on, as it has been worked out in an extensive literature. We motivate the use of tree transducers for natural language and address the training problem for probabilistic tree-totree and tree-to-string transducers.",
"pdf_parse": {
"paper_id": "N04-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Many probabilistic models for natural language are now written in terms of hierarchical tree structure. Tree-based modeling still lacks many of the standard tools taken for granted in (finitestate) string-based modeling. The theory of tree transducer automata provides a possible framework to draw on, as it has been worked out in an extensive literature. We motivate the use of tree transducers for natural language and address the training problem for probabilistic tree-totree and tree-to-string transducers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Much of natural language work over the past decade has employed probabilistic finite-state transducers (FSTs) operating on strings. This has occurred somewhat under the influence of speech recognition, where transducing acoustic sequences to word sequences is neatly captured by left-to-right stateful substitution. Many conceptual tools exist, such as Viterbi decoding (Viterbi, 1967) and forward-backward training (Baum and Eagon, 1967) , as well as generic software toolkits. Moreover, a surprising variety of problems are attackable with FSTs, from partof-speech tagging to letter-to-sound conversion to name transliteration.",
"cite_spans": [
{
"start": 370,
"end": 385,
"text": "(Viterbi, 1967)",
"ref_id": "BIBREF25"
},
{
"start": 416,
"end": 438,
"text": "(Baum and Eagon, 1967)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, language problems like machine translation break this mold, because they involve massive reordering of symbols, and because the transformation processes seem sensitive to hierarchical tree structure. Recently, specific probabilistic tree-based models have been proposed not only for machine translation (Wu, 1997; Alshawi, Bangalore, and Douglas, 2000; Yamada and Knight, 2001; Gildea, 2003; Eisner, 2003) , but also for This work was supported by DARPA contract F49620-00-1-0337 and ARDA contract MDA904-02-C-0450.",
"cite_spans": [
{
"start": 312,
"end": 322,
"text": "(Wu, 1997;",
"ref_id": "BIBREF26"
},
{
"start": 323,
"end": 361,
"text": "Alshawi, Bangalore, and Douglas, 2000;",
"ref_id": "BIBREF1"
},
{
"start": 362,
"end": 386,
"text": "Yamada and Knight, 2001;",
"ref_id": "BIBREF27"
},
{
"start": 387,
"end": 400,
"text": "Gildea, 2003;",
"ref_id": null
},
{
"start": 401,
"end": 414,
"text": "Eisner, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "summarization (Knight and Marcu, 2002) , paraphrasing (Pang, Knight, and Marcu, 2003) , natural language generation (Langkilde and Knight, 1998; Bangalore and Rambow, 2000; Corston-Oliver et al., 2002) , and language modeling (Baker, 1979; Lari and Young, 1990; Collins, 1997; Chelba and Jelinek, 2000; Charniak, 2001 ; Klein and Manning, 2003) . It is useful to understand generic algorithms that may support all these tasks and more. (Rounds, 1970) and (Thatcher, 1970) independently introduced tree transducers as a generalization of FSTs. Rounds was motivated by natural language. The Rounds tree transducer is very similar to a left-to-right FST, except that it works top-down, pursuing subtrees in parallel, with each subtree transformed depending only on its own passed-down state. This class of transducer is often nowadays called R, for \"Root-to-frontier\" (G\u00e9cseg and Steinby, 1984) .",
"cite_spans": [
{
"start": 14,
"end": 38,
"text": "(Knight and Marcu, 2002)",
"ref_id": "BIBREF17"
},
{
"start": 54,
"end": 85,
"text": "(Pang, Knight, and Marcu, 2003)",
"ref_id": "BIBREF21"
},
{
"start": 116,
"end": 144,
"text": "(Langkilde and Knight, 1998;",
"ref_id": "BIBREF19"
},
{
"start": 145,
"end": 172,
"text": "Bangalore and Rambow, 2000;",
"ref_id": "BIBREF3"
},
{
"start": 173,
"end": 201,
"text": "Corston-Oliver et al., 2002)",
"ref_id": "BIBREF9"
},
{
"start": 226,
"end": 239,
"text": "(Baker, 1979;",
"ref_id": "BIBREF2"
},
{
"start": 240,
"end": 261,
"text": "Lari and Young, 1990;",
"ref_id": "BIBREF20"
},
{
"start": 262,
"end": 276,
"text": "Collins, 1997;",
"ref_id": "BIBREF7"
},
{
"start": 277,
"end": 302,
"text": "Chelba and Jelinek, 2000;",
"ref_id": "BIBREF6"
},
{
"start": 303,
"end": 317,
"text": "Charniak, 2001",
"ref_id": "BIBREF5"
},
{
"start": 320,
"end": 344,
"text": "Klein and Manning, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 436,
"end": 450,
"text": "(Rounds, 1970)",
"ref_id": "BIBREF22"
},
{
"start": 455,
"end": 471,
"text": "(Thatcher, 1970)",
"ref_id": "BIBREF24"
},
{
"start": 865,
"end": 891,
"text": "(G\u00e9cseg and Steinby, 1984)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Rounds uses a mathematics-oriented example of an R transducer, which we summarize in Figure 1 . At each point in the top-down traversal, the transducer chooses a production to apply, based only on the current state and the current root symbol. The traversal continues until there are no more state-annotated nodes. Nondeterministic transducers may have several productions with the same left-hand side, and therefore some free choices to make during transduction.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An R transducer compactly represents a potentiallyinfinite set of input/output tree pairs: exactly those pairs (T1, T2) for which some sequence of productions applied to T1 (starting in the initial state) results in T2. This is similar to an FST, which compactly represents a set of input/output string pairs, and in fact, R is a generalization of FST. If we think of strings written down vertically, as degenerate trees, we can convert any FST into an R transducer by automatically replacing FST transitions with R productions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "R does have some extra power beyond path following and state-based record keeping. It can copy whole subtrees, and transform those subtrees differently. It can also delete subtrees without inspecting them (imagine by analogy an FST that quits and accepts right in the middle of an input string). Variants of R that disallow copying and deleting are called RL (for linear) and RN (for nondeleting), respectively. One advantage of working with tree transducers is the large and useful body of literature about these automata; two excellent surveys are (G\u00e9cseg and Steinby, 1984) and (Comon et al., 1997) . For example, R is not closed under composition (Rounds, 1970) , and neither are RL or F (the \"frontier-to-root\" cousin of R), but the non-copying FL is closed under composition. Many of these composition results are first found in (Engelfriet, 1975) .",
"cite_spans": [
{
"start": 550,
"end": 576,
"text": "(G\u00e9cseg and Steinby, 1984)",
"ref_id": "BIBREF13"
},
{
"start": 581,
"end": 601,
"text": "(Comon et al., 1997)",
"ref_id": "BIBREF8"
},
{
"start": 651,
"end": 665,
"text": "(Rounds, 1970)",
"ref_id": "BIBREF22"
},
{
"start": 835,
"end": 853,
"text": "(Engelfriet, 1975)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "R has surprising ability to change the structure of an input tree. For example, it may not be initially obvious how an R transducer can transform the English structure S(PRO, VP(V, NP)) into the Arabic equivalent S(V, PRO, NP), as it is difficult to move the subject PRO into position between the verb V and the direct object NP. First, R productions have no lookahead capability-the left-handside of the S production consists only of q S(x0, x1), although we want our English-to-Arabic transformation to apply only when it faces the entire structure q S(PRO, VP(V, NP)). However, we can simulate lookahead using states, as in these productions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "-q S(x0, x1) \u2192 S(qpro x0, qvp.v.np x1) -qpro PRO \u2192 PRO -qvp.v.np VP(x0, x1) \u2192 VP(qv x0, qnp x1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By omitting rules like qpro NP \u2192 ..., we ensure that the entire production sequence will dead-end unless the first child of the input tree is in fact PRO. So finite lookahead is not a problem. The next problem is how to get the PRO to appear in between the V and NP, as in Arabic. This can be carried out using copying. We make two copies of the English VP, and assign them different states:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "-q S(x0,x1) \u2192 S(qleft.vp.v x1, qpro x0, qright.vp.np x1) -qpro PRO \u2192 PRO -qleft.vp.v VP(x0, x1) \u2192 qv x0 -qright.vp.np VP(x0, x1) \u2192 qnp x1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
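Below is a small illustrative Python sketch (our own, not part of the paper) that hard-codes the four copying productions above and applies them top-down to S(PRO, VP(V, NP)). The two terminal rules for qv and qnp at the bottom are assumptions added so the derivation can finish; the paper leaves them implicit.

```python
# Illustrative sketch of the copying productions above; trees are (label, [children]).

def tree(label, *children):
    return (label, list(children))

def apply_state(state, t):
    """Apply the hand-written productions for one (state, input subtree) pair."""
    label, kids = t
    if state == "q" and label == "S":              # q S(x0,x1) -> S(qleft.vp.v x1, qpro x0, qright.vp.np x1)
        x0, x1 = kids
        return tree("S",
                    apply_state("qleft.vp.v", x1),   # first copy of the VP keeps only the V
                    apply_state("qpro", x0),
                    apply_state("qright.vp.np", x1)) # second copy keeps only the NP
    if state == "qpro" and label == "PRO":         # qpro PRO -> PRO
        return tree("PRO")
    if state == "qleft.vp.v" and label == "VP":    # qleft.vp.v VP(x0,x1) -> qv x0
        return apply_state("qv", kids[0])
    if state == "qright.vp.np" and label == "VP":  # qright.vp.np VP(x0,x1) -> qnp x1
        return apply_state("qnp", kids[1])
    if state == "qv" and label == "V":             # assumed terminal rules: qv V -> V, qnp NP -> NP
        return tree("V")
    if state == "qnp" and label == "NP":
        return tree("NP")
    raise ValueError("derivation dead-ends")       # e.g. qpro applied to NP has no rule

english = tree("S", tree("PRO"), tree("VP", tree("V"), tree("NP")))
print(apply_state("q", english))   # ('S', [('V', []), ('PRO', []), ('NP', [])])
```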
{
"text": "While general properties of R are understood, there are many algorithmic questions. In this paper, we take on the problem of training probabilistic R transducers. For many language problems (machine translation, paraphrasing, text compression, etc.), it is possible to collect training data in the form of tree pairs and to distill linguistic knowledge automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our problem statement is: Given (1) a particular transducer with productions P, and (2) a finite training set of sample input/output tree pairs, we want to produce (3) a probability estimate for each production in P such that we maximize the probability of the output trees given the input trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As organized in the rest of this paper, we accomplish this by intersecting the given transducer with each input/output pair in turn. Each such intersection produces a set of weighted derivations that are packed into a regular tree grammar (Sections 3-5), which is equivalent to a tree substitution grammar. The inside and outside probabilities of this packed derivation structure are used to compute expected counts of the productions from the original, given transducer (Sections 6-7). Section 9 gives a sample transducer implementing a published machine translation model; some readers may wish to skip to this section directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "T \u03a3 is the set of (rooted, ordered, labeled, finite) trees over alphabet \u03a3. An alphabet is just a finite set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "T \u03a3 (X) are the trees over alphabet \u03a3, indexed by Xthe subset of T \u03a3\u222aX where only leaves may be labeled by X. (T \u03a3 (\u2205) = T \u03a3 .) Leaves are nodes with no children.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "The nodes of a tree t are identified one-to-one with its paths:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "paths t \u2282 paths \u2261 N * \u2261 \u221e i=0 N i (A 0 \u2261 {()}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "The path to the root is the empty sequence (), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "p 1 extended by p 2 is p 1 \u2022 p 2 , where \u2022 is concatenation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "For p \u2208 paths t , rank t (p) is the number of children, or rank, of the node at p in t, and label t (p) \u2208 \u03a3 \u222a X is its label. The ranked label of a node is the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "pair labelandrank t (p) \u2261 (label t (p), rank t (p)). For 1 \u2264 i \u2264 rank t (p), the i th child of the node at p is located at path p \u2022 (i). The subtree at path p of t is t \u2193 p, defined by paths t\u2193p \u2261 {q | p \u2022 q \u2208 paths t } and labelandrank t\u2193p (q) \u2261 labelandrank t (p \u2022 q).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "The paths to X in t are paths t (X) \u2261 {p \u2208 paths t | label t (p) \u2208 X}. A frontier is a set of paths f that are pairwise prefix-independent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "\u2200p 1 , p 2 \u2208 f, p \u2208 paths : p 1 = p 2 \u2022 p =\u21d2 p 1 = p 2 A frontier of t is a frontier f \u2286 paths t . For t, s \u2208 T \u03a3 (X), p \u2208 paths t , t[p \u2190 s] is the substitu- tion of s for p in t,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "where the subtree at path p is replaced by s. For a frontier f of t, the mass substitution of X for the frontier f in t is written t[p \u2190 X, \u2200p \u2208 f ] and is equivalent to substituting the X(p) for the p serially in any order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "Trees may be written as strings over \u03a3 \u222a {(, )} in the usual way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "For example, the tree t = S(NP, VP(V, NP)) has labelandrank t ((2)) = (VP, 2) and labelandrank t ((2, 1)) = (V, 0). For t \u2208 T \u03a3 , \u03c3 \u2208 \u03a3, \u03c3(t) is the tree whose root has label \u03c3 and whose single child is t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
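A minimal Python sketch (illustrative names, not the paper's code) of the path operations defined in this section, using 1-based child indices so that paths match the running example:

```python
# Sketch of paths, ranked labels, subtrees and substitution; trees are (label, [children]).

def subtree(t, path):                        # t "down-arrow" path
    for i in path:
        t = t[1][i - 1]                      # children are 1-indexed, as in the text
    return t

def labelandrank(t, path):
    node = subtree(t, path)
    return (node[0], len(node[1]))           # (label_t(p), rank_t(p))

def substitute(t, path, s):                  # t[path <- s]
    if not path:
        return s
    label, children = t
    i = path[0] - 1
    new_children = list(children)
    new_children[i] = substitute(children[i], path[1:], s)
    return (label, new_children)

t = ("S", [("NP", []), ("VP", [("V", []), ("NP", [])])])
print(labelandrank(t, (2,)))     # ('VP', 2)
print(labelandrank(t, (2, 1)))   # ('V', 0)
print(substitute(t, (2, 1), ("ran", [])))   # the V leaf replaced by a new leaf
```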
{
"text": "The yield of X in t is yield t (X), the string formed by reading out the leaves labeled with X in left-to-right order. The usual case (the yield of t) is yield t \u2261 yield t (\u03a3). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "\u03a3 = {S, NP, VP, PP, PREP, DET, N, V, run, the, of, sons, daughters} N = {qnp, qpp, qdet, qn, qprep} S = q P = {q \u2192 1.0 S(qnp, VP(V(run))), qnp \u2192 0.6 NP(qdet, qn), qnp \u2192 0.4 NP(qnp, qpp), qpp \u2192 1.0 PP(qprep, qnp), qdet \u2192 1.0 DET(the), qprep \u2192 1.0 PREP(of), qn \u2192 0.5 N(sons), qn \u2192 0.5 N(daughters)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trees",
"sec_num": "2"
},
{
"text": "In this section, we describe the regular tree grammar, a common way of compactly representing a potentially infinite set of trees (similar to the role played by the finitestate acceptor FSA for strings). We describe the version (equivalent to TSG (Schabes, 1990) ) where the generated trees are given weights, as are strings in a WFSA.",
"cite_spans": [
{
"start": 247,
"end": 262,
"text": "(Schabes, 1990)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "A weighted regular tree grammar (wRTG) G is a quadruple (\u03a3, N, S, P ), where \u03a3 is the alphabet, N is the finite set of nonterminals, S \u2208 N is the start (or initial) nonterminal, and P \u2286 N \u00d7 T \u03a3 (N )\u00d7 R + is the finite set of weighted productions (R + \u2261 {r \u2208 R | r > 0}). A production (lhs, rhs, w) is written lhs \u2192 w rhs. Productions whose rhs contains no nonterminals (rhs \u2208 T \u03a3 ) are called terminal productions, and rules of the form A \u2192 w B, for A, B \u2208 N are called \u01eb-productions, or epsilon productions, and can be used in lieu of multiple initial nonterminals. Figure 2 shows a sample wRTG. This grammar accepts an infinite number of trees. The tree S(NP(DT(the), N(sons)), VP(V(run))) comes out with probability 0.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 567,
"end": 575,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
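As a concrete check of the example just given, here is a short Python sketch (our own encoding, with assumed names like RULES and weight, not the paper's code) that writes the Figure 2 grammar as data and computes W_G(t) for the cited tree. It terminates because this grammar has no epsilon productions, so every expansion descends into the tree.

```python
# The sample wRTG as Python data; trees and rhs are (label, [children]).
RULES = {  # nonterminal -> list of (weight, rhs)
    "q":     [(1.0, ("S", [("qnp", []), ("VP", [("V", [("run", [])])])]))],
    "qnp":   [(0.6, ("NP", [("qdet", []), ("qn", [])])),
              (0.4, ("NP", [("qnp", []), ("qpp", [])]))],
    "qpp":   [(1.0, ("PP", [("qprep", []), ("qnp", [])]))],
    "qdet":  [(1.0, ("DET", [("the", [])]))],
    "qprep": [(1.0, ("PREP", [("of", [])]))],
    "qn":    [(0.5, ("N", [("sons", [])])), (0.5, ("N", [("daughters", [])]))],
}

def weight(rhs, t):
    """Sum of weights of all ways the (partially expanded) rhs derives tree t."""
    if rhs[0] in RULES:                              # nonterminal leaf: expand it
        return sum(w * weight(r, t) for w, r in RULES[rhs[0]])
    if rhs[0] != t[0] or len(rhs[1]) != len(t[1]):   # terminal label/rank must match
        return 0.0
    result = 1.0
    for rhs_child, t_child in zip(rhs[1], t[1]):
        result *= weight(rhs_child, t_child)
    return result

t = ("S", [("NP", [("DET", [("the", [])]), ("N", [("sons", [])])]),
           ("VP", [("V", [("run", [])])])])
print(weight(("q", []), t))   # 0.3 = 1.0 * 0.6 * 1.0 * 0.5
```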
{
"text": "We define the binary relation \u21d2 G (single-step derives in G) on T \u03a3 (N )\u00d7(paths\u00d7P ) * , pairs of trees and derivation histories, which are logs of (location, production used):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "\u21d2 G \u2261 ((a, h), (b, h \u2022 (p, (l, r, w))) (l, r, w) \u2208 P \u2227 p \u2208 paths a ({l}) \u2227 b = a[p \u2190 r]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "where (a, h) \u21d2 G (b, h \u2022 (p, (l, r, w))) iff tree b may be derived from tree a by using the rule l \u2192 w r to replace the nonterminal leaf l at path p with r. For a derivation history",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "h = ((p 1 , (l 1 , r 1 , w 1 )), . . . , (p n , (l 1 , r 1 , w 1 ))), the weight of h is w(h) \u2261 n i=1 w i , and call h leftmost if L(h) \u2261 \u22001 \u2264 i < n : p i+1 \u226e lex p i . 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "The reflexive, transitive closure of \u21d2 G is written \u21d2 * G (derives in G), and the restriction of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "\u21d2 * G to leftmost derivation histories is \u21d2 L * G (leftmost derives in G). The weight of a becoming b in G is w G (a, b) \u2261 h:(a,())\u21d2 L * G (b,h) w(h)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": ", the sum of weights of all unique (leftmost) derivations transforming a to b, and the weight of t in G is W G (t) = w G (S, t). The weighted regular tree language produced by G is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "L G \u2261 {(t, w) \u2208 T \u03a3 \u00d7 R + | W G (t) = w}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "For every weighted context-free grammar, there is an equivalent wRTG that produces its weighted derivation trees with yields being the string produced, and the yields of regular tree grammars are context free string languages (G\u00e9cseg and Steinby, 1984) .",
"cite_spans": [
{
"start": 226,
"end": 252,
"text": "(G\u00e9cseg and Steinby, 1984)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "What is sometimes called a forest in natural language generation (Langkilde, 2000; Nederhof and Satta, 2002) is a finite wRTG without loops, i.e., \u2200n \u2208 N (n, ()) \u21d2 * G (t, h) =\u21d2 paths t ({n}) = \u2205. Regular tree languages are strictly contained in tree sets of tree adjoining grammars (Joshi and Schabes, 1997) .",
"cite_spans": [
{
"start": 65,
"end": 82,
"text": "(Langkilde, 2000;",
"ref_id": "BIBREF19"
},
{
"start": 83,
"end": 108,
"text": "Nederhof and Satta, 2002)",
"ref_id": null
},
{
"start": 283,
"end": 308,
"text": "(Joshi and Schabes, 1997)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regular Tree Grammars",
"sec_num": "3"
},
{
"text": "Section 1 informally described the root-to-frontier transducer class R. We saw that R allows, by use of states, finite lookahead and arbitrary rearrangement of nonsibling input subtrees removed by a finite distance. However, it is often easier to write rules that explicitly represent such lookahead and movement, relieving the burden on the user to produce the requisite intermediary rules and states. We define xR, a convenience-oriented generalization of weighted R. Because of its good fit to natural language problems, xR is already briefly touched on, though not defined, in (Rounds, 1970) .",
"cite_spans": [
{
"start": 581,
"end": 595,
"text": "(Rounds, 1970)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "A weighted extended-lhs root-to-frontier tree transducer X is a quintuple (\u03a3, \u2206, Q, Q i , R) where \u03a3 is the input alphabet, and \u2206 is the output alphabet, Q is a finite set of states, Q i \u2208 Q is the initial (or start, or root) state, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "R \u2286 Q \u00d7 XRPAT \u03a3 \u00d7 T \u2206 (Q \u00d7 paths) \u00d7 R +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "is a finite set of weighted transformation rules, written (q, pattern) \u2192 w rhs, meaning that an input subtree matching pattern while in state q is transformed into rhs, with Q \u00d7 paths leaves replaced by their (recursive) transformations. The Q\u00d7paths leaves of a rhs are called nonterminals (there may also be terminal leaves labeled by the output tree alphabet \u2206).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "XRPAT \u03a3 is the set of finite tree patterns: predicate functions f : T \u03a3 \u2192 {0, 1} that depend only on the label and rank of a finite number of fixed paths their input. xR is the set of all such transducers. R, the set of conventional top-down transducers, is a subset of xR where the rules are restricted to use finite tree patterns that depend only on the root:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "RPAT \u03a3 \u2261 {p \u03c3,r (t)} where p \u03c3,r (t) \u2261 (label t (()) = \u03c3 \u2227 rank t (()) = r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "Rules whose rhs are a pure T \u2206 with no states/paths for further expansion are called terminal rules. Rules of the form (q, pat) \u2192 w (q \u2032 , ()) are \u01eb-rules, or epsilon rules, which substitute state q \u2032 for state q without producing output, and stay at the current input subtree. Multiple initial states are not needed: we can use a single start state Q i , and instead of each initial state q with starting weight w add the rule (Q i , TRUE) \u2192 w (q, ()) (where TRUE(t) \u2261 1, \u2200t).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "We define the binary relation \u21d2 X for xR tranducer X on T \u03a3\u222a\u2206\u222aQ \u00d7(paths\u00d7R) * , pairs of partially transformed (working) trees and derivation histories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "\u21d2 X \u2261 ((a, h), (b, h \u2022 (i, (q, pat, r, w)))) (q, pat, r, w) \u2208 R \u2227 i \u2208 paths a \u2227 q = label a (i) \u2227 pat(a \u2193 (i \u2022 (1))) = 1 \u2227 b = a i \u2190 r p \u2190 q \u2032 (a \u2193 (i \u2022 (1) \u2022 i \u2032 )), \u2200p : label r (p) = (q \u2032 , i \u2032 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "That is, b is derived from a by application of a rule (q, pat) \u2192 w r to an unprocessed input subtree a \u2193 i which is in state q, replacing it by output given by r, with its nonterminals replaced by the instruction to transform descendant input subtrees at relative path i \u2032 in state q \u2032 . The sources of a rule r = (q, l, rhs, w) \u2208 R are the inputpath parts of the rhs nonterminals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "sources(rhs) \u2261\u02d8i \u2032\u02db\u2203 p \u2208 paths rhs (Q \u00d7 paths), q \u2032 \u2208 Q : label rhs (p) = (q \u2032 , i \u2032 )\u012a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "f the sources of a rule refer to input paths that do not exist in the input, then the rule cannot apply (because a \u2193 (i \u2022 (1) \u2022 i \u2032 ) would not exist). In the traditional statement of R, sources(rhs) is always {(1), . . . , (n)}, writing x i instead of (i), but in xR, we identify mapped input subtrees by arbitrary (finite) paths.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "An input tree is transformed by starting at the root in the initial state, and recursively applying outputgenerating rules to a frontier of (copies of) input subtrees (each marked with their own state), until (in a complete derivation, finishing at the leaves with terminal rules) no states remain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "Let \u21d2 * X , \u21d2 L * X , and w X (a, b) follow from \u21d2 X exactly as in Section 3. Then the weight of (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "i, o) in X is W X (i, o) \u2261 w X (Q i (i), o). The weighted tree trans- duction given by X is X X \u2261 {(i, o, w) \u2208 T \u03a3 \u00d7 T \u2206 \u00d7 R + |W X (i, o) = w}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extended-LHS Tree Transducers (xR)",
"sec_num": "4"
},
{
"text": "Derivation trees for a transducer X = (\u03a3, \u2206, Q, Q i , R) are trees labeled by rules (R) that dictate the choice of rules in a complete X-derivation. Figure 3 shows derivation trees for a particular transducer. In order to generate derivation trees for X automatically, we build a modified transducer X \u2032 . This new transducer produces derivation trees on its output instead of normal output trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 157,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Parsing a Tree Transduction",
"sec_num": "5"
},
{
"text": "X \u2032 is (\u03a3, R, Q, Q i , R \u2032 ), with R \u2032 \u2261 {(q, pattern, rule(yield rhs (Q \u00d7 paths)), w) | rule = (q, pattern, rhs, w) \u2208 R}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing a Tree Transduction",
"sec_num": "5"
},
{
"text": "That is, the original rhs of rules are flattened into a tree of depth 1, with the root labeled by the original rule, and all the non-expanding \u2206-labeled nodes of the rhs removed, so that the remaining children are the nonterminal yield in left to right order. Derivation trees deterministically produce a single weighted output tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing a Tree Transduction",
"sec_num": "5"
},
{
"text": "The derived transducer X \u2032 nicely produces derivation trees for a given input, but in explaining an observed (input/output) pair, we must restrict the possibilities further. Because the transformations of an input subtree depend only on that subtree and its state, we can (Algorithm 1) build a compact wRTG that produces exactly the weighted derivation trees corresponding to Xtransductions (I, ()) \u21d2 * X (O, h) (with weight equal to w X (h)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing a Tree Transduction",
"sec_num": "5"
},
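The sketch below (ours, and much simplified) shows the shape of this construction: it is restricted to classic R rules whose pattern is just the input root's label and rank, and it omits epsilon rules, general xR patterns, and the pruning of useless nonterminals that the paper performs with inside/outside weights. Helper names such as deriv_wrtg are illustrative.

```python
# Simplified sketch of building the derivation wRTG for one (I, O) pair.
# Trees are (label, [children]); paths are tuples of 1-based child indices.

def sub(t, path):
    for i in path:
        t = t[1][i - 1]
    return t

def deriv_wrtg(rules, q0, I, O):
    """rules: list of (state, in_label, in_rank, rhs, weight); rhs is an output
    tree whose leaves may be ('NT', next_state, input_path) nonterminal markers.
    Returns {derivation_nonterminal: [(rule_index, child_nonterminals, weight)]},
    or None if no derivation of O from I exists."""
    prods = {}

    def match(rhs, o_path, holes):
        # Does the Delta-labeled part of rhs match O at o_path?  Nonterminal
        # leaves are collected in `holes` together with their output paths.
        if rhs[0] == 'NT':
            holes.append((rhs, o_path))
            return True
        node = sub(O, o_path)
        if rhs[0] != node[0] or len(rhs[1]) != len(node[1]):
            return False
        return all(match(c, o_path + (k + 1,), holes)
                   for k, c in enumerate(rhs[1]))

    def produce(q, i_path, o_path):
        # Memoized on (state, input path, output path), as in the paper's PRODUCE.
        key = (q, i_path, o_path)
        if key in prods:
            return bool(prods[key])
        prods[key] = []
        node = sub(I, i_path)
        for idx, (state, lab, rank, rhs, w) in enumerate(rules):
            if state != q or lab != node[0] or rank != len(node[1]):
                continue
            holes = []
            if not match(rhs, o_path, holes):
                continue
            kids = [(q2, i_path + ip, op) for (_, q2, ip), op in holes]
            if all(produce(*k) for k in kids):       # every child must be derivable
                prods[key].append((idx, kids, w))
        return bool(prods[key])

    return prods if produce(q0, (), ()) else None
```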
{
"text": "Given a wRTG G = (\u03a3, N, S, P ), we can compute the sums of weights of trees derived using each production by adapting the well-known inside-outside algorithm for weighted context-free (string) grammars (Lari and Young, 1990) .",
"cite_spans": [
{
"start": 202,
"end": 224,
"text": "(Lari and Young, 1990)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "The inside weights using G are given by \u03b2 G : T \u03a3 \u2192 (R\u2212R \u2212 ), giving the sum of weights of all tree-producing derivatons from trees with nonterminal leaves:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "\u03b2 G (t) \u2261 \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 (t,r,w)\u2208P w \u2022 \u03b2 G (r) if t \u2208 N p\u2208pathst(N ) \u03b2 G (label t (p)) otherwise By definition, \u03b2 G (S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "gives the sum of the weights of all trees generated by G. For the wRTG generated by DERIV(X, I, O), this is exactly W X (I, O).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "Outside weights \u03b1 G for a nonterminal are the sums of weights of trees generated by the wRTG that have derivations containing it, but excluding its inside weights (that is, the weights summed do not include the weights of rules used to expand an instance of it). \u03b1G(n \u2208 N ) \u2261 1 if n = S, else: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "uses of n in productions z }| { X p,(n \u2032 ,r,w)\u2208P :labelr (p)=n w \u2022 \u03b1G(n \u2032 ) \u2022 Y p \u2032 \u2208pathsr(N)\u2212{p} \u03b2G(labelr(p \u2032 )) | {z } sibling nonterminals",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "\u2193 i) = 1 \u2227 MATCH O,\u2206 (rhs, o) do (o 1 , . . . , o n ) \u2190 paths rhs (Q \u00d7 paths) sorted by o 1 < lex . . . < lex o n //n = 0 if there are none labelandrank derivrhs (()) \u2190 (r, n) for j \u2190 1 to n do (q \u2032 , i \u2032 ) \u2190 label rhs (o j ) c \u2190 (q \u2032 , i \u2022 i \u2032 , o \u2022 o i ) if \u00acPRODUCE I,O (c) then next r labelandrank derivrhs ((j)) \u2190 (c, 0) anyrule? \u2190 true P \u2190 P \u222a {((q, i, o), derivrhs, w)} if anyrule? then N \u2190 N \u222a {(q, i, o)} return anyrule? end MATCH t,\u03a3 (t \u2032 , p) \u2261 \u2200p \u2032 \u2208 path(t \u2032 ) : label(t \u2032 , p \u2032 ) \u2208 \u03a3 =\u21d2 labelandrank t \u2032 (p \u2032 ) = labelandrank t (p \u2022 p \u2032 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "The possible derivations for a given PRODUCE I,O (q, i, o) are constant and need not be computed more than once, so the function is memoized. We have in the worst case to visit all |Q| \u2022 |I| \u2022 |O| (q, i, o) pairs and have all |R| transducer rules match at each of them. If enumerating rules matching transducer input-patterns and output-subtrees has cost L (constant given a transducer), then DERIV has time complexity O(L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "\u2022 |Q| \u2022 |I| \u2022 |O| \u2022 |R|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "Finally, given inside and outside weights, the sum of weights of trees using a particular production is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "\u03b3 G ((n, r, w) \u2208 P ) \u2261 \u03b1 G (n) \u2022 w \u2022 \u03b2 G (r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
{
"text": "Computing \u03b1 G and \u03b2 G for nonrecursive wRTG is a straightforward translation of the above recursive definitions (using memoization to compute each result only once) and is O(|G|) in time and space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inside-Outside for wRTG",
"sec_num": "6"
},
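The following is an illustrative Python sketch of these inside/outside/gamma computations for a finite, loop-free wRTG such as the derivation grammars built by DERIV. The (lhs, rhs, weight) representation and the helper names are assumptions of this sketch, not the paper's code; reverse postorder of a depth-first search supplies the top-down order needed for the outside pass.

```python
# Inside (beta), outside (alpha) and production sums (gamma) for a loop-free wRTG.
# A production is (lhs, rhs, weight); rhs is (label, [children]); a label that
# also occurs as some production's lhs is treated as a nonterminal leaf.

def inside_outside(productions, start):
    by_lhs = {}
    for lhs, rhs, w in productions:
        by_lhs.setdefault(lhs, []).append((rhs, w))

    def nt_leaves(rhs):
        if rhs[0] in by_lhs:
            return [rhs[0]]
        return [n for child in rhs[1] for n in nt_leaves(child)]

    beta = {}
    def inside(n):                       # beta(n): sum over n's productions
        if n not in beta:
            beta[n] = sum(w * inside_rhs(rhs) for rhs, w in by_lhs[n])
        return beta[n]

    def inside_rhs(rhs):                 # beta of a partially derived tree
        result = 1.0
        for n in nt_leaves(rhs):
            result *= inside(n)
        return result

    inside(start)

    order, seen = [], set()              # reverse postorder = topological order
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for rhs, _ in by_lhs[n]:
            for child in nt_leaves(rhs):
                visit(child)
        order.append(n)
    visit(start)

    alpha = {n: 0.0 for n in beta}
    alpha[start] = 1.0                   # alpha(S) = 1 by definition
    for n in reversed(order):            # a parent's alpha is final before its children's
        for rhs, w in by_lhs[n]:
            leaves = nt_leaves(rhs)
            for i, child in enumerate(leaves):
                if child == start:
                    continue
                contrib = w * alpha[n]
                for j, sibling in enumerate(leaves):
                    if j != i:
                        contrib *= beta[sibling]    # sibling nonterminals
                alpha[child] += contrib

    gamma = {(lhs, k): alpha[lhs] * w * inside_rhs(rhs)
             for lhs, rules in by_lhs.items()
             for k, (rhs, w) in enumerate(rules)}
    return beta, alpha, gamma
```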
{
"text": "Estimation-Maximization training (Dempster, Laird, and Rubin, 1977) works on the principle that the corpus likelihood can be maximized subject to some normalization constraint on the parameters by repeatedly (1) estimating the expectation of decisions taken for all possible ways of generating the training corpus given the current parameters, accumulating parameter counts, and (2) maximizing by assigning the counts to the parameters and renormalizing. Each iteration is guaranteed to increase the likelihood until a local maximum is reached.",
"cite_spans": [
{
"start": 33,
"end": 67,
"text": "(Dempster, Laird, and Rubin, 1977)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EM Training",
"sec_num": "7"
},
{
"text": "Algorithm 2 implements EM xR training, repeatedly computing inside-outside weights (using fixed transducer derivation wRTGs for each input/output tree pair) to efficiently sum each parameter contribution to likelihood over all derivations. Each EM iteration takes time linear in the size of the transducer and linear in the size of the derivation tree grammars for the training examples ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EM Training",
"sec_num": "7"
},
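As a rough illustration of the EM loop just described (a sketch under our own interface assumptions, not Algorithm 2 itself), the driver below takes the E-step as a callable: in the paper that step is the inside-outside computation over each example's derivation wRTG, and normalization_group plays the role of Z, grouping the rules whose weights must sum to one.

```python
import math
from collections import defaultdict

def em_train(rules, examples, e_step, normalization_group,
             priors=None, max_iterations=50, min_rel_change=1e-4):
    """rules: rule identifiers; examples: list of (example, example_weight);
    e_step(example, weights) -> (log_likelihood, {rule: expected_count})."""
    rules = list(rules)
    weights = {r: 1.0 for r in rules}            # crude uniform initialization
    last_ll = -math.inf
    for _ in range(max_iterations):
        counts = {r: (priors or {}).get(r, 0.0) for r in rules}   # Dirichlet-style priors
        log_likelihood = 0.0
        for example, example_weight in examples:                  # Estimate
            ll, expected = e_step(example, weights)
            log_likelihood += example_weight * ll
            for r, c in expected.items():
                counts[r] += example_weight * c
        totals = defaultdict(float)                                # Maximize: renormalize
        for r in rules:
            totals[normalization_group(r)] += counts[r]
        for r in rules:
            z = totals[normalization_group(r)]
            if z > 0:
                weights[r] = counts[r] / z
        if last_ll > -math.inf:
            delta = (log_likelihood - last_ll) / abs(log_likelihood)
            if delta < min_rel_change:
                break                                              # converged
        last_ll = log_likelihood
    return weights
```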
{
"text": "We now turn to tree-to-string transducers (xRS). In the automata literature, these were first called generalized syntax-directed translations (Aho and Ullman, 1971 ) and used to specify compilers. Tree-to-string transducers have also been applied to machine translation (Yamada and Knight, 2001; Eisner, 2003) .",
"cite_spans": [
{
"start": 142,
"end": 163,
"text": "(Aho and Ullman, 1971",
"ref_id": "BIBREF0"
},
{
"start": 270,
"end": 295,
"text": "(Yamada and Knight, 2001;",
"ref_id": "BIBREF27"
},
{
"start": 296,
"end": 309,
"text": "Eisner, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-String Transducers (xRS)",
"sec_num": "8"
},
{
"text": "We give an explicit tree-to-string transducer example in the next section. Formally, a weighted extended-lhs root-to-frontier tree-to-string transducer X is a quintuple (\u03a3, \u2206, Q, Q i , R) where \u03a3 is the input alphabet, and \u2206 is the output alphabet, Q is a finite set of states, Q i \u2208 Q is the initial (or start, or root) state, and R \u2286 Q \u00d7 XRP AT \u03a3 \u00d7 (\u2206 \u222a (Q \u00d7 paths)) \u22c6 \u00d7 R + are a finite set of weighted transformation rules, written (q, pattern) \u2192 w rhs. A rule says that to transform (with weight w) an input subtree matching pattern while in state q, replace it by the string of rhs with its nonterminal (Q \u00d7 paths) letters replaced by their (recursive) transformation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-String Transducers (xRS)",
"sec_num": "8"
},
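A tiny illustrative sketch (rules made up for the Section 1 reordering example, not taken from the paper) of how an xRS rhs is interpreted: it is a sequence whose elements are either output words or (state, input path) nonterminals; the lhs pattern check that would license the deep paths is elided here.

```python
# Illustrative xRS application: rhs is a list of output words and (state, path) items.

def sub(t, path):
    for i in path:
        t = t[1][i - 1]
    return t

XRS_RULES = {   # (state, input root label) -> rhs; the pattern check is elided
    ("q", "S"):      [("qv", (2, 1)), ("qpro", (1,)), ("qnp", (2, 2))],
    ("qpro", "PRO"): ["PRO"],
    ("qv", "V"):     ["V"],
    ("qnp", "NP"):   ["NP"],
}

def transduce(state, t):
    out = []
    for item in XRS_RULES[(state, t[0])]:
        if isinstance(item, tuple):            # nonterminal: recurse on a descendant
            next_state, path = item
            out.extend(transduce(next_state, sub(t, path)))
        else:                                  # output word
            out.append(item)
    return out

english = ("S", [("PRO", []), ("VP", [("V", []), ("NP", [])])])
print(" ".join(transduce("q", english)))       # "V PRO NP"
```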
{
"text": "xRS is the same as xR, except that the rhs are strings containing some nonterminals instead of trees containing nonterminal leaves (so the intermediate derivation objects weighted tree pairs T \u2208 T \u03a3 \u00d7 T \u2206 \u00d7 R + , normalization function Z({count r | r \u2208 R}, r \u2032 \u2208 R), minimum relative log-likelihood change for convergence \u01eb \u2208 R + , maximum number of iterations maxit \u2208 N, and prior counts (for a so-called Dirichlet prior) {prior r | r \u2208 R} for smoothing each rule. Output: New rule weights W \u2261 {w r | r \u2208 R}. begin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-String Transducers (xRS)",
"sec_num": "8"
},
{
"text": "for (i, o, w) \u2208 T do d i,o \u2190 DERIV(X, i, o)//Alg. 1 if d i,o = f alse then T \u2190 T \u2212 {(i, o, w)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-String Transducers (xRS)",
"sec_num": "8"
},
{
"text": "warn(more rules are needed to explain (i,o)) compute inside/outside weights for d i,o and remove all useless nonterminals n whose",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-String Transducers (xRS)",
"sec_num": "8"
},
{
"text": "\u03b2 di,o (n) = 0 or \u03b1 di,o (n) = 0 itno \u2190 0, lastL \u2190 \u2212\u221e, \u03b4 \u2190 \u01eb for r = (q, pat, rhs, w) \u2208 R do w r \u2190 w while \u03b4 \u2265 \u01eb \u2227 itno < maxit do for r \u2208 R do count r \u2190 prior r L \u2190 0 for (i, o, w example ) \u2208 T //Estimate do let D \u2261 d i,o \u2261 (R, N, S, P ) compute \u03b1 D , \u03b2 D using latest W \u2261 {w r | r \u2208 R} //see Section 6 for prod = (n, rhs, w) \u2208 P do \u03b3 D (prod) \u2190 \u03b1 D (n) \u2022 w \u2022 \u03b2 D (rhs) let rule \u2261 label rhs (()) count rule \u2190 count rule +w example \u2022 \u03b3D (prod) \u03b2D(S) L \u2190 L + log \u03b2 D (S) \u2022 w example for r = (q, pattern, rhs, w) \u2208 R //Maximize do w r \u2190 count r Z({count r |r \u2208 R}, r) //e.g.Z((q, a, b, c)) \u2261 r=(q,d,e,f )\u2208R count r \u03b4 \u2190 L \u2212 lastL |L| lastL \u2190 L, itno \u2190 itno + 1 end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-String Transducers (xRS)",
"sec_num": "8"
},
{
"text": "are strings containing state-marked input subtrees). We have developed an xRS training procedure similar to the xR procedure, with extra computational expense to consider how different productions might map to different spans of the output string. Space limitations prohibit a detailed description; we refer the reader to a longer version of this paper (submitted). We note that this algorithm subsumes normal inside-outside training of PCFG on strings (Lari and Young, 1990 ), since we can always fix the input tree to some constant for all training examples.",
"cite_spans": [
{
"start": 453,
"end": 474,
"text": "(Lari and Young, 1990",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-String Transducers (xRS)",
"sec_num": "8"
},
{
"text": "It is possible to cast many current probabilistic natural language models as R-type tree transducers. In this section, we implement the translation model of (Yamada and Knight, 2001 ). Their generative model provides a formula for P(Japanese string | English tree), in terms of individual parameters, and their appendix gives special EM re-estimation formulae for maximizing the product of these conditional probabilities across the whole tree/string corpus.",
"cite_spans": [
{
"start": 157,
"end": 181,
"text": "(Yamada and Knight, 2001",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "We now build a trainable xRS tree-to-string transducer that embodies the same P(Japanese string | English tree). First, we need start productions like these, where q is the start state:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "-q x:S \u2192 q.TOP.S x -q x:VP \u2192 q.TOP.VP x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "These set up states like q.TOP.S, which means \"translate this tree, whose root is S.\" Then every q.parent.child pair gets its own set of three insert-function-word productions, e.g.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "-q.TOP.S x \u2192 i x, r x -q.TOP.S x \u2192 r x, i x -q.TOP.S x \u2192 r x -q.NP.NN x \u2192 i x, r x -q.NP.NN x \u2192 r x, i x -q.NP.NN x \u2192 r x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "State i means \"produce a Japanese function word out of thin air.\" We include an i production for every Japanese word in the vocabulary, e.g.:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "-i x \u2192 de -i x \u2192 kuruma -i x \u2192 wa",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "State r means \"re-order my children and then recurse.\" For internal nodes, we include a production for every parent/child-sequence and every permutation thereof, e.g.: The rhs sends the child subtrees back to state q for recursive processing. However, for English leaf nodes, we instead transition to a different state t, so as to prohibit any subsequent Japanese function word insertion: State t means \"translate this word,\" and we have a production for every pair of co-occurring English and Japanese words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "-t car \u2192 kuruma -t car \u2192 wa -t car \u2192 *e* This follows (Yamada and Knight, 2001) in also allowing English words to disappear, or translate to epsilon.",
"cite_spans": [
{
"start": 54,
"end": 79,
"text": "(Yamada and Knight, 2001)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "Every production in the xRS transducer has an associated weight and corresponds to exactly one of the model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "There are several benefits to this xRS formulation. First, it clarifies the model, in the same way that (Knight and Al-Onaizan, 1998; Kumar and Byrne, 2003) elucidate other machine translation models in easily-grasped FST terms. Second, the model can be trained with generic, off-the-shelf tools-versus the alternative of working out model-specific re-estimation formulae and implementing custom training software. Third, we can easily extend the model in interesting ways. For example, we can add productions for multi-level and lexical re-ordering: We can also eliminate many epsilon word-translation rules in favor of more syntactically-controlled ones, e.g.:",
"cite_spans": [
{
"start": 104,
"end": 133,
"text": "(Knight and Al-Onaizan, 1998;",
"ref_id": "BIBREF16"
},
{
"start": 134,
"end": 156,
"text": "Kumar and Byrne, 2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "-r NP(DT(the),x0:NN) \u2192 q x0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "We can make many such changes without modifying the training procedure, as long as we stick to tree automata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example",
"sec_num": "9"
},
{
"text": "Tree substitution grammars or TSG (Schabes, 1990) are equivalent to regular tree grammars. xR transducers are similar to (weighted) Synchronous TSG, except that xR can copy input trees (and transform the copies differently), but does not model deleted input subtrees. (Eisner, 2003) discusses training for Synchronous TSG. Our training algorithm is a generalization of forwardbackward EM training for finite-state (string) transducers, which is in turn a generalization of the original forwardbackward algorithm for Hidden Markov Models.",
"cite_spans": [
{
"start": 34,
"end": 49,
"text": "(Schabes, 1990)",
"ref_id": "BIBREF23"
},
{
"start": 268,
"end": 282,
"text": "(Eisner, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "10"
},
{
"text": "() < lex (a), (a1) < lex (a2) iff a1 < a2, (a1) \u2022 b1 < lex (a2) \u2022 b2 iff a1 < a2 \u2228 (a1 = a2 \u2227 b1 < lex b2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to Daniel Gildea and Kenji Yamada for comments on a draft of this paper, and to David McAllester for helping us connect into previous work in automata theory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "11"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Translations of a context-free grammar",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1971,
"venue": "Information and Control",
"volume": "19",
"issue": "",
"pages": "439--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aho, A. V. and J. D. Ullman. 1971. Translations of a context-free grammar. Information and Control, 19:439-475.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning dependency translation models as collections of finite state head transducers",
"authors": [
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Shona",
"middle": [],
"last": "Douglas",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "1",
"pages": "45--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alshawi, Hiyan, Srinivas Bangalore, and Shona Douglas. 2000. Learning de- pendency translation models as collections of finite state head transducers. Computational Linguistics, 26(1):45-60.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Trainable grammars for speech recognition",
"authors": [
{
"first": "J",
"middle": [
"K"
],
"last": "Baker",
"suffix": ""
}
],
"year": 1979,
"venue": "Speech Communication Papers for the 97th Meeting of the",
"volume": "",
"issue": "",
"pages": "547--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, J. K. 1979. Trainable grammars for speech recognition. In D. Klatt and J. Wolf, editors, Speech Communication Papers for the 97th Meeting of the Acoustical Society of America. Boston, MA, pages 547-550.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Exploiting a probabilistic hierarchical model for generation",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bangalore, Srinivas and Owen Rambow. 2000. Exploiting a probabilistic hierar- chical model for generation. In Proc. COLING.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An inequality with application to statistical estimation for probabilistic functions of Markov processes and to a model for ecology",
"authors": [
{
"first": "L",
"middle": [
"E"
],
"last": "Baum",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Eagon",
"suffix": ""
}
],
"year": 1967,
"venue": "Bulletin of the American Mathematicians Society",
"volume": "73",
"issue": "",
"pages": "360--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baum, L. E. and J. A. Eagon. 1967. An inequality with application to statistical estimation for probabilistic functions of Markov processes and to a model for ecology. Bulletin of the American Mathematicians Society, 73:360-363.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Immediate-head parsing for language models",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, Eugene. 2001. Immediate-head parsing for language models. In Proc. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Structured language modeling",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Speech and Language",
"volume": "14",
"issue": "4",
"pages": "283--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelba, C. and F. Jelinek. 2000. Structured language modeling. Computer Speech and Language, 14(4):283-332.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 1997. Three generative, lexicalised models for statistical pars- ing. In Proc. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tree automata techniques and applications. Available on www.grappa.univ-lille3.fr/tata. release October",
"authors": [
{
"first": "H",
"middle": [],
"last": "Comon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dauchet",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gilleron",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jacquemard",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lugiez",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tison",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tommasi",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Comon, H., M. Dauchet, R. Gilleron, F. Jacquemard, D. Lugiez, S. Tison, and M. Tommasi. 1997. Tree automata techniques and applications. Available on www.grappa.univ-lille3.fr/tata. release October, 1st 2002.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An overview of Amalgam, a machine-learned generation module",
"authors": [
{
"first": "",
"middle": [],
"last": "Corston-Oliver",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"K"
],
"last": "Gamon",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Ringger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. IWNLG",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corston-Oliver, Simon, Michael Gamon, Eric K. Ringger, and Robert Moore. 2002. An overview of Amalgam, a machine-learned generation module. In Proc. IWNLG.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society, Series B",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dempster, A. P., N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning non-isomorphic tree mappings for machine translation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ACL (companion volume)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eisner, Jason. 2003. Learning non-isomorphic tree mappings for machine trans- lation. In Proc. ACL (companion volume).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bottom-up and top-down tree transformations-a comparison",
"authors": [
{
"first": "J",
"middle": [],
"last": "Engelfriet",
"suffix": ""
}
],
"year": 1975,
"venue": "Math. Systems Theory",
"volume": "9",
"issue": "3",
"pages": "198--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Engelfriet, J. 1975. Bottom-up and top-down tree transformations-a compari- son. Math. Systems Theory, 9(3):198-231.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Loosely tree-based alignment for machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "G\u00e9cseg",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steinby",
"suffix": ""
}
],
"year": 1984,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G\u00e9cseg, F. and M. Steinby. 1984. Tree Automata. Akad\u00e9miai Kiad\u00f3, Budapest. Gildea, Daniel. 2003. Loosely tree-based alignment for machine translation. In Proc. ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Tree-adjoining grammars",
"authors": [
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1997,
"venue": "Handbook of Formal Languages",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, A. and Y. Schabes. 1997. Tree-adjoining grammars. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages (Vol. 3). Springer, NY.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, Dan and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proc. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Translation with finite-state devices",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knight, K. and Y. Al-Onaizan. 1998. Translation with finite-state devices. In Proc. AMTA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Summarization beyond sentence extractiona probabilistic approach to sentence compression",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2002,
"venue": "Artificial Intelligence",
"volume": "139",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knight, K. and D. Marcu. 2002. Summarization beyond sentence extraction- a probabilistic approach to sentence compression. Artificial Intelligence, 139(1).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A weighted finite state transducer implementation of the alignment template model for statistical machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, S. and W. Byrne. 2003. A weighted finite state transducer implemen- tation of the alignment template model for statistical machine translation. In Proceedings of HLT-NAACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generation that exploits corpus-based statistical knowledge",
"authors": [
{
"first": "I",
"middle": [
";"
],
"last": "Langkilde",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. NAACL. Langkilde, I",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Langkilde, I. 2000. Forest-based statistical sentence generation. In Proc. NAACL. Langkilde, I. and K. Knight. 1998. Generation that exploits corpus-based statisti- cal knowledge. In Proc. ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4. Nederhof, Mark-Jan and Giorgio Satta",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lari, K. and S. J. Young. 1990. The estimation of stochastic context-free gram- mars using the inside-outside algorithm. Computer Speech and Language, 4. Nederhof, Mark-Jan and Giorgio Satta. 2002. Parsing non-recursive CFGs. In Proc. ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Syntax-based alignment of multiple translations extracting paraphrases and generating new sentences",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, Bo, Kevin Knight, and Daniel Marcu. 2003. Syntax-based alignment of multiple translations extracting paraphrases and generating new sentences. In Proc. HLT/NAACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mappings and grammars on trees",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Rounds",
"suffix": ""
}
],
"year": 1970,
"venue": "Mathematical Systems Theory",
"volume": "4",
"issue": "3",
"pages": "257--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rounds, William C. 1970. Mappings and grammars on trees. Mathematical Systems Theory, 4(3):257-287.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mathematical and Computational Aspects of Lexicalized Grammars",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schabes, Yves. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generalized 2 sequential machine maps",
"authors": [
{
"first": "J",
"middle": [
"W"
],
"last": "Thatcher",
"suffix": ""
}
],
"year": 1970,
"venue": "J. Comput. System Sci",
"volume": "4",
"issue": "",
"pages": "339--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thatcher, J. W. 1970. Generalized 2 sequential machine maps. J. Comput. System Sci., 4:339-367.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Viterbi",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Trans. Information Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viterbi, A. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Information Theory, IT-13.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Dekai. 1997. Stochastic inversion transduction grammars and bilingual pars- ing of parallel corpora. Computational Linguistics, 23(3):377-404.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamada, Kenji and Kevin Knight. 2001. A syntax-based statistical translation model. In Proc. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A sample R tree transducer that takes the derivative of its input.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "A sample weighted regular tree grammar (wRTG)",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Derivation trees for an R tree transducer.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "xR transducer X = (\u03a3, \u2206, Q, Q i , R) and observed tree pairI \u2208 T \u03a3 , O \u2208 T \u2206 . Output: derivation wRTG G = (R, N \u2286 Q \u00d7 paths I \u00d7 paths O ,S, P ) generating all weighted derivation trees for X that produce O from I. Returns f alse instead if there are no such trees. begin S \u2190 (Q i , (), ()), N \u2190 \u2205, P \u2190 \u2205 if PRODUCE I,O (S) then return (R, N, S, P ) else return f alse end memoized PRODUCE I,O (q, i, o) returns boolean \u2261 begin anyrule? \u2190 f alse for r = (q, pattern, rhs, w) \u2208 R : pattern(I",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": ". The size of the derivation trees is at worst O(|Q|\u2022|I|\u2022|O|\u2022|R|). For a corpus of K examples with average input/output size M , an iteration takes (at worst) O(|Q| \u2022 |R| \u2022 K \u2022 M 2 ) time-quadratic, like the forward-backward algorithm.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "TRAIN Input: xR transducer X = (\u03a3, \u2206, Q, Q d , R), observed",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "-r NP(x0:CD, x1:NN) \u2192 q.NP.CD x0, q.NP.NN x1 -r NP(x0:CD, x1:NN) \u2192 q.NP.NN x1, q.NP.CD x0",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "-r NN(x0:car) \u2192 t x0 -r CC(x0:and) \u2192 t x0",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"text": "-r NP(x0:NP, PP(IN(of), x1:NP)) \u2192 q x1, no, q x0We can add productions for phrasal translations:-r NP(JJ(big), NN(cars)) \u2192 ooki, kurumaThis can now include crucial non-constituent phrasal translations:-r S(NP(PRO(there),VP(VB(are), x0:NP) \u2192 q x0, ga, arimasu",
"uris": null,
"num": null,
"type_str": "figure"
}
}
}
}