|
{ |
|
"paper_id": "N19-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:00:14.952385Z" |
|
}, |
|
"title": "Neural Finite State Transducers: Beyond Rational Relations", |
|
"authors": [ |
|
{ |
|
"first": "Chu-Cheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University Baltimore", |
|
"location": { |
|
"postCode": "21218", |
|
"region": "MD", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tsinghua University", |
|
"location": { |
|
"settlement": "Beijing", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gormley", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": { |
|
"postCode": "21218", |
|
"settlement": "Baltimore", |
|
"region": "MD", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We introduce neural finite state transducers (NFSTs), a family of string transduction models defining joint and conditional probability distributions over pairs of strings. The probability of a string pair is obtained by marginalizing over all its accepting paths in a finite state transducer. In contrast to ordinary weighted FSTs, however, each path is scored using an arbitrary function such as a recurrent neural network, which breaks the usual conditional independence assumption (Markov property). NFSTs are more powerful than previous finite-state models with neural features (Rastogi et al., 2016). We present training and inference algorithms for locally and globally normalized variants of NFSTs. In experiments on different transduction tasks, they compete favorably against seq2seq models while offering interpretable paths that correspond to hard monotonic alignments.", |
|
"pdf_parse": { |
|
"paper_id": "N19-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We introduce neural finite state transducers (NFSTs), a family of string transduction models defining joint and conditional probability distributions over pairs of strings. The probability of a string pair is obtained by marginalizing over all its accepting paths in a finite state transducer. In contrast to ordinary weighted FSTs, however, each path is scored using an arbitrary function such as a recurrent neural network, which breaks the usual conditional independence assumption (Markov property). NFSTs are more powerful than previous finite-state models with neural features (Rastogi et al., 2016). We present training and inference algorithms for locally and globally normalized variants of NFSTs. In experiments on different transduction tasks, they compete favorably against seq2seq models while offering interpretable paths that correspond to hard monotonic alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Weighted finite state transducers (WFSTs) have been used for decades to analyze, align, and transduce strings in language and speech processing (Roche and Schabes, 1997; Mohri et al., 2008) . They form a family of efficient, interpretable models with wellstudied theory. A WFST describes a function that maps each string pair (x, y) to a weight-often a real number representing p(x, y) or p(y | x). The WFST is a labeled graph, in which each path a represents a sequence of operations that describes how some x and some y could be jointly generated, or how x could be edited into y. Multiple paths for the same (x, y) pair correspond to different analyses (labeled alignments) of that pair.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 169, |
|
"text": "(Roche and Schabes, 1997;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 189, |
|
"text": "Mohri et al., 2008)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, WFSTs can only model certain functions, known as the rational relations (Berstel and Reutenauer, 1988) .The weight of a path is simply the product of the weights on its arcs. This means Figure 1 : A marked finite-state transducer T . Each arc in T is associated with input and output substrings, listed above the arcs in the figure. The arcs are not labeled with weights as in WFSTs. Rather, each arc is labeled with a sequence of marks (shown in brown) that featurize its qualities. The neural scoring model scores a path by scoring each mark in the context of all marks on the entire path. The example shown here is from the G2P application of \u00a74.1; for space, only a few arcs are shown. \u03b5 represents the empty string. that in a random path of the form a b c, the two subpaths are conditionally independent given their common state b: a Markov property.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 111, |
|
"text": "(Berstel and Reutenauer, 1988)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 203, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we propose neural finite state transducers (NFSTs), in which the weight of each path is instead given by some sort of neural network, such as an RNN. Thus, the weight of an arc can depend on the context in which the arc is used. By abandoning the Markov property, we lose exact dynamic programming algorithms, but we gain expressivity: the neural network can capture dependencies among the operations along a path. For example, the RNN might give higher weight to a path if it is \"internally consistent\": it might thus prefer to transcribe a speaker's utterance with a path that maps similar sounds in similar contexts to similar phonemes, thereby adapting to the speaker's accent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consider a finite-state transducer T as in Figure 1 (see Appendix A for background). Using the composition operator \u2022, we can obtain a new FST, x \u2022 T , whose accepting paths correspond to the accepting paths of T that have input string x. Similarly, the accepting paths of T \u2022 y correspond to the accepting paths of T that have output string y. Finally, x \u2022 T \u2022 y extracts the paths that have both properties. We define a joint probability distribution over (x, y) pairs by marginalizing over those paths:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 51, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "p(x, y) = a\u2208x\u2022T \u2022y p(a) = 1 Z(T ) a\u2208x\u2022T \u2022\u1ef9 p(a) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "wherep(a) is the weight of path a and Z(T ) = a\u2208Tp (a) is a normalization constant. We definep(a) exp G \u03b8 (a) with G \u03b8 (a) being some parametric scoring function. In our experiments, we will adopt a fairly simple left-to-right RNN architecture ( \u00a72.2), but one could easily substitute fancier architectures. We will also consider defining G \u03b8 by a locally normalized RNN that ensures Z(T ) = 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In short, we use the finite-state transducer T to compactly define a set of possible paths a. The number of paths may be exponential in the size of T , or infinite if T is cyclic. However, in contrast to WFSTs, we abandon this combinatorial structure in favor of neural nets when defining the probability distribution over a. In the resulting marginal distribution p(x, y) given in equation 1, the path a that aligns x and y is a latent variable. This is also true of the resulting conditional distribution p(y | x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We explore training and inference algorithms for various classes of NFST models ( \u00a73). Classical WFSTs (Mohri et al., 2008) and BiRNN-WFSTs (Rastogi et al., 2016) use restricted scoring functions and so admit exact dynamic programming algorithms. For general NFSTs, however, we must resort to approximate computation of the model's training gradient, marginal probabilities, and predictions. In this paper, we will use sequential importance sampling methods (Lin and Eisner, 2018) , leaving variational approximation methods to future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 123, |
|
"text": "(Mohri et al., 2008)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 128, |
|
"end": 162, |
|
"text": "BiRNN-WFSTs (Rastogi et al., 2016)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 480, |
|
"text": "(Lin and Eisner, 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Defining models using FSTs has several benefits:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Output-sensitive encoding Currently popular models of p(y | x) used in machine translation and morphology include seq2seq (Sutskever et al., 2014) , seq2seq with attention (Bahdanau et al., 2015; Luong et al., 2015) , the Transformer (Vaswani et al., 2017) . These models first encode x as a vector or sequence of vectors, and then condition the generation of y on this encoding. The vector is determined from x only. This is also the case in the BiRNN-WFST (Rastogi et al., 2016) , a previous finite-state model to which we compare. By contrast, in our NFST, the state of the RNN as it reads and transduces the second half of x is influenced by the first halves of both x and y and their alignment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 146, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 172, |
|
"end": 195, |
|
"text": "(Bahdanau et al., 2015;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 215, |
|
"text": "Luong et al., 2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 256, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 480, |
|
"text": "(Rastogi et al., 2016)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Inductive bias Typically, a FST is constructed with domain knowledge (possibly by compiling a regular expression), so that its states reflect interpretable properties such as syllable boundaries or linguistic features. Indeed, we will show below how to make these properties explicit by \"marking\" the FST arcs. The NFST's path scoring function then sees these marks and can learn to take them into account. The NFST also inherits any hard constraints from the FST: if the FST omits all (x, y) paths for some \"illegal\" x, y, then p(x, y) = 0 for any parameter vector \u03b8 (a \"structural zero\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Interpretability Like a WFST, an NFST can \"explain\" why it mapped x to y in terms of a latent path a, which specifies a hard monotonic labeled alignment. The posterior distribution p(a | x, y) specifies which paths a are the best explanations (e.g., Table 5 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 257, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We conduct experiments on three tasks: grapheme-to-phoneme, phoneme-to-grapheme, and action-to-command (Bastings et al., 2018) . Our results on these datasets show that our best models can improve over neural seq2seq and previously proposed hard alignment models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 126, |
|
"text": "(Bastings et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "An NFST is a pair (T , G \u03b8 ), where T is an unweighted FST with accepting paths A and G \u03b8 : A \u2192 R is a function that scores these paths. As explained earlier, we then refer top(a) = exp G \u03b8 (a) as the weight of path a \u2208 A. A weighted relation between input and output strings is given byp(x, y), which is defined to be the total weight of all paths with input string x \u2208 \u03a3 * and output string y \u2208 \u2206 * , where where \u03a3 and \u2206 are the input and output alphabets of T . The real parameter vector \u03b8 can be adjusted to obtain different weighted relations. We can normalizep to get a probability distribution as shown in equation 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neuralized FSTs", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Weighted FST. A WFST over the (+, \u00d7) semiring can be regarded as the special case in which", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A basic scoring architecture", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "G \u03b8 (a) |a| t=1 g \u03b8 (a t )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A basic scoring architecture", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": ". This is a sum of scores assigned to the arcs in a = a 1 a 2 \u2022 \u2022 \u2022 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A basic scoring architecture", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Marked FST. Our innovation is to allow the arcs' scores to depend on their context in the path. Now \u03b8 no longer associates a fixed score with each arc. Rather, we assume that each arc a in the FST comes labeled with a sequence of marks from a mark alphabet \u2126, as illustrated in Figure 1 . The marks reflect the FST constructor's domain knowledge about what arc a does (see \u00a74.2 below). We now define G \u03b8 (a) = G \u03b8 (\u03c9(a)), where \u03c9(a) = \u03c9(a 1 )\u03c9(a 2 ) \u2022 \u2022 \u2022 \u2208 \u2126 * is the concatenated sequence of marks from the arcs along path a.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 286, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A basic scoring architecture", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "It is sometimes helpful to divide marks into different classes. An arc can be regarded as a possible \"edit\" that aligns an input substring with an output substring in the context of transitioning from one FST state to another. The arc's input marks describe its input substring, its output marks describe its output substring, and the remaining marks may describe other properties of the arc's aligned input-output pair or the states that it connects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A basic scoring architecture", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Recall that an FST encodes domain knowledge. Its paths represent alignments between input and output strings, where each alignment specifies a segmentation of x and y into substrings labeled with FST states. Decorating the arcs with marks furnishes the path scoring model with domainspecific information about the alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A basic scoring architecture", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "If \u03b8 merely associated a fixed score with each mark, then the marked FST would be no more powerful than the WFST. To obtain contextual mark scores as desired, one simple architecture is a recurrent neural network:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "G \u03b8 (\u03c9) |\u03c9| t=1 g \u03b8 (s t\u22121 , \u03c9 t )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "s t = f \u03b8 (s t\u22121 , \u03c9 t ), with s 0 = 0 (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where s t\u22121 \u2208 R d is the hidden state vector of the network after reading", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u03c9 1 \u2022 \u2022 \u2022 \u03c9 t\u22121 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The g \u03b8 function defines the score of reading \u03c9 t in this left context, and f \u03b8 defines how doing so updates the state.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our experiments, we chose f \u03b8 to be the GRU state update function (Cho et al., 2014) . We defined g \u03b8 (s, \u03c9 t ) (Ws + b) emb(\u03c9 t ). The parameter vector \u03b8 specifies the GRU parameters, W, b, and the mark embeddings emb(\u03c9).", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 87, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
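
{

"text": "The following is a minimal sketch (our own illustration, not the authors' released code) of the left-to-right mark-scoring RNN described above: a GRU cell plays the role of f_θ, and g_θ(s, ω_t) = (Ws + b) · emb(ω_t) scores each mark in its left context. All class, function, and variable names here are ours. In PyTorch:\n\nimport torch\nimport torch.nn as nn\n\nclass MarkScorer(nn.Module):\n    def __init__(self, num_marks, d):\n        super().__init__()\n        self.emb = nn.Embedding(num_marks, d)    # emb(omega)\n        self.cell = nn.GRUCell(d, d)             # f_theta: GRU state update (eq. 3)\n        self.W = nn.Linear(d, d)                 # used for g_theta(s, omega) = (W s + b) . emb(omega)\n\n    def forward(self, marks):                    # marks: 1-D LongTensor of mark ids along one path\n        s = torch.zeros(1, self.cell.hidden_size)    # s_0 = 0\n        total = torch.zeros(())\n        for t in range(marks.size(0)):\n            e = self.emb(marks[t]).unsqueeze(0)\n            total = total + (self.W(s) * e).sum()    # contextual score of mark t (eq. 2)\n            s = self.cell(e, s)                      # state update (eq. 3)\n        return total                                 # G_theta(omega); the path weight is exp of this",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "RNN scoring.",

"sec_num": null

},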
|
{ |
|
"text": "One could easily substitute much fancier architectures, such as a stacked BiLSTM with attention (Tilk and Alum\u00e4e, 2016) , or a Transformer (Vaswani et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 119, |
|
"text": "(Tilk and Alum\u00e4e, 2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 161, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RNN scoring.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In hopes of improving the inductive bias of the learner, we partitioned the hidden state vector into three sub-vectors:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partitioned hidden vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "s t = [s a t ; s x t ; s y t ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partitioned hidden vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The mark scoring function f \u03b8 (s t\u22121 , \u03c9 t ) was as before, but we restricted the form of g \u03b8 , the state update function. s a t encodes all past marks and depends on the full hidden state so far:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partitioned hidden vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "s a t = g a \u03b8 (s t\u22121 , \u03c9 t ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partitioned hidden vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "However, we make s x t encode only the sequence of past input marks, ignoring all others. Thus,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partitioned hidden vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "s x t = g x \u03b8 (s x t\u22121 , \u03c9 t ) if \u03c9 t is an input mark, and s x t = s x t\u22121 otherwise. Symmetrically, s y", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partitioned hidden vectors", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "t encodes only the sequence of past output marks. This architecture is somewhat like Dyer et al. (2016) , which also uses different sub-vectors to keep track of different aspects of the history.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 103, |
|
"text": "Dyer et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Partitioned hidden vectors", |
|
"sec_num": "2.3" |
|
}, |
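
{

"text": "A minimal sketch (ours, with assumed update functions) of the partitioned hidden state: s^a is updated from the full previous state, while s^x and s^y are updated only when the current mark is an input or output mark, respectively. The tanh-linear form of f^a and the GRU cells for f^x and f^y are illustrative choices, not taken from the paper:\n\nimport torch\nimport torch.nn as nn\n\nclass PartitionedState(nn.Module):\n    def __init__(self, num_marks, d):\n        super().__init__()\n        self.emb = nn.Embedding(num_marks, d)\n        self.f_a = nn.Linear(3 * d + d, d)   # s^a_t = f^a(s_{t-1}, omega_t); sees the full state\n        self.cell_x = nn.GRUCell(d, d)       # s^x_t updated only on input marks\n        self.cell_y = nn.GRUCell(d, d)       # s^y_t updated only on output marks\n\n    def step(self, s_a, s_x, s_y, mark, is_input, is_output):\n        e = self.emb(mark).unsqueeze(0)\n        s_prev = torch.cat([s_a, s_x, s_y], dim=-1)               # s_{t-1} = [s^a; s^x; s^y]\n        s_a = torch.tanh(self.f_a(torch.cat([s_prev, e], dim=-1)))\n        if is_input:\n            s_x = self.cell_x(e, s_x)   # otherwise s^x_t = s^x_{t-1}\n        if is_output:\n            s_y = self.cell_y(e, s_y)   # otherwise s^y_t = s^y_{t-1}\n        return s_a, s_x, s_y",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Partitioned hidden vectors",

"sec_num": "2.3"

},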
|
{ |
|
"text": "A difficulty with the general model form in equation (1) is that the normalizing constant Z(T ) = a\u2208Tp (a) must sum over a large set of paths-in fact, an infinite set if T is cyclic. This sum may diverge for some values of the parameter vector \u03b8, which complicates training of the model (Dreyer, 2011) . Even if the sum is known to converge, it is in general intractable to compute it exactly. Thus, estimating the gradient of Z(T ) during training involves approximate sampling from the typically high-entropy distribution p(a). The resulting estimates are error-prone because the sample size tends to be too small and the approximate sampler is biased.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 301, |
|
"text": "(Dreyer, 2011)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "A standard solution in the WFST setting (e.g. Cotterell et al., 2014) is to use a locally normalized model, in which Z(T ) is guaranteed to be 1.1 The big summation over all paths a is replaced by small summations-which can be computed explicitlyover just the outgoing edges from a given state.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 69, |
|
"text": "Cotterell et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Formally, we define the unnormalized score of arc a i in the context of path a in the obvious way, by summing over the contextual scores of its marks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "g \u03b8 (a i ) k t=j+1 g \u03b8 (s t\u22121 , \u03c9 t )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "j = |\u03c9(a 1 ) \u2022 \u2022 \u2022 \u03c9(a i\u22121 )| and k = |\u03c9(a 1 ) \u2022 \u2022 \u2022 \u03c9(a i )|. Its normalized score is then g \u03b8,T (a i ) log expg \u03b8 (a i )/ a expg \u03b8 (a )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "where a ranges over all arcs in T (including a i itself) that emerge from the same state as a i does.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "We can now score the paths in T using", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "G \u03b8,T (a) = |a| i=1 g \u03b8,T (a i )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "This gives rise to a proper probability distribution p(a) p(a) = exp G \u03b8,T (a) over the paths of T . No global normalization constant is necessary. However, note that the scoring function now requires T as an extra subscript, because it is necessary when scoring a to identify the competitors in T of each arc a i . Thus, when p(x, y) is found as usual by summing up the probabilities of all paths in x \u2022 T \u2022 y, each path is still scored using its arcs' competitors from T . This means that each state in x \u2022 T \u2022 y must record the state in T from which it was derived.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
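
{

"text": "A minimal sketch (ours) of the locally normalized path score: each arc's unnormalized score is log-softmaxed against the unnormalized scores of all competitor arcs leaving the same state of T, and the normalized scores are summed along the path, so no global normalization constant is needed. How the competitor scores are obtained (with the same contextual scorer) is left abstract here:\n\nimport torch\n\ndef locally_normalized_path_score(arc_scores, competitor_scores):\n    # arc_scores[i]: unnormalized score of the i-th arc on the path (a scalar tensor)\n    # competitor_scores[i]: 1-D tensor of unnormalized scores of all arcs\n    # (including the i-th arc itself) that leave the same state of T\n    total = torch.zeros(())\n    for g_i, comps in zip(arc_scores, competitor_scores):\n        total = total + g_i - torch.logsumexp(comps, dim=0)   # one summand of eq. (5)\n    return total   # G_{theta,T}(a) = log p(a)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local normalization",

"sec_num": "2.4"

},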
|
{ |
|
"text": "3 Sampling, Training, and Decoding", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local normalization", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Many algorithms for working with probability distributions-including our training and decoding", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1Provided that every state in T is co-accessible, i.e., has a path to a final state. algorithms below-rely on conditional sampling. In general, we would like to sample a path of T given the knowledge that its input and output strings fall into sets X and Y respectively.2 If X and Y are regular languages, this is equivalent to defining", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "T = X \u2022 T \u2022 Y and sampling from p(a | T ) p(a) a \u2208T p(a ) ,", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Due to the nonlinearity of G \u03b8 , the denominator of equation 6is generally intractable. If T is cyclic, it cannot even be computed by brute-force enumeration. Thus, we fall back on normalized importance sampling, directly adopting the ideas of Lin and Eisner (2018) in our more general FST setting. We employ a proposal distribution q:", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 265, |
|
"text": "Lin and Eisner (2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(a | T ) = E a\u223cq [ p(a | T ) q(a) ],", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2248 M m=1p (a (m) ) q(a (m) ) \u2022\u1e90 \u2022 I(a = a (m) ) =p(a | T ), where\u1e90 = M m =1p (a (m ) ) q(a (m ) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", and q is a locally normalized distribution over paths a \u2208 T . In this paper we further parametrize q as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "q \u03c6 (a; T ) = T t=1 q t (a t | a 1...t\u22121 ; \u03c6, T ), (8) q t (a | a :t\u22121 ; \u03c6, T ) \u221d exp(g(s t\u22121 , a t ; \u03b8, T ) + C \u03c6 ), where C \u03c6 C(s t , X, Y, \u03c6) \u2208 R, s t f (s t\u22121 , \u03c9(a))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is a compatibility function that is typically modeled using a neural network. In this paper, one the following three cases are encountered:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 X = x, is a string, and Y = \u2206 * : in this case T = x \u2022 T . We let", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "C \u03c6 = C x (s t , RNN x (x, i, \u03c6); \u03c6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", where i is the length of the input prefix in a 1...t .a, RNN x (x, i, \u03c6) is the hidden state of the i-th position after reading x (not a nor \u03c9) backwards, and C x (\u2022, \u2022) is a feed-forward network that takes the concatenated vector of all arguments, and outputs a real scalar. We describe the parametrization of C x in Appendix C.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 X = \u03a3 * , and Y = y is a string: in this case T = T \u2022 y. We let C \u03c6 = C y (s t , RNN y (y, j, \u03c6); \u03c6), where j is the length of the output prefix in a 1...t .a, and RNN y , C y are similarly defined as in RNN x and C x .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 X and Y are both strings -X = x, Y = y: in this case we let", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "C \u03c6 = C xy (s t , RNN x (x, i, \u03c6), RNN y (y, j, \u03c6); \u03c6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Given a path prefix a :t\u22121 , q t (a | a :t\u22121 ; \u03c6, T ) is defined over arcs a such that a :t\u22121 .a is a valid path prefix in T . To optimize \u03c6 with regard to q \u03c6 , we follow (Lin and Eisner, 2018) and seek to find \u03c6 * = argmin \u03c6 KL[p||q \u03c6 ], wherep is the approximate distribution defined in equation 7, which is equivalent to maximizing the log-likelihood of q \u03c6 (a) when a is distributed according to the approximationp.", |
|
"cite_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 194, |
|
"text": "(Lin and Eisner, 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sampling from conditioned distributions with amortized inference", |
|
"sec_num": "3.1" |
|
}, |
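
{

"text": "A minimal sketch (our simplification; the helper functions and the stopping rule are assumptions) of drawing one path from the locally normalized proposal q_φ of equation (8): at each step the outgoing arcs of the current state of T' are scored, the compatibility term C_φ is added, and the next arc is sampled from the resulting softmax:\n\nimport torch\n\ndef sample_path(outgoing_arcs, arc_score, compat, start_state, is_final, max_len=100):\n    # outgoing_arcs(state) -> list of (arc, next_state) pairs in T'\n    # arc_score(prefix, arc) -> scalar tensor: the contextual model score of the arc\n    # compat(prefix, arc) -> scalar tensor: the compatibility term C_phi\n    state, prefix, logq = start_state, [], 0.0\n    for _ in range(max_len):\n        options = outgoing_arcs(state)\n        if not options:\n            break\n        logits = torch.stack([arc_score(prefix, a) + compat(prefix, a) for a, _ in options])\n        probs = torch.softmax(logits, dim=0)\n        idx = torch.multinomial(probs, 1).item()\n        logq += torch.log(probs[idx]).item()\n        arc, state = options[idx]\n        prefix.append(arc)\n        if is_final(state):\n            break\n    return prefix, logq   # the sampled path and log q(path), for use in importance weights",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sampling from conditioned distributions with amortized inference",

"sec_num": "3.1"

},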
|
{ |
|
"text": "In this paper, we consider joint training. The loss function of our model is defined as the negative log joint probability of string pair (x, y):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "L(x, y) = \u2212 log p(x, y) = \u2212 log a\u2208x\u2022T \u2022y p(a). (9)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Since p is an exponential family distribution, the gradients of L can be written as (Bishop, 2006) \u2207L", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 98, |
|
"text": "(Bishop, 2006)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(x, y) = \u2212E a\u223cp(\u2022|x\u2022T \u2022y) [\u2207 log p(a)], (10) where p(\u2022 | x \u2022 T \u2022 y) is a conditioned distribution over paths. Computing equation (10) requires sam- pling from p(\u2022 | x \u2022 T \u2022 y)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ", which, as we discuss in \u00a73.1, is often impractical. We therefore approximate it with", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2207 \u03b8 L(x, y) = \u2212E a\u223cp(\u2022|x\u2022T \u2022y) [\u2207 \u03b8 log p(a)] \u2248 \u2212E a\u223cp(\u2022|x\u2022T \u2022y) [\u2207 \u03b8 log p(a)] (11) = \u2212 M m=1 w (m) \u2207 \u03b8 G \u03b8 (a (m) ), (12)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where q is a proposal distribution parametrized as in equation 8 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "w (m) \u221d exp G \u03b8 (a (m) ) q(a (m) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": ", M m=1 w (m) = 1. Pseudocode for calculating equation 12is listed in Algorithm 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Algorithm 1 Compute approximate gradient for updating G \u03b8 Require:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "G \u03b8 : A \u2192 R is an NFST scoring func- tion, q is a distribution over paths, M \u2208 N is the sample size 1: function G -G (G \u03b8 , M , q) 2: for m in 1 . . . M do 3: a (m) \u223c q 4:w (m) \u2190 exp G \u03b8 (a (m) ) q(a) 5: end for 6:\u1e90 \u2190 M m=1w (m) 7: for m in 1 . . . M do 8: w (m) \u2190w (m) Z 9: end for 10: return \u2212 M m=1 w (m) \u2207 \u03b8 G \u03b8 (a (m) ) 11: end function", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training", |
|
"sec_num": "3.2" |
|
}, |
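
{

"text": "A minimal sketch (ours) of Algorithm 1 in PyTorch: the gradient is estimated with self-normalized importance sampling, treating the normalized weights w^(m) as constants so that the result matches equation (12). How the model and sampler are wrapped into the arguments below is an assumption of this sketch:\n\nimport torch\n\ndef approx_grad(score_fn, sample_path, log_q, M, params):\n    # score_fn(path) -> G_theta(path) as a differentiable scalar tensor\n    # sample_path() -> one path drawn from the proposal q\n    # log_q(path) -> log q(path) as a Python float\n    paths = [sample_path() for _ in range(M)]\n    scores = torch.stack([score_fn(a) for a in paths])   # G_theta(a^(m))\n    logq = torch.tensor([log_q(a) for a in paths])\n    logw = scores.detach() - logq                        # log of unnormalized weights\n    w = torch.softmax(logw, dim=0)                       # normalize so the weights sum to 1\n    surrogate = -(w * scores).sum()                      # its gradient matches eq. (12)\n    return torch.autograd.grad(surrogate, list(params), allow_unused=True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training",

"sec_num": "3.2"

},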
|
{ |
|
"text": "Besides finding good paths in a conditioned distribution as we discuss in \u00a73.1, we are also often interested in finding good output strings, which is conventionally referred to as the decoding problem, which we define to be finding the best output string", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y * argmax y\u2208L(Y ) p Y (y | T ), where p Y (y | T ) a\u2208T \u2022yp (a) a \u2208T p(a ) .", |
|
"eq_num": "(13)" |
|
} |
|
], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "y * argmax yPY (y | T ) is a consistent estimator of y * , which can directly be used to find the best string. However, making this estimate accurate might be expensive: it requires sampling many paths in the machine T , which is usually cyclic, and therefore has infinitely many more paths, than T \u2022 y k , which has finitely many paths when A is acyclic. On the other hand, for the task of finding the best string among a pool candidates, we do not need to compute (or approximate) the denominator in equation 13, since", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "y * = argmax y\u2208L(Y ) a\u2208T \u2022yp (a).", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As in the case for paths, the language L(Y ) is usually infinitely large. However given an output candidate y k \u2208 L \u2286 L(Y ), we can approximate the summation in equation (14) using importance sampling:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a\u2208T \u2022y kp (a) = E a\u223cq(\u2022|T \u2022y k ) [p (a) q(a | T \u2022 y k ) ],", |
|
"eq_num": "(15)" |
|
} |
|
], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Algorithm 2 Training procedure for G \u03b8 . See Appendix C.2 for implementation details.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "Require: (T, G_θ) is an NFST, D = {(x_1, y_1) . . . (x_|D|, y_|D|)} is the training dataset, LR : N → R is a learning rate scheduler, θ_0 are the initial parameters of G_θ, M is a given sample size, maxEpochs ∈ N is the number of epochs to train for\n1: procedure T(T, G_θ, D, LR, θ_0, M, maxEpochs)\n2:   for epoch ∈ [1 . . . maxEpochs] do\n3:     for (x_i, y_i) ∈ shuffle(D) do\n4:       T' ← x_i ∘ T ∘ y_i\n5:       Construct distribution q(· | T') according to equation (8)\n6:       u ← G-G(G_θ, M, q) (listed in Algorithm 1)\n7:       θ ← θ − LR(epoch) × u\n8:       (Optional) update the parameters of q(· | T')\n9:     end for\n10:  end for\n11: end procedure",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding most probable strings",

"sec_num": "3.3"

},

{

"text": "When L' is finite, we reduce the decoding task to a reranking task.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding most probable strings",

"sec_num": "3.3"

},
|
{ |
|
"text": "To populate L , one possibility is to marginalize over paths in the approximate distributionp(a | T ) discussed in \u00a73.1 to obtain an estimatep Y (y | T ), and use its support as L . Note that it's possible to populate the candidate pool in other ways, each with its advantages and drawbacks: for example, one can use a top-k path set from a weighted (Markovian) FST. This approach guarantees exact computation, and the pool quality would no longer depend on the qualities of the smoothing distribution q \u03c6 . However it is also a considerably much weaker model and may yield uninspiring candidates. In the common case where the conditioned machine T = X \u2022 T \u2022 Y has X = x \u2208 \u03a3 * as the input string, and Y is the universal acceptor that accepts \u2206 * , one can obtain a candidate pool from seq2seq models: seq2seq models can capture long distance dependencies between input and output strings, and are typically fast to train and decode from. However they are not applicable in the case where L(Y ) = \u2206 * . Experimental details of decoding are further discussed in \u00a74.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding most probable strings", |
|
"sec_num": "3.3" |
|
}, |
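
{

"text": "A minimal sketch (ours; candidate generation and the path sampler are abstracted away) of the reranking decoder: each candidate y_k is scored by an importance-sampling estimate of the total path weight in T' ∘ y_k as in equation (15), computed in log space, and the best-scoring candidate is returned:\n\nimport math\n\ndef rerank(candidates, sample_path_given, log_p_tilde, log_q, M=128):\n    # candidates: list of output strings y_k\n    # sample_path_given(y) -> one path of T' o y drawn from q(. | T' o y)\n    # log_p_tilde(path) -> log of the unnormalized path weight\n    # log_q(path, y) -> log q(path | T' o y)\n    best, best_score = None, -math.inf\n    for y in candidates:\n        paths = [sample_path_given(y) for _ in range(M)]\n        logs = [log_p_tilde(a) - log_q(a, y) for a in paths]\n        m = max(logs)\n        est = m + math.log(sum(math.exp(l - m) for l in logs) / M)   # log of the estimate in eq. (15)\n        if est > best_score:\n            best, best_score = y, est\n    return best",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding most probable strings",

"sec_num": "3.3"

},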
|
{ |
|
"text": "Our experiments mainly aim to: (1) show the effectiveness of NFSTs on transduction tasks; (2) illustrate that how prior knowledge can be introduced into NFSTs and improve the performance; (3) demonstrate the interpretability of our model. Throughout, we experiment on three tasks: (i) grapheme-to-phoneme, (ii) phoneme-to-grapheme, and (iii) actions-to-commands. We compare with competitive string transduction baseline models in these tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We carry out experiments on three string transduction tasks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks and datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Grapheme-to-phoneme and phoneme-tographeme (G2P/P2G) refer to the transduction between words' spelling and phonemic transcription. English has a highly irregular orthography (Venezky, 2011), which necessitates the use of rich models for this task. We use a portion of the standard CMUDict dataset: the Sphinx-compatible version of CMUDict (Weide, 2005) . As for metrics, we choose widely used exact match accuracy and edit distance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 352, |
|
"text": "(Weide, 2005)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks and datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Action-to-command (A2C) refers to the transduction between an action sequence and imperative commands. We use NACS (Bastings et al., 2018) in our experiment. As for metrics, we use exact match accuracy (EM). Note that the in A2C setting, a given input can yield different outputs, e.g. I_JUMP I_WALK I_WALK corresponds to both \"jump and walk twice\" and \"walk twice after jump\". NACS is a finite set of action-command pairs; we consider a predicted command to be correct if it is in the finite set and its corresponding actions is exactly the input. We evaluate on the length setting proposed by Bastings et al. (2018) , where we train on shorter sequences and evaluate on longer sequences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 138, |
|
"text": "(Bastings et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 617, |
|
"text": "Bastings et al. (2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks and datasets", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "NFSTs require an unweighted FST T which defines a scaffold for the relation it recognizes. In this paper we experiment with two versions of T : the first is a simple 'general' design T 0 , which contains only three states s {0,1,2} , where the only arc between q 0 and q 1 consumes the mark <BOS>; and the only arc between q 1 and q 2 consumes the mark <EOS>. T 0 has exactly one accepting state, which is q 2 . To ensure that T 0 defines relation for all possible string pairs (x, y) \u2208 \u03a3 * \u00d7 \u2206 * , we add all arcs of the form", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 231, |
|
"text": "s {0,1,2}", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "FST designs", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "a = (s 1 , s 1 , \u03c9, \u03c3, \u03b4), \u2200(\u03c3, \u03b4) \u2208 \u03a3 \u00d7 \u2206 to T .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FST designs", |
|
"sec_num": "4.2" |
|
}, |
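
{

"text": "A minimal sketch (data structures and names are ours, not the paper's) of the general scaffold T_0: three states q_0, q_1, q_2, a <BOS> arc from q_0 to q_1, an <EOS> arc from q_1 to q_2, and one q_1 → q_1 arc for every (input symbol, output symbol) pair. The mark sequences here are simplified relative to §4.2.1:\n\ndef build_T0(sigma, delta):\n    # sigma: input alphabet, delta: output alphabet\n    # each arc is (src, dst, marks, input_str, output_str)\n    arcs = [(0, 1, ['<BOS>'], '', ''), (1, 2, ['<EOS>'], '', '')]\n    for s in sigma:\n        for d in delta:\n            arcs.append((1, 1, [s, d], s, d))   # simplified mark sequence [sigma, delta]\n    return arcs, 0, {2}\n\n# Example: a tiny alphabet\narcs, start, accepting = build_T0(['a', 'b'], ['A', 'B'])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "FST designs",

"sec_num": "4.2"

},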
|
{ |
|
"text": "To recognize transduction rules defined in the Wikipedia English IPA Help page, we define T IPA , which has all states and arcs of T 0 , and additional states and arcs to handle multi-grapheme and multiphoneme transductions defined in the IPA Help:3 for example, the transduction th \u2192 T is encoded as two arcs (s 1 , s 3 , \u03c9, t, T) and (s 3 , s 1 , \u03c9, h, \u03b5). Because of the lack of good prior knowledge that can be added to A2C experiments, we only use general FSTs in those experiments for such experiments. Nor do we encode special marks that we are going to introduce below.4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FST designs", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As with regular WFSTs, the arcs can often be handengineered to incorporate prior knowledge. Recall that as we describe in \u00a72.2, each arc is associated with a mark sequence. In this paper, we will always derive the mark sequence on an arc a = (s , s, \u03c9 , \u03c3, \u03b4) of the transducer T as \u03c9 = [\u03c3, \u03c9 , \u03b4, s], where \u03c9 \u2208 \u2126 * can be engineered to reflect FST-and application-specific properties of a path, such as the IPA Help list we mentioned earlier. One way to encode such knowledge into mark sequences is to have special mark symbols in mark sequences for particular transductions. In this paper we experiment with two schemes of marks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "\u2022 IPA Help (IPA). We define the IPA mark \u03c9 IPA = {C | V}, where the symbol C indicates that this arc is part of a transduction rule listed in the consonant section of the Wikipedia English IPA Help page. Similarly, the mark V indicates that the transduction rule is listed in the vowel section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "3https://en.wikipedia.org/wiki/Help: IPA/English 4The NACS dataset was actually generated from a regular transducer, which we could in principle use, but doing so would make the transduction fully deterministic and probably not interesting/hard enough.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "\u2022 Phoneme Classes (P ). We define P marks \u03c9 P = \u03a6(\u03b4), where \u03a6 is a lookup function that returns the phoneme class of \u03b4 defined by the CMUDict dataset.5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "In this paper we experiment with the following three FST and mark configurations for G2P/P2G experiments:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "\u2022 -IPA-P in which case \u03c9 = \u2205 for all arcs. T = T 0 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "\u2022 +IPA-P in which case \u03c9 = [\u03c9 IPA ] when the transduction rule is found in the IPA Help list, otherwise \u03c9 = \u2205. T = T IPA .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "\u2022 +IPA+P in which case \u03c9 = [\u03c9 IPA \u03c9 P ] when the transduction rule is found in the IPA Help list, otherwise \u03c9 = [\u03c9 P ]. T = T IPA .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "As we said earlier, we only use T = T 0 with no special marks for A2C experiments. Experimental results on these different configurations are in \u00a75.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Design of mark sequences", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "We experiment with the following methods to decode the most probable strings:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding methods", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Approximate Posterior (AP). We approximate the posterior distribution over output stringsp Y (y | T ), and pick\u0177 * = argmax ypY (y | T ) as the output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding methods", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Reranking AP. As we discuss in \u00a73.3, improving\u0177 * by taking more path samples in T may be expensive. The reranking method uses the support ofp Y as a candidate pool L , and for each y k \u2208 L we estimate equation 15using path samples in T \u2022 y k .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding methods", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Reranking External. This decoding method uses k-best lists from external models. In this paper, we make use of sequence-to-sequence baseline models as the candidate pool L .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding methods", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Reranking AP + External. This decoding method uses the union of the support ofp Y and k-best lists from the sequence-to-sequence baseline models as the candidate pool L .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding methods", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this paper, we take 128 path samples per candidate for all Reranking methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding methods", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We compare NFSTs against the following baselines:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "BiRNN-WFSTs proposed by Rastogi et al. (2016) , were weighted finite-state transducers whose weights encode input string features by the use of recurrent neural networks. As we note in Table 1 , they can be seen as a special case of NFSTs, where the Markov property is kept, but where exact inference is still possible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 45, |
|
"text": "Rastogi et al. (2016)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 192, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Seq2seq models are the standard toolkit for transduction tasks. We make use of the attention mechanism proposed by Luong et al. (2015) , which accomplishes 'soft alignments' that do not enforce a monotonic alignment constraint.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 134, |
|
"text": "Luong et al. (2015)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Neuralized IBM Model 1 is a character transduction model recently proposed by Wu et al. (2018) , which marginalizes over non-monotonic hard alignments between input and output strings. Like (Luong et al., 2015) , they did not enforce monotonic alignment constraints; but unlike them, they did not make use of the input feeding mechanism,6 where past alignment information is fed back into the RNN decoder. This particular omission allows (Wu et al., 2018) to do exact inference with a dynamic programming algorithm. All baseline systems are tuned on the validation sets. The seq2seq models employ GRUs, with word and RNN embedding size = 500 and a dropout rate of 0.3. They are trained with the Adam optimizer (Kingma and Ba, 2014) over 50 epochs. The Neuralized IBM Model 1 models are tuned as described in (Wu et al., 2018) . Table 2 indicates that BiRNN-WFST models (Rastogi et al., 2016) perform worse than other models. Their Markovian assumption helps enable dynamic programming, but restricts their expressive power, which greatly hampers the BiRNN-WFST's performance on the P2G/G2P task. The NACS task also relies highly on output-output interactions, and BiRNN-WFST performs very poorly there.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 94, |
|
"text": "Wu et al. (2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 210, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 455, |
|
"text": "(Wu et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 808, |
|
"end": 825, |
|
"text": "(Wu et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 869, |
|
"end": 891, |
|
"text": "(Rastogi et al., 2016)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 828, |
|
"end": 835, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "6We discuss this further in Appendix B.1. Table 2 : Average exact match accuracy (%, higher the better) and edit distance (lower the better) on G2P and P2G as well as exact match accuracy on NACS.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 49, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does losing the Markov property help?", |
|
"sec_num": "5.2.1" |
|
}, |
|
{ |
|
"text": "Comparison between our models with baselines. For NFST models, we make use of the Reranking AP decoding method described in \u00a74.2. Table 3 shows results from different decoding methods on the G2P/P2G tasks, configuration +IPA+P", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 137, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Does losing the Markov property help?", |
|
"sec_num": "5.2.1" |
|
}, |
|
{ |
|
"text": ". AP performs significantly worse than Reranking AP, suggesting that the estimat\u00ea y * suffers from the variance problem. Interestingly, of decoding methods that employ external models, Reranking External performs better than Reranking AP + External, despite having a smaller candidate pool. We think there is some product-of-experts effect in Reranking External since the external model may not be biased in the same way as our model is. But such benefits vanish when candidates from AP are also in the pool -our learned approximation learns the bias in the model -and hence the worse performance in Reranking AP + External. This suggests an interesting regularization trick in practice: populating the candidate pool using external models to hide our model bias. However when we compare our method against non-NFST baseline methods we do not make use of such tricks, to ensure a more fair comparison. 32.0 1.309 1.303 Table 3 : Average exact match accuracy (%, higher the better) and edit distance (lower the better) on G2P and P2G. The effectiveness of different decoding methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 919, |
|
"end": 926, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effectiveness of proposed decoding methods", |
|
"sec_num": "5.2.2" |
|
}, |
|
{ |
|
"text": "In Table 4 we see that combining both +IPA and +P improves model generalizability over the general FST (-IPA -P ). We also note that using only the IPA marks leads to degraded performance Table 4 : Average exact match accuracy (%, higher the better) and edit distance (lower the better) on G2P and P2G. The effectiveness of different FST designs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 111, |
|
"text": "FST (-IPA -P", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 195, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Prior knowledge: does it help?", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "compared to the general FST baseline. This is a surprising result -one explanation is the IPA marks are not defined on all paths that transduce the intended input-output pairs: NFSTs are capable of recognizing phoneme-grapheme alignments in different paths,7 but only one such path is marked by +IPA. But we leave a more thorough analysis to future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Prior knowledge: does it help?", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Recently, there has been work relating finite-state methods and neural architectures. For example, and Peng et al. (2018) have shown the equivalence between some neural models and WFSAs. The most important differences of our work is that in addition to classifying strings, NFSTs can also transduce strings. Moreover, NFSTs also allow free topology of FST design, and breaks the Markovian assumption. In addition to models we compare against in \u00a74, we note that (Aharoni and Goldberg, 2017; Deng et al., 2018) are also similar to our work; in that they also marginalize over latent alignments, although they do not enforce the monotonicity constraint. Work that discusses globally normalized sequence models are relevant to our work. In this paper, we discuss a training strategy that bounds the partition function; other ways to train a globally normalized model (not necessarily probabilistic) include (Wiseman and Rush, 2016; Andor et al., 2016) . On the other hand, our locally normalized FSTs bear resemblance to (Dyer et al., 2016) , which was also locally normalized, and also employed importance sampling for training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 121, |
|
"text": "Peng et al. (2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 462, |
|
"end": 490, |
|
"text": "(Aharoni and Goldberg, 2017;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 509, |
|
"text": "Deng et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 904, |
|
"end": 928, |
|
"text": "(Wiseman and Rush, 2016;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 929, |
|
"end": 948, |
|
"text": "Andor et al., 2016)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1018, |
|
"end": 1037, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Neural finite state transducers (NFSTs) are able to model string pairs, considering their monotonic alignment but also enjoying RNNs' power to handle non-finite-state phenomena. They compete favor-7This is discussed further in Appendix B.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "ably with state-of-the-art neural models on transduction tasks. At the same time, it is easy to inject domain knowledge into NFSTs for inductive bias, and they offer interpretable paths. In this paper, we have used rather simple architectures for our RNNs; one could experiment with multiple layers and attention. One could also experiment with associating marks differently with arcs-the marks are able to convey useful domain information to the RNNs. For example, in a P2G or G2P task, all arcs that cross a syllable boundary might update the RNN state using a syllable mark. We envision using regular expressions to build the NFSTs, and embedding marks in the regular expressions as a way of sending useful features to the RNNs to help them evaluate paths.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In this paper, we have studied NFSTs as standalone systems. But as probabilistic models, they can be readily embedded in a bigger picture: it should be directly feasible to incorporate a globally/locally normalized NFST in a larger probabilistic model (Finkel and Manning, 2009; Chiang et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 278, |
|
"text": "(Finkel and Manning, 2009;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 299, |
|
"text": "Chiang et al., 2010)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The path weights of NFSTs could be interpreted simply as scores, rather than log-probabilities. One would then decode by seeking the 1-best path with input x, e.g., via beam search or Monte Carlo Tree Search. In this setting, one might attempt to train the NFST using methods similar to the max-violation structured perceptron or the structured SVM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "PathsP (a | x, y) /mA\u00f4S/ marche :m m:A a:\u00f4 r:S c: h: e: 96.5% :m m:A a:\u00f4 r: :S c: h: e: 2.5% :m m:A a: :\u00f4 r:S c: h: e: 1.0%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input / Output", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "/OnslOt/ onslaught :O o:n n: :s s:l l:O a: u: g: h:t t: 76.3% :O o:n n:s s:l l:O a: u: g: h:t t:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input / Output", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "21.4% :O o:n n: :s s:l l:O a: u: g: h: :t t:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input / Output", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1.5%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input / Output", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "/wIlINh@m/ Willingham :w W:I i:l l: l: :I i:N n: g: :h h:@ a: :m m: 40.1% :w W:I i:l l: l:I i:N n: g: :h h:@ a: :m m: 36.6% :w W:I i:l l: l:I i:N n: g:h h:@ a: :m m: 7.4% /gezI/ ghezzi :g g: h:e e:z z: I:z i: 98.8% :g g:e h: e:z z:I z: i:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input / Output", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1.2% arcs, which is beyond the capability of of ordinary WFSTs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input / Output", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "C Implementation Details", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input / Output", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As mentioned before, the type of RNN that we use is GRU. The GRU parameterizing G \u03b8 has 500 hidden states. The embedding sizes of tokens, including the input symbol, output symbol and states, and marks are all 500. During inference we make use of proposal distributions q \u03c6 (a | T ), where T \u2208 {x \u2022 T , T \u2022 y, x \u2022 T \u2022 y}. All RNNs used to parametrize q \u03c6 are also GRUs, with 125 hidden states. q \u03c6 makes use of input/output embeddings independent from G \u03b8 , which also have size 125 in this paper. The feed-forward networks C x,y,xy are parametrized by 3-layer networks, with ReLU as the activation function of the first two layers. The output dimension sizes of the first and second layers are D /2 and D /4 , where D is the input vector dimension size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.1 Model parametrization details", |
|
"sec_num": null |
|
}, |
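A minimal sketch of the 3-layer feed-forward networks described in Appendix C.1 above, assuming PyTorch: hidden sizes D/2 and D/4 with ReLU after the first two layers. The final output dimension `out_dim` is a placeholder, since it is not stated in this passage.

```python
# Sketch only, not the released code; `out_dim` is an assumption.
import torch.nn as nn

def make_c_network(d: int, out_dim: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(d, d // 2), nn.ReLU(),       # first hidden layer: size D/2
        nn.Linear(d // 2, d // 4), nn.ReLU(),  # second hidden layer: size D/4
        nn.Linear(d // 4, out_dim),            # final layer: no activation stated in the paper
    )
```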
|
{ |
|
"text": "We use stochastic gradient descent (SGD) to train G \u03b8 . For each example, we compute the gradient using normalized importance sampling over an ensemble of 512 particles (paths), the maximum that we could compute in parallel. By using a large ensemble, we reduce both the bias (from normalized importance sampling) and the variance of the gradient estimate; we found that smaller ensembles did not work as well. Thus, we used only one example per minibatch. We train the 'clamped' proposal distribution q \u03c6 (a | x \u2022 T \u2022 y) differently from the 'free' ones q \u03c6 (a | x \u2022 T ) and q \u03c6 (a | T \u2022 y). The clamped distribution is trained alternately with G \u03b8 , as listed in Algorithm 2. We evaluate on the development dataset at the end of each epoch using the Reranking External method described in \u00a74.3. When the EM accuracy stops improving, we fix the parameters of G \u03b8 and start training q \u03c6 (x \u2022 T ) and q \u03c6 (T \u2022 y) on the inclusive KL divergence objective function, using methods described in (Lin and Eisner, 2018) . We then initialize the free distributions' RNNs using those of the clamped distributions. We train the free proposal distributions for 30 epochs, and evaluate on the development dataset at the end of each epoch. Results from the best epochs are reported in this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 990, |
|
"end": 1012, |
|
"text": "(Lin and Eisner, 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C.2 Training procedure details", |
|
"sec_num": null |
|
}, |
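To make the gradient computation in Appendix C.2 above concrete, here is a minimal sketch of a self-normalized importance-sampling gradient estimate over an ensemble of 512 sampled paths, assuming PyTorch. The callables `score_path` (the path scorer G_theta) and `propose` (the trained proposal, returning a path and its log-probability) are hypothetical names, not the authors' API; the surrogate returns a quantity whose gradient matches the normalized-IS estimate of the log path-sum.

```python
# Sketch of a self-normalized importance-sampling gradient surrogate; names are assumptions.
import torch

M = 512  # number of particles per example, as in Appendix C.2

def is_gradient_surrogate(score_path, propose):
    paths, log_q = zip(*[propose() for _ in range(M)])
    scores = torch.stack([score_path(p) for p in paths])      # log unnormalized weight of each sampled path
    log_q = torch.stack(list(log_q))                          # log q(a) under the proposal
    w_hat = torch.softmax((scores - log_q).detach(), dim=0)   # normalized importance weights (treated as constants)
    # Backpropagating through this sum yields sum_m w_hat_m * grad score(a_m),
    # the self-normalized IS estimate of the gradient of the log path-sum.
    return (w_hat * scores).sum()
```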
|
{ |
|
"text": "2When X or Y is larger than a single string, it is commonly all of \u03a3 * or \u2206 * respectively, in which case conditioning on it gives no information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "5https://github.com/cmusphinx/cmudict/blob/ master/cmudict.phones", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work has been generously supported by a Google Faculty Research Award and by Grant No. 1718846 from the National Science Foundation, both to the last author. Hao Zhu is supported by Tsinghua University Initiative Scientific Research Program. We thank Shijie Wu for providing us IBM Neuralized Model 1 experiment results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A Finite-state transducers", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A relation is a set of pairs-in this paper, a subset of \u03a3 * \u00d7 \u2206 * , so it relates strings over an \"input\" alphabet \u03a3 to strings over an \"output\" alphabet \u2206.A weighted relation is a function R that maps any string pair (x, y) to a weight in R \u22650 .We say that the relation R is rational if R can be defined by some weighted finite-state transducer (FST) T . As formalized in Appendix A.3, this means that R(x, y) is the total weight of all accepting paths in T that are labeled with (x, y) (which is 0 if there are no such accepting paths). The weight of each accepting path in T is given by the product of its arc weights, which fall in R >0 .The set of pairs support(R) {(x, y) : R(x, y) > 0} is then said to be a regular relation because it is recognized by the unweighted FST obtained by dropping the weights from T . In this paper, we are interested in defining non-rational weighting functions R with this same regular support set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.1 Rational Relations", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We briefly review finite-state transducers (FSTs). Formally, an FST is a tupleis the set of weighted arcs\u2022 I \u2286 Q is the set of initial states (conventionally |I| = 1)\u2022 F \u2286 Q is the set of final states Let a = a 1 . . . a T (for T \u2265 0) be an accepting path in T 0 , that is, eachWe say that the input and output strings of a are", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Finite-state transducers", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Weighted FSTs (WFSTs) are defined very similarly to FSTs. A WFST is formally defined as a 6-tuple, just like an (unweighted) FST: T = (\u03a3, \u2206, Q, A, I, F ), with arcs carrying weights:We also define the weight of a to be w(a) T i=1 \u03ba i \u2208 R. The weight of the entire WFST T is defined as the total weight (under \u2295) of all accepting paths:More interestingly, the weight T [x, y] of a string pair x \u2208 \u03a3 * , y \u2208 \u2206 * is given by similarly summing w(a) over just the accepting paths a whose input string is x and output string is y.B More analysis on the effectiveness of NFSTs B.1 Does feeding alignments into the decoder help?In particular, we attribute our models' outperforming Neuralized IBM Model 1 to the fact that a complete history of past alignments is remembered in the RNN state. (Wu et al., 2018) noted that in character transduction tasks, past alignment information seemed to barely affect decoding decisions made afterwards. However, we empirically find that there is performance gain by explicitly modeling past alignments. This also shows up in our preliminary experiments with non-input-feeding seq2seq models, which resulted in about 1% of lowered accuracy and about 0.1 longer edit distance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 784, |
|
"end": 801, |
|
"text": "(Wu et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.3 Real-valued weighted FSTs", |
|
"sec_num": null |
|
}, |
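To illustrate the definitions in Appendices A.2 and A.3 above, here is a small sketch that computes a path's weight as the product of its arc weights and T[x, y] as the sum over accepting paths labeled (x, y). The `Arc`/`Wfst` containers are hypothetical, and the brute-force path enumeration (with a depth cap) is for exposition only; real toolkits compute this sum by dynamic programming.

```python
# Illustrative only, not a production WFST implementation.
from dataclasses import dataclass
from typing import List, Set

@dataclass(frozen=True)
class Arc:
    src: int
    dst: int
    inp: str        # input label, "" for epsilon
    out: str        # output label, "" for epsilon
    weight: float   # kappa > 0

@dataclass
class Wfst:
    arcs: List[Arc]
    initial: Set[int]
    final: Set[int]

def pair_weight(t: Wfst, x: str, y: str, max_len: int = 20) -> float:
    """Total weight of accepting paths whose input string is x and output string is y."""
    total = 0.0
    def walk(state, xs, ys, w, depth):
        nonlocal total
        if state in t.final and xs == x and ys == y:
            total += w                                   # found an accepting path; add its weight
        if depth == max_len:
            return                                       # cap path length so epsilon cycles terminate
        for a in t.arcs:
            if a.src == state and x.startswith(xs + a.inp) and y.startswith(ys + a.out):
                walk(a.dst, xs + a.inp, ys + a.out, w * a.weight, depth + 1)
    for q0 in t.initial:
        walk(q0, "", "", 1.0, 0)
    return total
```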
|
{ |
|
"text": "The model is not required to learn transduction rules that conform to our linguistic knowledge. However, we expect that a well-performing one would tend to pick up rules that resemble what we know. To verify this, we obtain samples (listed in Table 4 ) fromp(a | x, y) using the importance sampling algorithm described in \u00a73.3. We find that our NFST model has learned to align phonemes and graphemes, generating them alternately. It has no problem picking up obvious pairs in the English orthography (e.g. (S, c h), and (N, n g)). We also find evidence that the model has picked up how context affects alignment: for example, the model has learned that the bigram 'gh' is pronounced differently in different contexts: in 'onslaught,' it is aligned with O in the sequence 'augh;' in 'Willingham,' it spans over two phonemes N h; and in 'ghezzi,' it is aligned with the phoneme g. We also find that our NFST has no problem learning phoneme-grapheme alignments that span over two", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 250, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B.2 Interpretability of learned paths", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Morphological inflection generation with hard monotonic attention", |
|
"authors": [ |
|
{ |
|
"first": "Roee", |
|
"middle": [], |
|
"last": "Aharoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Morphological inflection generation with hard monotonic attention. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Globally normalized transition-based neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Andor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Alberti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Presta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuzman", |
|
"middle": [], |
|
"last": "Ganchev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normal- ized transition-based neural networks. In Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Jump to better conclusions: Scan both left and right", |
|
"authors": [ |
|
{ |
|
"first": "Joost", |
|
"middle": [], |
|
"last": "Bastings", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "47--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joost Bastings, Marco Baroni, Jason Weston, Kyunghyun Cho, and Douwe Kiela. 2018. Jump to better conclu- sions: Scan both left and right. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 47-55.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Rational Series and Their Languages", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Berstel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jr", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christophe", |
|
"middle": [], |
|
"last": "Reutenauer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean Berstel, Jr. and Christophe Reutenauer. 1988. Ra- tional Series and Their Languages. Springer-Verlag, Berlin, Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Pattern recognition and machine learning", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bishop", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher M Bishop. 2006. Pattern recognition and machine learning.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Bayesian inference for finite-state transducers", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Pauls", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujith", |
|
"middle": [], |
|
"last": "Ravi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "447--455", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang, Jonathan Graehl, Kevin Knight, Adam Pauls, and Sujith Ravi. 2010. Bayesian inference for finite-state transducers. In Human Language Tech- nologies: The 2010 Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 447-455. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00c7aglar", |
|
"middle": [], |
|
"last": "G\u00fcl\u00e7ehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1724--1734", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7aglar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase repre- sentations using rnn encoder-decoder for statistical machine translation. In EMNLP, pages 1724-1734. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Stochastic contextual edit distance and probabilistic FSTs", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanyun", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "625--630", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic FSTs. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 625-630, Baltimore.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Latent alignment and variational attention", |
|
"authors": [ |
|
{ |
|
"first": "Yuntian", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Chiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Demi", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Advances in Neural Information Processing Systems 31", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9735--9747", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. 2018. Latent alignment and variational attention. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 9735-9747. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Non-Parametric Model for the Discovery of Inflectional Paradigms from Plain Text Using Graphical Models over Strings", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Dreyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Dreyer. 2011. A Non-Parametric Model for the Discovery of Inflectional Paradigms from Plain Text Using Graphical Models over Strings. Ph.D. thesis, Johns Hopkins University, Baltimore, MD.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Recurrent neural network grammars", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In HLT-NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Hierarchical bayesian domain adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel and Christopher D Manning. 2009. Hierarchical bayesian domain adaptation. In Proceed- ings of Human Language Technologies: The 2009", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "602--610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 602-610. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Neural particle smoothing for sampling from conditional sequence models", |
|
"authors": [ |
|
{ |
|
"first": "Chu-Cheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "929--941", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu-Cheng Lin and Jason Eisner. 2018. Neural particle smoothing for sampling from conditional sequence models. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 929-941, New Orleans.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Effective approaches to attention-based neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1421", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 1412-1421.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Speech recognition with weighted finite-state transducers", |
|
"authors": [ |
|
{ |
|
"first": "Mehryar", |
|
"middle": [], |
|
"last": "Mohri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Riley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Springer Handbook of Speech Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "559--584", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mehryar Mohri, Fernando Pereira, and Michael Riley. 2008. Speech recognition with weighted finite-state transducers. In Springer Handbook of Speech Pro- cessing, pages 559-584. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Rational recurrences. In EMNLP", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Weighting finite-state transductions with neural context", |
|
"authors": [ |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "623--633", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 623-633, San Diego. 11 pages. Supplementary material (1 page) also available.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Finite-State Language Processing", |
|
"authors": [], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emmanuel Roche and Yves Schabes, editors. 1997. Finite-State Language Processing. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Sopa: Bridging cnns, rnns, and weighted finite-state machines. CoRR", |
|
"authors": [ |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Thomson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Sopa: Bridging cnns, rnns, and weighted finite-state machines. CoRR, abs/1805.06061.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems -Volume 2, NIPS'14, pages 3104-3112, Cambridge, MA, USA. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Bidirectional recurrent neural network with attention mechanism for punctuation restoration", |
|
"authors": [ |
|
{ |
|
"first": "Ottokar", |
|
"middle": [], |
|
"last": "Tilk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanel", |
|
"middle": [], |
|
"last": "Alum\u00e4e", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "INTERSPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2016. Bidirectional recurrent neural network with attention mechanism for punctuation restoration. In INTERSPEECH.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The structure of English orthography", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Richard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Venezky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "82", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard L Venezky. 2011. The structure of English orthography, volume 82. Walter de Gruyter.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The carnegie mellon pronouncing dictionary", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Weide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Weide. 2005. The carnegie mellon pronouncing dictionary [cmudict. 0.6].", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Sequenceto-sequence learning as beam-search optimization", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman and Alexander M. Rush. 2016. Sequence- to-sequence learning as beam-search optimization. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Hard non-monotonic attention for character-level transduction", |
|
"authors": [ |
|
{ |
|
"first": "Shijie", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pamela", |
|
"middle": [], |
|
"last": "Shapiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4425--4438", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level trans- duction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4425-4438.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "(discussed in \u00a73.1,) a (1) . . . a (M ) \u223c q are i.i.d.samples of paths in x \u2022 T \u2022 y, and w (m) is the importance weight of the m-th sample satisfying" |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "where q(\u2022 | T \u2022 y k ) is a proposal distribution over paths in T \u2022 y k . In this paper we parametrize q(\u2022 | T \u2022y k ) following the definition in equation" |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Most probable paths from x \u2022 T \u2022 y under the approximate posterior distribution.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |