|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:29:55.249146Z" |
|
}, |
|
"title": "Learning Context-Free Languages with Nondeterministic Stack RNNs", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Dusell", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Notre Dame", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Notre Dame", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a differentiable stack data structure that simultaneously and tractably encodes an exponential number of stack configurations, based on Lang's algorithm for simulating nondeterministic pushdown automata. We call the combination of this data structure with a recurrent neural network (RNN) controller a Nondeterministic Stack RNN. We compare our model against existing stack RNNs on various formal languages, demonstrating that our model converges more reliably to algorithmic behavior on deterministic tasks, and achieves lower cross-entropy on inherently nondeterministic tasks.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a differentiable stack data structure that simultaneously and tractably encodes an exponential number of stack configurations, based on Lang's algorithm for simulating nondeterministic pushdown automata. We call the combination of this data structure with a recurrent neural network (RNN) controller a Nondeterministic Stack RNN. We compare our model against existing stack RNNs on various formal languages, demonstrating that our model converges more reliably to algorithmic behavior on deterministic tasks, and achieves lower cross-entropy on inherently nondeterministic tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Although recent neural models of language have made advances in learning syntactic behavior, research continues to suggest that inductive bias plays a key role in data efficiency and human-like syntactic generalization (van Schijndel et al., 2019; Hu et al., 2020) . Based on the long-held observation that language exhibits hierarchical structure, previous work has proposed coupling recurrent neural networks (RNNs) with differentiable stack data structures (Joulin and Mikolov, 2015; Grefenstette et al., 2015) to give them some of the computational power of pushdown automata (PDAs), the class of automata that recognize context-free languages (CFLs). However, previously proposed differentiable stack data structures only model deterministic stacks, which store only one version of the stack contents at a time, theoretically limiting the power of these stack RNNs to the deterministic CFLs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 247, |
|
"text": "(van Schijndel et al., 2019;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 264, |
|
"text": "Hu et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 486, |
|
"text": "(Joulin and Mikolov, 2015;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 487, |
|
"end": 513, |
|
"text": "Grefenstette et al., 2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A sentence's syntactic structure often cannot be fully resolved until its conclusion (if ever), requiring a human listener to track multiple possibilities while hearing the sentence. Past work in psycholinguistics has suggested that models that keep multiple candidate parses in memory at once can explain human reading times better than models which assume harsher computational constraints. This ability also plays an important role in calculating expectations that facilitate more efficient language processing (Levy, 2008) . Current neural language models do not track multiple parses, if they learn syntax generalizations at all McCoy et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 514, |
|
"end": 526, |
|
"text": "(Levy, 2008)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 634, |
|
"end": 653, |
|
"text": "McCoy et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose a new differentiable stack data structure that explicitly models a nondeterministic PDA, adapting an algorithm by Lang (1974) and reformulating it in terms of tensor operations. The algorithm is able to represent an exponential number of stack configurations at once using cubic time and quadratic space complexity. As with existing stack RNN architectures, we combine this data structure with an RNN controller, and we call the resulting model a Nondeterministic Stack RNN (NS-RNN).", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 136, |
|
"text": "Lang (1974)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We predict that nondeterminism can help language processing in two ways. First, it will improve trainability, since all possible sequences of stack operations contribute to the objective function, not just the sequence used by the current model. Second, it will improve expressivity, as it is able to model concurrent parses in ways that a deterministic stack cannot. We demonstrate these claims by comparing the NS-RNN to deterministic stack RNNs on formal language modeling tasks of varying complexity. To show that nondeterminism aids training, we show that the NS-RNN achieves lower cross-entropy, in fewer parameter updates, on some deterministic CFLs. To show that nondeterminism improves expressivity, we show that the NS-RNN achieves lower crossentropy on nondeterministic CFLs, including the \"hardest context-free language\" (Greibach, 1973) , a language which is at least as difficult to parse as any other CFL and inherently requires nondeterminism. Our code is available at https://github. com/bdusell/nondeterministic-stack-rnn.", |
|
"cite_spans": [ |
|
{ |
|
"start": 833, |
|
"end": 849, |
|
"text": "(Greibach, 1973)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In all differentiable stack-augmented networks that we are aware of (including ours), a network called the controller, which is some kind of RNN (typically an LSTM), is augmented with a differentiable stack, which has no parameters of its own. At each time step, the controller emits weights for various stack operations, which at minimum include push and pop. To maintain differentiability, the weights need to be continuous; different designs for the stack interpret fractionally-weighted operations differently. The stack then executes the fractional operations and produces a stack reading, which is a vector that represents the top of the updated stack. The stack reading is used as an extra input to the next hidden state update.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2 Background and Motivation", |
|
"sec_num": "508" |
|
}, |
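The controller–stack interface described here can be summarized in a short sketch. This is an illustrative skeleton, not code from any of the cited papers; the `DifferentiableStack`-style API with `update()` and `reading()` methods is an assumption, and the concrete stack designs discussed below differ in how they would implement it.

```python
import torch
import torch.nn as nn

class StackRNNStep(nn.Module):
    """One step of a generic stack RNN: the controller emits continuous
    weights over stack operations, the parameter-free stack executes them,
    and the resulting stack reading feeds the next hidden-state update."""

    def __init__(self, input_size, hidden_size, num_actions, reading_size):
        super().__init__()
        self.controller = nn.LSTMCell(input_size + reading_size, hidden_size)
        self.action_layer = nn.Linear(hidden_size, num_actions)

    def forward(self, x_t, state, stack, reading):
        # The previous stack reading is an extra input to the controller.
        h, c = self.controller(torch.cat([x_t, reading], dim=-1), state)
        # Continuous (softmax) weights over operations keep the step differentiable.
        op_weights = torch.softmax(self.action_layer(h), dim=-1)
        stack = stack.update(op_weights, h)   # assumed stack API
        return (h, c), stack, stack.reading()
```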
|
{ |
|
"text": "Designs for differentiable stacks have proceeded generally along two lines. One approach, which we call superposition (Joulin and Mikolov, 2015) , treats fractional weights as probabilities. The other, which we call stratification (Sun et al., 1995; Grefenstette et al., 2015) , treats fractional weights as \"thicknesses.\" Superposition In the model of Joulin and Mikolov (2015) , the controller emits at each time step a probability distribution over three stack operations: push a new vector, pop the top vector, and no-op. The stack simulates all three operations at once, setting each stack element to the weighted interpolation of the elements above, at, and below it in the previous time step, weighted by push, noop, and pop probabilities respectively. Thus, each stack element is a superposition of possible values for that element. Because stack elements depend only on a fixed number of elements from the previous time step, the stack update can largely be parallelized. Yogatama et al. (2018) developed an extension to this model that allows a variable number of pops per time step, up to a fixed limit K. Suzgun et al. (2019) also proposed a modification of the controller parameterization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 144, |
|
"text": "(Joulin and Mikolov, 2015)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 249, |
|
"text": "(Sun et al., 1995;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 276, |
|
"text": "Grefenstette et al., 2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 378, |
|
"text": "Joulin and Mikolov (2015)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 981, |
|
"end": 1003, |
|
"text": "Yogatama et al. (2018)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1117, |
|
"end": 1137, |
|
"text": "Suzgun et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2 Background and Motivation", |
|
"sec_num": "508" |
|
}, |
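A minimal sketch of the superposition update described above, in the spirit of Joulin and Mikolov (2015): each cell becomes an interpolation of the cell above it, at it, and below it from the previous time step, weighted by the push, no-op, and pop probabilities. The fixed depth, tensor layout, and zero-padding at the bottom are simplifying assumptions, not the original implementation.

```python
import torch

def superposition_update(stack, push_prob, noop_prob, pop_prob, pushed_vec):
    """stack: (depth, m) tensor of stack cells, index 0 = top.
    push_prob, noop_prob, pop_prob: scalar weights summing to 1.
    pushed_vec: (m,) vector the controller proposes to push."""
    depth, m = stack.shape
    # Under a push, every cell moves down one slot and the new vector takes the top;
    # under a no-op, cells stay put; under a pop, every cell moves up one slot.
    shifted_down = torch.cat([pushed_vec.unsqueeze(0), stack[:-1]], dim=0)
    shifted_up = torch.cat([stack[1:], torch.zeros(1, m)], dim=0)
    new_stack = push_prob * shifted_down + noop_prob * stack + pop_prob * shifted_up
    reading = new_stack[0]   # the (superposed) top cell
    return new_stack, reading
```

Because each new cell depends only on a fixed window of old cells, all cells can be updated in parallel, which is the parallelism advantage noted above.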
|
{ |
|
"text": "Stratification The model proposed by Sun et al. (1995) and later studied by Grefenstette et al. (2015) takes a different approach, assigning a strength between 0 and 1 to each stack element. If the stack elements were the layers of a cake, then the strengths would represent the thickness of each layer. At each time step, the controller emits a push weight between 0 and 1 which determines the strength of a new vector pushed onto the stack, and a pop weight between 0 and 1 which determines how much to slice off the top of the stack. The stack reading is computed by examining the top layer of unit thickness and interpolating the vectors proportional to their strengths. This relies on min and max operations, which can have zero gradients. In practice, the model can get trapped in local optima and requires random restarts (Hao et al., 2018) . This model also affords less opportunity for parallelization because of the interdependence of stack elements within the same time step. Hao et al. (2018) proposed an extension that uses memory buffers to allow variable-length transductions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 54, |
|
"text": "Sun et al. (1995)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 76, |
|
"end": 102, |
|
"text": "Grefenstette et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 829, |
|
"end": 847, |
|
"text": "(Hao et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 987, |
|
"end": 1004, |
|
"text": "Hao et al. (2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2 Background and Motivation", |
|
"sec_num": "508" |
|
}, |
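A sketch of the stratification idea (strengths as thicknesses) in the spirit of Sun et al. (1995) and Grefenstette et al. (2015): popping slices strength off the top, pushing appends a cell whose strength is the push weight, and the reading interpolates the topmost unit of thickness. This loop-based illustration uses plain floats for strengths and is not the authors' implementation; note how the min/max operations it relies on are exactly the ones that can yield zero gradients.

```python
import torch

def stratified_step(vectors, strengths, push_weight, pop_weight, pushed_vec):
    """vectors: list of (m,) tensors, bottom of the stack first.
    strengths: list of floats in [0, 1], one per vector."""
    # Pop: remove pop_weight of "thickness" from the top down.
    remaining = pop_weight
    new_strengths = []
    for s in reversed(strengths):
        removed = min(s, remaining)            # min/max here can have zero gradients
        new_strengths.append(s - removed)
        remaining = max(remaining - removed, 0.0)
    new_strengths = list(reversed(new_strengths))
    # Push: append a new cell whose strength is the push weight.
    new_vectors = list(vectors) + [pushed_vec]
    new_strengths = new_strengths + [push_weight]
    # Read: interpolate the top unit of thickness, weighted by strength.
    reading = torch.zeros_like(pushed_vec)
    remaining = 1.0
    for v, s in zip(reversed(new_vectors), reversed(new_strengths)):
        used = min(s, remaining)
        reading = reading + used * v
        remaining = max(remaining - used, 0.0)
    return new_vectors, new_strengths, reading
```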
|
{ |
|
"text": "Nondeterminism In all the above models, the stack is essentially deterministic in design. In order to recognize a nondeterministic CFL like {ww R } from left to right, it must be possible, at each time step, for the stack to track all prefixes of the input string read so far. None of the foregoing models, to our knowledge, can represent a set of possiblities like this. Even for deterministic CFLs, this has consequences for trainability; at each time step, training can only update the model from the vantage point of a single stack configuration, making the model prone to getting stuck in local minima.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2 Background and Motivation", |
|
"sec_num": "508" |
|
}, |
|
{ |
|
"text": "To overcome this weakness, we propose incorporating a nondeterministic stack, which affords the model a global view of the space of possible ways to use the stack. Our controller emits a probability distribution over stack operations, as in the superposition approach. However, whereas superposition only maintains the per-element marginal distributions over the stack elements, we propose to maintain the full distribution over the whole stack contents. We marginalize the distribution as late as possible, when the controller queries the stack for the current top stack symbol.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2 Background and Motivation", |
|
"sec_num": "508" |
|
}, |
|
{ |
|
"text": "In the following sections, we explain our model and compare it against those of Joulin and Mikolov (2015) and Grefenstette et al. (2015) . Despite taking longer in wall-clock time to train, our model learns to solve the tasks optimally with a higher rate of success.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 105, |
|
"text": "Joulin and Mikolov (2015)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 110, |
|
"end": 136, |
|
"text": "Grefenstette et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2 Background and Motivation", |
|
"sec_num": "508" |
|
}, |
|
{ |
|
"text": "In this section, we give a definition of nondeterministic PDAs ( \u00a73.2), describe how to process strings with nondeterministic PDAs in cubic time ( \u00a73.3), and reformulate this algorithm in terms of tensor operations ( \u00a73.4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pushdown Automata", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Let be the empty string. Let 1[\u03c6] be 1 when proposition \u03c6 is true, 0 otherwise. If A is a matrix, let A i: and A : j be the ith row and jth column, respectively, and define analogous notation for tensors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A weighted pushdown automaton (PDA) is a tuple M = (Q, \u03a3, \u0393, \u03b4, q 0 , \u22a5), where:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Q is a finite set of states.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 \u03a3 is a finite input alphabet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 \u0393 is a finite stack alphabet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 \u03b4 : Q \u00d7 \u0393 \u00d7 \u03a3 \u00d7 Q \u00d7 \u0393 * \u2192 R \u22650 maps transitions,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "which we write as q, x a \u2212 \u2192 r, y, to weights. \u2022 q 0 \u2208 Q is the start state.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 \u22a5 \u2208 \u0393 is the initial stack symbol.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In this paper, we do not allow non-scanning transitions (that is, those where a = ). Although this does not reduce the weak generative capacity of PDAs (Autebert et al., 1997) , it could affect their ability to learn; we leave exploration of nonscanning transitions for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 175, |
|
"text": "(Autebert et al., 1997)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For simplicity, we will assume that all transitions have one of the three forms:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "q, x a \u2212 \u2192 r, xy push y on top of x q, x a \u2212 \u2192 r, y replace x with y q, x a \u2212 \u2192 r, pop x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This also does not reduce the weak generative capacity of PDAs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Given an input string w \u2208 \u03a3 * of length n, a configuration is a triple (i, q, \u03b2), where i \u2208 [0, n] is an input position indicating that all symbols up to and including w i have been scanned, q \u2208 Q is a state, and \u03b2 \u2208 \u0393 * is the content of the stack (written bottom to top). For all i, q, r, \u03b2, x, y, we say that", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(i\u22121, q, \u03b2x) yields (i, r, \u03b2y) if \u03b4(q, x w i \u2212 \u2212 \u2192 r, y) > 0. A", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "run is a sequence of configurations starting with (0, q 0 , \u22a5) where each configuration (except the last) yields the next configuration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Because our model does not use the PDA to accept or reject strings, we omit the usual definitions for the language accepted by a PDA. This is also why our definition lacks accept states.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As an example, consider the following PDA, for the language {ww R | w \u2208 {0, 1} * }:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "M = (Q, \u03a3, \u0393, \u03b4, q 1 , \u22a5) Q = {q 1 , q 2 } \u03a3 = {0, 1} \u0393 = {0, 1, \u22a5} where \u03b4 contains the transitions q 1 , x a \u2212 \u2192 q 1 , xa x \u2208 \u0393, a \u2208 \u03a3 q 1 , a a \u2212 \u2192 q 2 , a \u2208 \u03a3 q 2 , a a \u2212 \u2192 q 2 , a \u2208 \u03a3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This PDA has a possible configuration with an empty stack (\u22a5) iff the input string read so far is of the form ww R .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
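The example PDA can be checked directly by tracking the full set of reachable configurations, which is what a nondeterministic stack must capture implicitly. The following sketch is not from the paper; it enumerates configurations explicitly (exponential in the worst case), which is exactly what Lang's algorithm, described next, avoids. The transition encoding is a straightforward transcription of the δ above.

```python
def step(configs, a):
    """configs: set of (state, stack) pairs; stack is a string, bottom first.
    Applies every transition of the example PDA that scans symbol a."""
    new_configs = set()
    for state, stack in configs:
        top = stack[-1]
        if state == 'q1':
            # q1, x --a--> q1, xa : push a on top of any x.
            new_configs.add(('q1', stack + a))
            # q1, a --a--> q2, eps : guess the midpoint and pop a matching a.
            if top == a:
                new_configs.add(('q2', stack[:-1]))
        elif state == 'q2':
            # q2, a --a--> q2, eps : keep popping matching symbols.
            if top == a:
                new_configs.add(('q2', stack[:-1]))
    return new_configs

configs = {('q1', 'B')}   # 'B' stands for the bottom symbol
for a in '0110':
    configs = step(configs, a)
# The input read so far has the form w w^R iff some run has an empty stack.
print(any(stack == 'B' for _, stack in configs))   # True for 0110
```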
|
{ |
|
"text": "To make a weighted PDA probabilistic, we require that all transition weights be nonnegative and, for all a, q, x:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "r\u2208Q y\u2208\u0393 * \u03b4(q, x a \u2212 \u2192 r, y) = 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Whereas many definitions make the model generate symbols (Abney et al., 1999) , our definition makes the PDA operations conditional on the input symbol a. The difference is not very important, because the RNN controller will eventually assume responsibility for reading and writing symbols, but our definition makes the shift to an RNN controller below slightly simpler. Lang (1974) gives an algorithm for simulating all runs of a nondeterministic PDA, related to Earley's algorithm (Earley, 1970) . At any point in time, there can be exponentially many possibilities for the contents of the stack. In spite of this, Lang's algorithm is able to represent the set of all possibilities using only quadratic space. As this set is regular, its representation can be thought of as a weighted finite automaton, which we call the stack WFA, similar to the graph-structured stack used in GLR parsing (Tomita, 1987) . Figure 1 depicts Lang's algorithm as a set of inference rules, similar to a deductive parser (Shieber et al., 1995; Goodman, 1999) , although the visual presentation is rather different. Each inference rule is drawn as a fragment of the stack WFA. If the transitions drawn with solid lines are present in the stack WFA, and the side conditions in the right column are met, then the transition drawn with a dashed line can be added to the stack WFA. The algorithm repeatedly applies inference rules to add states and transitions to the stack WFA; no states or transitions are ever deleted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 77, |
|
"text": "(Abney et al., 1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 371, |
|
"end": 382, |
|
"text": "Lang (1974)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 497, |
|
"text": "(Earley, 1970)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 906, |
|
"text": "(Tomita, 1987)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1002, |
|
"end": 1024, |
|
"text": "(Shieber et al., 1995;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1025, |
|
"end": 1039, |
|
"text": "Goodman, 1999)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 909, |
|
"end": 917, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Definition", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Each state of the stack WFA is of the form (i, q, x), where i is a position in the input string, q is a PDA state, and x is the top stack symbol. We briefly explain each of the inference rules:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Axiom creates an initial state and pushes \u22a5 onto the stack.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Push pushes a y on top of an x. Unlike Lang's original algorithm, this inference rule applies whether or not state ( j\u22121, q, x) is reachable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Replace pops a z and pushes a y, by backing up the z transition (without deleting it) and adding a new y transition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Pop pops a z, by backing up the z transition as well as the preceding y transition (without deleting them) and adding a new y transition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The set of accept states of the stack WFA changes from time step to time step; at step j, the accept states are {( j, q, x) | q \u2208 Q, x \u2208 \u0393}. The language recognized by the stack WFA at time j is the set of possible stack contents at time j.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "An example run of the algorithm is shown in Figure 2 , using our example PDA and the string 0110. At time step j = 3, the PDA reads 1 and either pushes a 1 (path ending in state (3, q 1 , 1)) or pops a 1 (path ending in state (3, q 2 , 0)). Similarly at time step j = 4, and the existence of a state with top stack symbol \u22a5 indicates that the string is of the form ww R .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 52, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The total running time of the algorithm is proportional to the number of ways that the inference rules can be instantiated. Since the Pop rule contains three string positions (i, j, and k), the time complexity is O(n 3 ). The total space requirement is characterized by the number of possible WFA transitions. Since transitions connect two states, each with a string position (i and j), the space complexity is O(n 2 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recognition", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "To implement this algorithm in a typical neuralnetwork framework, we reformulate it in terms of tensor operations. We use the assumption that all transitions are scanning, although it would be possible to extend the model to handle non-scanning transitions using matrix inversions (Stolcke, 1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 281, |
|
"end": 296, |
|
"text": "(Stolcke, 1995)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Define Act(\u0393) = \u2022\u0393\u222a\u0393\u222a{ } to be a set of possible stack actions: if y \u2208 \u0393, then \u2022y means \"push y,\" y means \"replace with y,\" and means \"pop.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Given an input string w, we pack the transition weights of the PDA into a tensor \u2206 with dimensions n \u00d7 |Q| \u00d7 |\u0393| \u00d7 |Q| \u00d7 |Act(\u0393)|:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "\u2206[ j][q, x \u2192 r, \u2022y] = \u03b4(q, x w j \u2212 \u2212 \u2192 r, xy) \u2206[ j][s, z \u2192 r, y] = \u03b4(s, z w j \u2212 \u2212 \u2192 r, y) \u2206[ j][s, z \u2192 r, ] = \u03b4(s, z w j \u2212 \u2212 \u2192 r, ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "( 1)We compute the transition weights of the stack WFA (except for the initial transition) as a tensor of inner weights \u03b3, with dimensions n \u00d7 n \u00d7 |Q| \u00d7 |\u0393| \u00d7 |Q| \u00d7 |\u0393|. Each element, which we write as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "\u03b3[i \u2212 \u2192 j][q, x \u2212 \u2192 r, y]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": ", is the weight of the stack WFA transition i, q, x j, r, y y", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The equations defining \u03b3 are shown in Figure 3 . Because these equations are a recurrence relation, we cannot compute \u03b3 all at once, but (for example) in order of increasing j.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 46, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Additionally, we compute a tensor \u03b1 of forward weights of the stack WFA. This tensor has dimensions n \u00d7 |Q| \u00d7 |\u0393|, and its elements are defined by the recurrence", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "\u03b1[1][r, y] = 1[r = q 0 \u2227 y = \u22a5] \u03b1[ j][r, y] = j\u22121 i=1 q,x \u03b1[i][q, x] \u03b3[i \u2212 \u2192 j][q, x \u2212 \u2192 r, y] (2 \u2264 j \u2264 n).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "The weight \u03b1[ j][r, y] is the total weight of reaching a configuration (r, j, \u03b2y) for any \u03b2 from the initial configuration, and we can use \u03b1 to compute the probability distribution over top stack symbols at time step j:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "\u03c4 ( j) (y) = r \u03b1[ j][r, y] y r \u03b1[ j][r, y ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inner and Forward Weights", |
|
"sec_num": "3.4" |
|
}, |
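The forward weights and the top-symbol distribution translate directly into tensor code. The sketch below is a plain-PyTorch transcription of the two formulas above in the real-weight semiring (the paper's implementation works in the log semiring to avoid underflow); the tensor layout and 0-based indexing are assumptions for exposition.

```python
import torch

def forward_weights(gamma, num_states, num_symbols, q0=0, bottom=0):
    """gamma: (n, n, Q, S, Q, S) tensor of inner weights, where
    gamma[i, j, q, x, r, y] is the weight of the stack WFA transition
    (i, q, x) -> (j, r, y).  Returns alpha with shape (n, Q, S)."""
    n = gamma.shape[0]
    alpha = torch.zeros(n, num_states, num_symbols)
    # Paper's alpha[1][r, y] = 1[r = q0 and y = bottom]; 0-indexed here.
    alpha[0, q0, bottom] = 1.0
    for j in range(1, n):
        # alpha[j][r, y] = sum_{i<j} sum_{q,x} alpha[i][q,x] * gamma[i->j][q,x -> r,y]
        alpha[j] = torch.einsum('iqx,iqxry->ry', alpha[:j], gamma[:j, j])
    return alpha

def top_symbol_distribution(alpha, j):
    """tau^(j)(y): marginalize over states r, then normalize over symbols y."""
    unnormalized = alpha[j].sum(dim=0)
    return unnormalized / unnormalized.sum()
```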
|
|
{ |
|
"text": "Now we couple the tensor formulation of Lang's algorithm for nondeterministic PDAs with an RNN controller.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Pushdown Automata", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The controller can be any type of RNN; in our experiments, we used a LSTM RNN. At each time step, it computes a hidden vector h ( j) with d dimensions from the previous hidden vector, an input vector x ( j) , and the distribution over current top stack symbols, \u03c4 ( j) , defined above:", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 206, |
|
"text": "( j)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Axiom 0, q 0 , \u22a5 \u22a5/1 Push j\u22121, q, x j, r, y y/p p = \u03b4(q, x w j \u2212 \u2212 \u2192 r, \u2022y) Replace i, q, x j\u22121, s, z j, r, y z/p 1 y/p 1 p p = \u03b4(s, z w j \u2212 \u2212 \u2192 r, y) Pop i, q, x k, t, y j\u22121, s, z j, r, y y/p 1 z/p 2 y/p 1 p 2 p p = \u03b4(s, z w j \u2212 \u2212 \u2192 r, )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "j = 0 0, q 1 , \u22a5 \u22a5 j = 1 0, q 1 , \u22a5 \u22a5 1, q 1 , 0 0 q 1 , \u22a5 0 \u2212 \u2192 q 1 , 0 j = 2 0, q 1 , \u22a5 \u22a5 1, q 1 , 0 0 2, q 1 , 1 1 q 1 , 0 1 \u2212 \u2192 q 1 , 1 j = 3 0, q 1 , \u22a5 \u22a5 1, q 1 , 0 0 2, q 1 , 1 1 3, q 1 , 1 1 3, q 2 , 0 0 q 1 , 1 1 \u2212 \u2192 q 1 , 1 q 1 , 1 1 \u2212 \u2192 q 2 , j = 4 0, q 1 , \u22a5 \u22a5 1, q 1 , 0 0 2, q 1 , 1 1 3, q 1 , 1 1 3, q 2 , 0 0 4, q 1 , 0 0 4, q 2 , \u22a5 \u22a5 q 1 , 1 0 \u2212 \u2192 q 1 , 0 q 2 , 0 0 \u2212 \u2192 q 2 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1 \u2264 i < j \u2264 n, \u03b3[i \u2212 \u2192 j][q, x \u2212 \u2192 r, y] = 1[i = j\u22121] \u2206[ j][q, x \u2192 r, \u2022y] Push + s,z \u03b3[i \u2212 \u2192 j\u22121][q, x \u2212 \u2192 s, z] \u2206[ j][s, z \u2192 r, y] Replace + j\u22122 k=i+1 t s,z \u03b3[i \u2212 \u2192 k][q, x \u2212 \u2192 t, y] \u03b3[k \u2212 \u2192 j\u22121][t, y \u2212 \u2192 s, z] \u2206[ j][s, z \u2192 r, ] Pop", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
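The three terms of this recurrence map naturally onto einsum contractions. The sketch below is an illustration under assumed tensor layouts, not the released code: it works in the real semiring rather than the log semiring, splits the controller's action tensor into separate push, replace, and pop parts, and fills column j of a zero-initialized inner-weight tensor with plain in-place additions (the paper notes that making such in-place updates compatible with autograd required a workaround).

```python
import torch

def update_gamma_column(gamma, delta_j, j):
    """Fill column j of the inner-weight tensor (0-indexed positions).
    gamma: (n+1, n+1, Q, S, Q, S); gamma[i, j, q, x, r, y] is the weight of
    the stack WFA transition (i, q, x) -> (j, r, y).  Column j starts at zero.
    delta_j: dict with 'push' (Q,S,Q,S), 'replace' (Q,S,Q,S) and 'pop' (Q,S,Q)
    holding the controller's action weights at step j."""
    # Push term: only from position j-1.
    gamma[j - 1, j] = delta_j['push']
    # Replace term: go through any transition ending at position j-1.
    gamma[: j - 1, j] += torch.einsum(
        'iqxsz,szry->iqxry', gamma[: j - 1, j - 1], delta_j['replace'])
    # Pop term: combine a transition into (k, t, y) with one from (k, t, y)
    # to (j-1, s, z), then remove the z that was pushed after position k.
    for k in range(1, j - 1):
        gamma[:k, j] += torch.einsum(
            'iqxty,tysz,szr->iqxry',
            gamma[:k, k], gamma[k, j - 1], delta_j['pop'])
    return gamma
```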
|
{ |
|
"text": "h ( j) = R h ( j\u22121) , x ( j) \u03c4 ( j)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where R can be any RNN unit. This state is used to compute an output vector y ( j) as usual:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "y ( j) = softmax Ah ( j) + b", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where A and b are parameters with dimensions |\u03a3| \u00d7 d and |\u03a3|, respectively. In addition, the state is used to compute a conditional distribution over actions, \u2206[ j]:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "z ( j) qxry = exp C qxry: h ( j) + D qxry \u2206[ j][q, x \u2192 r, y] = z ( j) qxry r ,y z ( j) qxr y", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where C and D are tensors of parameters with dimensions |Q| \u00d7 |\u0393| \u00d7 |Q| \u00d7 |Act(\u0393)| \u00d7 d and |Q|\u00d7|\u0393|\u00d7|Q|\u00d7|Act(\u0393)|, respectively. (This is just an affine transformation followed by a softmax over r and y.) These equations replace equations (1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "4.1" |
|
}, |
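These controller equations amount to an affine map from the hidden state followed by a softmax over the target state and action. A sketch under assumed shapes (unbatched hidden state, actions packed as push/replace/pop), not the released code:

```python
import torch
import torch.nn as nn

class NSRNNControllerHead(nn.Module):
    """Maps the controller's hidden state to the output distribution y^(j)
    and the action distribution Delta[j] (a softmax over r and the action
    for each current state q and top stack symbol x)."""

    def __init__(self, hidden_size, vocab_size, num_states, num_symbols):
        super().__init__()
        self.num_actions = 2 * num_symbols + 1        # push y, replace y, pop
        self.output_layer = nn.Linear(hidden_size, vocab_size)
        self.action_layer = nn.Linear(
            hidden_size, num_states * num_symbols * num_states * self.num_actions)
        self.Q, self.S = num_states, num_symbols

    def forward(self, h):
        y = torch.softmax(self.output_layer(h), dim=-1)          # y^(j)
        z = self.action_layer(h).view(self.Q, self.S, self.Q, self.num_actions)
        # Normalize jointly over the target state r and the action, per (q, x).
        delta = torch.softmax(z.reshape(self.Q, self.S, -1), dim=-1)
        delta = delta.view(self.Q, self.S, self.Q, self.num_actions)
        return y, delta
```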
|
{ |
|
"text": "We implemented the NS-RNN using PyTorch (Paszke et al., 2019) , and doing so efficiently required a few crucial tricks. The first was a workaround to update the \u03b3 and \u03b1 tensors in-place in a way that was compatible with PyTorch's automatic differentiation; this was necessary to achieve the theoretical quadratic space complexity. The second was an efficient implementation of a differentiable einsum operation 1 that supports the log semiring (as well as other semirings), which allowed us to implement the equations of Figure 3 in 1 https://github.com/bdusell/semiring-einsum a reasonably fast, memory-efficient way that avoids underflow. Our einsum implementation splits the operation into fixed-size blocks where the multiplication and summation of terms can be fully parallelized. This enforces a reasonable upper bound on memory usage while suffering only a slight decrease in speed compared to fully parallelizing the entire einsum operation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 61, |
|
"text": "(Paszke et al., 2019)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "4.2" |
|
}, |
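The log-semiring einsum mentioned here replaces multiplication of weights with addition of log-weights and summation with logsumexp. The snippet below is a minimal illustration of that trick for one of the contractions used above; it is not the semiring-einsum library itself, and unlike the blocked implementation it materializes the full expanded intermediate tensor.

```python
import torch

def log_einsum_replace(log_gamma_prev, log_delta_replace):
    """Log-semiring version of
        einsum('iqxsz,szry->iqxry', gamma_prev, delta_replace):
    products become sums of log-weights, and the sum over the contracted
    indices (s, z) becomes a logsumexp."""
    # Broadcast to shape (i, q, x, s, z, r, y), add, then reduce over (s, z).
    expanded = (log_gamma_prev[..., None, None]          # i q x s z 1 1
                + log_delta_replace[None, None, None])   # 1 1 1 s z r y
    return torch.logsumexp(expanded, dim=(3, 4))
```

Splitting the contracted dimensions into fixed-size blocks and reducing block by block keeps the expanded intermediate small, which is the memory/speed trade-off described in the paragraph above.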
|
{ |
|
"text": "In this section, we describe our experiments comparing our NS-RNN and three baseline language models on several formal languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Marked reversal The language of palindromes with an explicit middle marker, with strings of the form w#w R , where w \u2208 {0, 1} * . This task should be easily solvable by a model with a deterministic stack, as the model can push the string w to the stack, change states upon reading #, and predict w R by popping w from the stack in reverse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Unmarked reversal The language of (evenlength) palindromes without a middle marker, with strings of the form ww R , where w \u2208 {0, 1} * . When the length of w can vary, a language model reading the string from left to right must use nondeterminism to guess where the boundary between w and w R lies. At each position, it must either push the input symbol to the stack, or else guess that the middle point has been reached and start popping symbols from the stack. An optimal language model will interpolate among all possible split points to produce a final prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Padded reversal Like the unmarked reversal language, but with a long stretch of repeated symbols in the middle, with strings of the form wa p w R , where w \u2208 {0, 1} * , a \u2208 {0, 1}, and p \u2265 0. The purpose of the padding is to confuse a language model attempting to guess where the middle of the palindrome is based on the content of the string. In the general case of unmarked reversal, a language model can disregard split points where a valid palindrome does not occur locally. Since all substrings of a p are palindromes, the language model must deal with a larger number of candidates simultaneously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Dyck language The language D 2 of strings with two kinds of balanced brackets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Hardest CFL Designed by Greibach (1973) to be at least as difficult to parse as any other CFL:", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 39, |
|
"text": "Greibach (1973)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "L 0 = {x 1 ,y 1 ,z 1 ; \u2022 \u2022 \u2022 x n ,y n ,z n ; | n \u2265 0, y 1 \u2022 \u2022 \u2022 y n \u2208 $D 2 , x i , z i \u2208 {,, $, (, ), [, ]} * }.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Intuitively, L 0 contains strings formed by dividing a member of $D 2 into pieces (y i ) and interleaving them with \"decoy\" pieces (substrings of x i and z i ). While processing the string, the machine has to nondeterministically guess whether each piece is genuine or a decoy. Greibach shows that for any CFL L, there is a string homomorphism h such that a parser for L 0 can be run on h(w) to find a parse for w. See Appendix A for more information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For each task, we construct a probabilistic contextfree grammar (PCFG) for the language (see Appendix B for the full grammars and their parameters). We then randomly sample a training set of 10,000 examples from the PCFG, filtering samples so that the length of a string is in the interval [40, 80] (see Appendix C for our sampling method). The training set remains the same throughout the training process and is not re-sampled from epoch to epoch, since we want to test how well the model can infer the probability distribution from a finite sample.", |
|
"cite_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 294, |
|
"text": "[40,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 298, |
|
"text": "80]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We sample a validation set of 1,000 examples from the same distribution and a test set with string lengths varying from 40 to 100, with 100 examples per length. The validation set is randomized in each experiment, but for each task, the test set remains the same across all models and random restarts. For simplicity, we do not filter training samples from the validation or test sets, assuming that the chance of overlap is very small.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Since, in these languages, the next symbol cannot always be predicted deterministically from previous symbols, we do not use prediction accuracy as in previous work. Instead, we compute per-symbol cross-entropy on a set of strings S . Let p be any distribution over strings; then:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "H(S , p) = w\u2208S \u2212 log p(s) w\u2208S |w| .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.3" |
|
}, |
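A small helper making the evaluation metric concrete: per-symbol cross-entropy in nats, differenced against the source distribution. The function names and the `log_prob` callables are illustrative, not part of the paper's code.

```python
def per_symbol_cross_entropy(strings, log_prob):
    """H(S, p) = (sum_w -log p(w)) / (sum_w |w|), where log_prob(w) returns
    the natural-log probability of the whole string w under a model."""
    total_nll = sum(-log_prob(w) for w in strings)
    total_symbols = sum(len(w) for w in strings)
    return total_nll / total_symbols

def cross_entropy_difference(strings, model_log_prob, source_log_prob):
    # The quantity reported in the experiments: model minus source, which
    # approximates the KL divergence of the model from the true distribution.
    return (per_symbol_cross_entropy(strings, model_log_prob)
            - per_symbol_cross_entropy(strings, source_log_prob))
```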
|
{ |
|
"text": "We compute the cross-entropy for both the stack RNN and the distribution from which S is sampled and report the difference. This can be seen as an approximation of the KL divergence of the stack RNN from the true distribution. Technically, because the RNN models do not predict the end of the string, they estimate p(w | |w|), not p(w). However, they do not actually use any knowledge of the length, so it seems reasonable to compare the RNN's estimate of p(w | |w|) with the true p(w). (This is why, when we bin by length in Figure 5 , some of the differences are negative.)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 526, |
|
"end": 534, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "A benefit of using cross-entropy instead of prediction accuracy is that we can easily incorporate new tasks as long as they are expressed as a PCFG. We do not, for example, need to define a languagedependent subsequence of symbols to evaluate on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We compare our NS-RNN against three baselines: an LSTM, the Stack LSTM of Joulin and Mikolov (2015) (\"JM\"), and the Stack LSTM of Grefenstette et al. (2015) (\"Gref\"). We deviate slightly from the original definitions of these models in order to standardize the controller-stack interface to the one defined in Section 4.1, and to isolate the effects of differences in the stack data structure, rather than the controller mechanism. For all three stack models, we use an LSTM controller whose initial hidden state is fixed to 0, and we use only one stack for the JM and Gref models. (In early experiments, we found that using multiple stacks did not make a meaningful difference in performance.) For JM, we include a bias term in the layers that compute the stack actions and network output. We do allow the no-op operation, and the stack reading consists of only the top stack cell. For Gref, we set the controller output o t equal to the hidden state h t , so we compute the stack actions, pushed vector, and network output directly from the hidden state. We encode all input symbols as one-hot vectors; there are no embedding layers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "For all models, we use a single-layer LSTM with 20 hidden units. We selected this number because we found that an LSTM of this size could not completely solve the marked reversal task, indicating that the hidden state is a memory bottleneck. For each task, we perform a hyperparameter grid search for each model. We search for the initial learning rate, which has a large impact on performance, from the set {0.01, 0.005, 0.001, 0.0005}. For JM and Gref, we search for stack embedding sizes in {2, 20, 40}. We manually choose a small number of PDA states and stack symbol types for the NS-RNN for each task. For marked reversal, unmarked reversal, and Dyck, we use 2 states and 2 stack symbol types. For padded reversal, we use 3 states and 2 stack symbol types. For the hardest CFL, we use 3 states and 3 stack symbol types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameters", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "As noted by Grefenstette et al. (2015) , initialization can play a large role in whether a Stack LSTM converges on algorithmic behavior or becomes trapped in a local optimum. To mitigate this, for each hyperparameter setting in the grid search, we run five random restarts and select the hyperparameter setting with the lowest average difference in cross entropy on the validation set. This gives us a picture not only of the model's performance, but of its rate of success. We initialize all fullyconnected layers except for the recurrent LSTM layer with Xavier uniform initialization (Glorot and Bengio, 2010) , and all other parameters uniformly from [\u22120.1, 0.1].", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 38, |
|
"text": "Grefenstette et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 586, |
|
"end": 611, |
|
"text": "(Glorot and Bengio, 2010)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameters", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "We train all models with Adam (Kingma and Ba, 2015) and clip gradients whose magnitude is above 5. We use mini-batches of size 10; to generate a batch, we first select a length and then sample 10 strings of that length. We train models until convergence, multiplying the learning rate by 0.9 after 5 epochs of no improvement in cross-entropy on the validation set, and stopping after 10 epochs of no improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hyperparameters", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "We show plots of the difference in cross entropy on the validation set between each model and the source distribution in Figure 4 . For all tasks, stackbased models outperform the LSTM baseline, indicating that the tasks are effective benchmarks for differentiable stacks. For the marked reversal, unmarked reversal, and hardest CFL tasks, our model consistently achieves cross-entropy closer to the source distribution than any other model. Even for the marked reversal task, which can be solved deterministically, the NS-RNN, besides achieving lower cross-entropy on average, learns to solve the task in fewer updates and with much higher reliability across random restarts. In the case of the mildly nondeterministic unmarked reversal and highly nondeterministic hardest CFL tasks, the NS-RNN converges on the lowest validation crossentropy. On the Dyck language, which is a deterministic task, all stack models converge quickly on the source distribution. We hypothesize that this is because the Dyck language represents a case where stack usage is locally advantageous everywhere, so it is particularly conducive for learning stack-like behavior. On the other hand, we note that our model struggles on padded reversal, in which stack-friendly signals are intentionally made very distant. Although the NS-RNN outperforms the LSTM baseline, the JM model solves the task most effectively, though still imperfectly.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 129, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In order to show how each model performs when evaluated on strings longer than those seen during training, in Figure 5 , we show cross-entropy on separately sampled test data as a function of string length. All test sets are identical across models and random restarts, and there are 100 samples per length. The NS-RNN consistently does well on string lengths it was trained on, but it is sometimes surpassed by other stack models on strings that are outside the distribution of lengths it was trained on. This suggests that the NS-RNN conforms more tightly to the real distribution seen during training.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 118, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We presented the NS-RNN, a neural language model with a differentiable stack that explicitly models nondeterminism. We showed that it offers improved trainability and modeling power over previous stack-based neural language models; the NS-RNN learns to solve some deterministic tasks more effectively than other stack-LSTMs, and achieves the best results on a challenging nondeterministic context-free language. However, we note that the NS-RNN struggled on a task where signals in the data were distant, and did not generalize to longer lengths as well as other stack-LSTMs; we hope to address these shortcomings in future work. We believe that the NS-RNN will prove to be a powerful tool for learning and modeling ambiguous syntax in natural language. Figure 5 : Cross-entropy difference in nats on the test set, binned by string length. Some models achieve a negative difference, for reasons explained in \u00a75.3. Each line is the average of the same five random restarts shown in Figure 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 754, |
|
"end": 762, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 981, |
|
"end": 989, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported in part by a Google Faculty Research Award. We would like to thank Justin DeBenedetto and Darcey Riley for their helpful comments, and the Center for Research Computing at the University of Notre Dame for providing the computing infrastructure for our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A The Hardest CFL Greibach (1973) describes a CFL, L 0 , which is the \"hardest\" CFL in the sense that an efficient parser for L 0 is also an efficient parser for any other CFL L. It is defined as follows. (We deviate from Greibach's original notation for the sake of clarity.) Every string in L 0 is of the following form:that is, a sequence of strings \u03b1 i , each terminated by ;. No \u03b1 i can contain ;. Each \u03b1 i , in turn, is divided into three parts, separated by commas:The middle part, y i , is a substring of a string in D 2 . The brackets in y i do not need to be balanced, but all of the y i 's concatenated must form a string in D 2 , prefixed by $. The catch is that x i and z i can be any sequence of bracket, comma, and $ symbols, so it is impossible to tell, in a single \u03b1 i , where y i begins and ends. A parser must nondeterministically guess where each y i is, and cannot verify a guess until the end of the string is reached.The design of L 0 is justified as follows. Suppose we have a parser for L 0 which, as part of its output, identifies the start and end of each y i . Given a CFG G in Greibach normal form (GNF), we can adapt the parser for L 0 to parse L(G) by constructing a string homomorphism h, such that w \u2208 L(G) iff h(w) \u2208 L 0 , and the concatenated y i 's encode a leftmost derivation of w under G.The homomorphism h always exists and can be constructed from G as follows. Let the nonterminals of G be V = {A 1 , . . . , A |V| }. Recall that in GNF, every rule is of the form A i \u2192 aA j 1 \u2022 \u2022 \u2022 A j m and S does not appear on any right-hand side. DefineWe encode each rule of G asFinally, we can define h aswhere , concatenates strings together delimited by commas. Then there is a valid string of y i 's iff there is a valid derivation of w with respect to G.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 33, |
|
"text": "Greibach (1973)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We list here the production rules and weights for the PCFG used for each of our tasks. Let f (\u00b5) = 1\u2212 1 \u00b5+1 , which is the probability of failure associated with a negative binomial distribution with a mean of \u00b5 failures before one success. For a recursive PCFG rule, a probability of f (\u00b5) results in an average of \u00b5 applications of the recursive rule.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B PCFGs for Generating Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We set \u00b5 = 60.We set \u00b5 = 60.Let \u00b5 c be the mean length of the reversed content, and let \u00b5 p be the mean padding length. We set \u00b5 c = 60 and \u00b5 p = 30.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.1 Marked reversal", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Let \u00b5 s be the mean number of splits, and let \u00b5 n be the mean nesting depth. We set \u00b5 s = 1 and \u00b5 n = 40.Let \u00b5 c be the mean number of commas, \u00b5 s f be the mean short filler length, \u00b5 l f be the mean long filler length, p s be the probability of a semicolon, \u00b5 s be the mean number of bracket splits, and \u00b5 n be the mean bracket nesting depth. We set \u00b5 c = 0.5, \u00b5 s f = 0.5, \u00b5 l f = 2, p s = 0.25, \u00b5 s = 1.5, and \u00b5 n = 3.C Sampling Strings with Fixed Length from a PCFG For practical reasons, we restrict strings we sample from PCFGs to those whose lengths lie within a certain interval, say [ min , max ]. The lengths of strings sampled randomly from PCFGs tend to have high variance, and we often want data sets to consist of strings of a certain length (e.g. longer strings in the test set than in the training set).To do this, we first sample a length uniformly from [ min , max ]. Then we use an efficient dynamic programming algorithm to sample strings directly from the distribution of strings in the PCFG with length . This algorithm is adapted from an algorithm presented by Aguinaga et al. (2019) for sampling graphs of a specific size from a hyperedge replacement grammar.The algorithm operates in two phases. The first (Algorithm 1) computes a table T such that every entry T [A, ] contains the total probability of sampling a string from the PCFG with length . The second (Algorithm 2) uses T to randomly sample a string from the PCFG (using S as the nonterminal parameter X), restricted to those with a length of exactly .Let nonterminals(\u03b2) be an ordered sequence consisting of the nonterminals in \u03b2. Let Compositions( , n) be a function that returns a (possibly empty) list of all compositions of that are of length n (that is, all ordered sequences of n positive integers that add up to ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 1084, |
|
"end": 1106, |
|
"text": "Aguinaga et al. (2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.4 Dyck language", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Require: G has no -rules or unary rules 1: function ComputeWeights(G, T, X, )2:for C in Compositions( , |N|) doreturn t 8: function ComputeTable(G, n)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 1 Computing the probability table T", |
|
"sec_num": null |
|
}, |
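The following is a hedged sketch of the two-phase procedure described in this appendix, for a PCFG given as a dictionary from nonterminals to (right-hand side, probability) pairs with no ε-rules or unary rules. The dynamic program computes, for every nonterminal A and length ℓ, the total probability of deriving a string of length ℓ; sampling then proceeds top-down, choosing rules and length splits in proportion to those weights. Function names and data structures here are assumptions for illustration, not the paper's Algorithms 1–2.

```python
import random
from collections import defaultdict

def compositions(total, n):
    """All ordered tuples of n positive integers summing to total."""
    if n == 0:
        return [()] if total == 0 else []
    return [(k,) + rest
            for k in range(1, total - n + 2)
            for rest in compositions(total - k, n - 1)]

def compute_table(rules, max_len):
    """rules: dict nonterminal -> list of (rhs, prob); rhs is a tuple of symbols,
    and terminals are the symbols that never appear as keys.
    Returns T with T[A, l] = total probability that A derives a string of length l."""
    T = defaultdict(float)
    for length in range(1, max_len + 1):
        for A, productions in rules.items():
            for rhs, prob in productions:
                num_terminals = sum(1 for s in rhs if s not in rules)
                nonterminals = [s for s in rhs if s in rules]
                # Distribute the remaining length among this rule's nonterminals.
                for split in compositions(length - num_terminals, len(nonterminals)):
                    w = prob
                    for B, l in zip(nonterminals, split):
                        w *= T[B, l]
                    T[A, length] += w
    return T

def sample(rules, T, A, length):
    """Sample a string of exactly `length` from nonterminal A (assumes T[A, length] > 0)."""
    choices, weights = [], []
    for rhs, prob in rules[A]:
        num_terminals = sum(1 for s in rhs if s not in rules)
        nonterminals = [s for s in rhs if s in rules]
        for split in compositions(length - num_terminals, len(nonterminals)):
            w = prob
            for B, l in zip(nonterminals, split):
                w *= T[B, l]
            if w > 0:
                choices.append((rhs, split))
                weights.append(w)
    rhs, split = random.choices(choices, weights=weights)[0]
    out, it = [], iter(split)
    for s in rhs:
        out.append(sample(rules, T, s, next(it)) if s in rules else s)
    return ''.join(out)
```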
|
{ |
|
"text": "for from 1 to n do 10: for all nonterminals X do 11:Because this algorithm only works on PCFGs that are free of -rules and unary rules, we automatically refactor our PCFGs to remove them before providing them to the algorithm.Some of our PCFGs do not generate any strings for certain lengths, which is detected at line 3 of Algorithm 2. In this case, we restart the sampling procedure from the beginning. This means that the distribution we are effectively sampling from is as follows. Let G(w) be the probability of w under PCFG G, and let G( ) be the probability of all strings of length , that is,Algorithm 2 Sampling a string using T Require: T is the output of ComputeTable (G, ) 1:for j from 1 to |\u03b2| do 9:if \u03b2 j is a terminal then 10:append \u03b2 j to s 11:append s to s 14:return sThen the distribution we are sampling from is.When computing the lower-bound cross-entropy of the validation and test sets, we must compute p sample (w) for each string w. Finding G(w) requires re-parsing w with respect to G and summing the probabilities of all valid parses using the Inside algorithm. We can look up the value of G(|w|) in the table entry T [S , |w|] produced in the sampling algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 679, |
|
"end": 684, |
|
"text": "(G, )", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "9:", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Relating probabilistic grammars and automata", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcallester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "542--549", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1034678.1034759" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Abney, David McAllester, and Fernando Pereira. 1999. Relating probabilistic grammars and automata. In Proc. ACL, pages 542-549.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning hyperedge replacement grammars for graph generation", |
|
"authors": [ |
|
{ |
|
"first": "Salvador", |
|
"middle": [], |
|
"last": "Aguinaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Weninger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "IEEE Trans. Pattern Analysis and Machine Intelligence", |
|
"volume": "41", |
|
"issue": "3", |
|
"pages": "625--638", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/TPAMI.2018.2810877" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Salvador Aguinaga, David Chiang, and Tim Weninger. 2019. Learning hyperedge replacement grammars for graph generation. IEEE Trans. Pattern Analysis and Machine Intelligence, 41(3):625-638.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Context-free languages and pushdown automata", |
|
"authors": [ |
|
{ |
|
"first": "Jean-Michel", |
|
"middle": [], |
|
"last": "Autebert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Berstel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luc", |
|
"middle": [], |
|
"last": "Boasson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Grzegorz Rozenberg and Arto Salomaa", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--174", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-642-59136-5_3" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Michel Autebert, Jean Berstel, and Luc Boasson. 1997. Context-free languages and pushdown au- tomata. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, pages 111- 174. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An efficient context-free parsing algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Jay", |
|
"middle": [], |
|
"last": "Earley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "Comm. ACM", |
|
"volume": "13", |
|
"issue": "2", |
|
"pages": "94--102", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/362007.362035" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jay Earley. 1970. An efficient context-free parsing al- gorithm. Comm. ACM, 13(2):94-102.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Neural language models as psycholinguistic subjects: Representations of syntactic state", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Futrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Morita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proc. NAACL HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--42", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1004" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic sub- jects: Representations of syntactic state. In Proc. NAACL HLT, pages 32-42.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Understanding the difficulty of training deep feedforward neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Glorot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proc. AISTATS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. In Proc. AISTATS, pages 249-256.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semiring parsing", |
|
"authors": [ |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Computational Linguistics", |
|
"volume": "25", |
|
"issue": "4", |
|
"pages": "573--605", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joshua Goodman. 1999. Semiring parsing. Computa- tional Linguistics, 25(4):573-605.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning to transduce with unbounded memory", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Moritz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mustafa", |
|
"middle": [], |
|
"last": "Suleyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. NeurIPS", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1828--1836", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Proc. NeurIPS, volume 2, pages 1828-1836.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The hardest context-free language", |
|
"authors": [ |
|
{ |
|
"first": "Sheila", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Greibach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "SIAM J. Comput", |
|
"volume": "2", |
|
"issue": "4", |
|
"pages": "304--310", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1137/0202025" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sheila A. Greibach. 1973. The hardest context-free lan- guage. SIAM J. Comput., 2(4):304-310.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Context-free transductions with neural stacks", |
|
"authors": [ |
|
{ |
|
"first": "Yiding", |
|
"middle": [], |
|
"last": "Hao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Merrill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Angluin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Amsel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Benz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Mendelsohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proc. BlackboxNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "306--315", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W18-5433" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, and Simon Mendelsohn. 2018. Context-free transductions with neural stacks. In Proc. BlackboxNLP, pages 306- 315.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A systematic assessment of syntactic generalization in neural language models", |
|
"authors": [ |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [], |
|
"last": "Gauthier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proc. ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1725--1744", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proc. ACL, pages 1725-1744.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Inferring algorithmic patterns with stack-augmented recurrent nets", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. NeurIPS", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "190--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proc. NeurIPS, volume 1, pages 190-198.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{

"first": "Diederik",

"middle": [

"P."

],

"last": "Kingma",

"suffix": ""

},

{

"first": "Jimmy",

"middle": [

"Lei"

],

"last": "Ba",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "Proc. ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proc. ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Deterministic techniques for efficient non-deterministic parsers", |
|
"authors": [ |
|
{ |
|
"first": "Bernard", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "Proc. Colloquium on Automata, Languages, and Programming", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "255--269", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-662-21545-6_18" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernard Lang. 1974. Deterministic techniques for ef- ficient non-deterministic parsers. In Proc. Collo- quium on Automata, Languages, and Programming, pages 255-269.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Expectation-based syntactic comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Cognition", |
|
"volume": "106", |
|
"issue": "", |
|
"pages": "1126--77", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.cognition.2007.05.006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger Levy. 2008. Expectation-based syntactic com- prehension. Cognition, 106:1126-77.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequenceto-sequence networks", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Trans. ACL", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "125--140", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00304" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard McCoy, Robert H. Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence- to-sequence networks. Trans. ACL, 8:125-140.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "PyTorch: An imperative style, high-performance deep learning library", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Massa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Killeen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Gimelshein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Kopf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proc. NeurIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8024--8035", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learn- ing library. In Proc. NeurIPS, pages 8024-8035.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Quantity doesn't buy quality syntax with neural language models", |
|
"authors": [ |
|
{

"first": "Marten",

"middle": [],

"last": "van Schijndel",

"suffix": ""

},

{

"first": "Aaron",

"middle": [],

"last": "Mueller",

"suffix": ""

},

{

"first": "Tal",

"middle": [],

"last": "Linzen",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proc. EMNLP-IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5831--5837", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1592" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In Proc. EMNLP-IJCNLP, pages 5831-5837.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Principles and implementation of deductive parsing", |
|
"authors": [ |
|
{

"first": "Stuart",

"middle": [

"M."

],

"last": "Shieber",

"suffix": ""

},

{

"first": "Yves",

"middle": [],

"last": "Schabes",

"suffix": ""

},

{

"first": "Fernando",

"middle": [

"C. N."

],

"last": "Pereira",

"suffix": ""

}
|
], |
|
"year": 1995, |
|
"venue": "Journal of Logic Programming", |
|
"volume": "24", |
|
"issue": "1", |
|
"pages": "3--36", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/0743-1066(95)00035-I" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24(1):3-36.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "An efficient probabilistic context-free parsing algorithm that computes prefix probabilities", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Computational Linguistics", |
|
"volume": "21", |
|
"issue": "2", |
|
"pages": "165--201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke. 1995. An efficient probabilis- tic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165-201.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The neural network pushdown automaton: Model, stack, and learning simulations", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lee" |
|
], |
|
"last": "Giles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Z. Sun, C. Lee Giles, H. H. Chen, and Y. C. Lee. 1995. The neural network pushdown automaton: Model, stack, and learning simulations. Technical Report UMIACS-TR-93-77 and CS-TR-3118, Uni- versity of Maryland. Revised version.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Memory-augmented recurrent neural networks can learn generalized Dyck languages", |
|
"authors": [ |
|
{ |
|
"first": "Mirac", |
|
"middle": [], |
|
"last": "Suzgun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Shieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1922.03329" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M. Shieber. 2019. Memory-augmented recurrent neural networks can learn generalized Dyck languages. arXiv:1922.03329.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "An efficient augmented contextfree parsing algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Masaru", |
|
"middle": [], |
|
"last": "Tomita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Computational Linguistics", |
|
"volume": "13", |
|
"issue": "1-2", |
|
"pages": "31--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Masaru Tomita. 1987. An efficient augmented context- free parsing algorithm. Computational Linguistics, 13(1-2):31-46.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Hierarchical representation in neural language models: Suppression and recovery of expectations", |
|
"authors": [ |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Futrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proc. BlackboxNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--190", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4819" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ethan Wilcox, Roger Levy, and Richard Futrell. 2019. Hierarchical representation in neural language mod- els: Suppression and recovery of expectations. In Proc. BlackboxNLP, pages 181-190.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Memory architectures in recurrent neural network language models", |
|
"authors": [ |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yishu", |
|
"middle": [], |
|
"last": "Miao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proc. ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dani Yogatama, Yishu Miao, G\u00e1bor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, and Phil Blunsom. 2018. Memory architectures in recurrent neural net- work language models. In Proc. ICLR.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Lang's algorithm drawn as operations on the stack WFA. Solid edges indicate existing transitions; dashed edges indicate transitions that are added as a result of the stack operation.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Run of Lang's algorithm on our example PDA and the string 0110. The PDA transitions used are shown at right.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "Equations for computing inner weights.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "Cross-entropy difference in nats between model and source distribution on validation set, as a function of training time. Lines are averages of five random restarts, and shaded regions are standard deviations. After a random restart converges, the value of its last epoch is used in the average for later epochs.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
} |
|
} |
|
} |
|
} |