{
"paper_id": "C16-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:04:19.806808Z"
},
"title": "An Empirical Exploration of Skip Connections for Sequential Tagging",
"authors": [
{
"first": "Huijia",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "",
"location": {
"country": "CAS"
}
},
"email": "[email protected]"
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "",
"location": {
"country": "CAS"
}
},
"email": "[email protected]"
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "",
"location": {
"country": "CAS"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we empirically explore the effects of various kinds of skip connections in stacked bidirectional LSTMs for sequential tagging. We investigate three kinds of skip connections connecting to LSTM cells: (a) skip connections to the gates, (b) skip connections to the internal states and (c) skip connections to the cell outputs. We present comprehensive experiments showing that skip connections to cell outputs outperform the remaining two. Furthermore, we observe that using gated identity functions as skip mappings works pretty well. Based on this novel skip connections, we successfully train deep stacked bidirectional LSTM models and obtain state-ofthe-art results on CCG supertagging and comparable results on POS tagging.",
"pdf_parse": {
"paper_id": "C16-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we empirically explore the effects of various kinds of skip connections in stacked bidirectional LSTMs for sequential tagging. We investigate three kinds of skip connections connecting to LSTM cells: (a) skip connections to the gates, (b) skip connections to the internal states and (c) skip connections to the cell outputs. We present comprehensive experiments showing that skip connections to cell outputs outperform the remaining two. Furthermore, we observe that using gated identity functions as skip mappings works pretty well. Based on this novel skip connections, we successfully train deep stacked bidirectional LSTM models and obtain state-ofthe-art results on CCG supertagging and comparable results on POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In natural language processing, sequential tagging mainly refers to the tasks of assigning discrete labels to each token in a sequence. Typical examples include part-of-speech (POS) tagging and combinatory category grammar (CCG) supertagging. A regular feature of sequential tagging is that the input tokens in a sequence cannot be assumed to be independent since the same token in different contexts can be assigned to different tags. Therefore, the classifier should have memories to remember the contexts to make a correct prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bidirectional LSTMs (Graves and Schmidhuber, 2005) become dominant in sequential tagging problems due to the superior performance (Wang et al., 2015; Vaswani et al., 2016; Lample et al., 2016) . The horizontal hierarchy of LSTMs with bidirectional processing can remember the long-range dependencies without affecting the short-term storage. Although the models have a deep horizontal hierarchy (the depth is the same as the sequence length), the vertical hierarchy is often shallow, which may not be efficient at representing each token. Stacked LSTMs are deep in both directions, but become harder to train due to the feed-forward structure of stacked layers.",
"cite_spans": [
{
"start": 20,
"end": 50,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF6"
},
{
"start": 130,
"end": 149,
"text": "(Wang et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 150,
"end": 171,
"text": "Vaswani et al., 2016;",
"ref_id": "BIBREF32"
},
{
"start": 172,
"end": 192,
"text": "Lample et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Skip connections (or shortcut connections) enable unimpeded information flow by adding direct connections across different layers (Raiko et al., 2012; Graves, 2013; Hermans and Schrauwen, 2013) . However, there is a lack of exploration and analyzing various kinds of skip connections in stacked LSTMs. There are two issues to handle skip connections in stacked LSTMs: One is where to add the skip connections, the other is what kind of skip connections should be used to pass the information. To answer the first question, we empirically analyze three positions of LSTM blocks to receive the previous layer's output. For the second one, we present an identity mapping to receive the previous layer's outputs. Furthermore, following the gate design of LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) and highway networks (Srivastava et al., 2015a; Srivastava et al., 2015b) , we observe that adding a multiplicative gate to the identity function will help to improve performance.",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "(Raiko et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 151,
"end": 164,
"text": "Graves, 2013;",
"ref_id": "BIBREF7"
},
{
"start": 165,
"end": 193,
"text": "Hermans and Schrauwen, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 756,
"end": 790,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF10"
},
{
"start": 791,
"end": 809,
"text": "Gers et al., 2000)",
"ref_id": "BIBREF4"
},
{
"start": 831,
"end": 857,
"text": "(Srivastava et al., 2015a;",
"ref_id": "BIBREF27"
},
{
"start": 858,
"end": 883,
"text": "Srivastava et al., 2015b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a neural architecture for sequential tagging. The input of the network are token representations. We concatenate word embeddings to character embeddings to represent the word and morphemes. A deep stacked bidirectional LSTM with well-designed skip connections is then used to extract the features needed for classification from the inputs. The output layer uses softmax function to output the tag distribution for each token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contribution is that we empirically evaluated the effects of various kinds of skip connections within stacked LSTMs. We present comprehensive experiments on the supertagging task showing that skip connections to the cell outputs using identity function multiplied with an exclusive gate can help to improve the network performance. Our model is evaluated on two sequential tagging tasks, obtaining state-of-the-art results on CCG supertagging and comparable results on POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Skip connections have been widely used for training deep neural networks. For recurrent neural networks, Schmidhuber (1992) ; El Hihi and Bengio (1995) introduced deep RNNs by stacking hidden layers on top of each other. Raiko et al. (2012) ; Graves (2013) ; Hermans and Schrauwen (2013) proposed the use of skip connections in stacked RNNs. However, the researchers have paid less attention to the analyzing of various kinds of skip connections, which is our focus in this paper.",
"cite_spans": [
{
"start": 105,
"end": 123,
"text": "Schmidhuber (1992)",
"ref_id": "BIBREF23"
},
{
"start": 129,
"end": 151,
"text": "Hihi and Bengio (1995)",
"ref_id": "BIBREF2"
},
{
"start": 221,
"end": 240,
"text": "Raiko et al. (2012)",
"ref_id": "BIBREF21"
},
{
"start": 243,
"end": 256,
"text": "Graves (2013)",
"ref_id": "BIBREF7"
},
{
"start": 259,
"end": 287,
"text": "Hermans and Schrauwen (2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The works closely related to ours are Srivastava et al. (2015b) , , Kalchbrenner et al. (2015) , , Zilly et al. (2016) . These works are all based on the design of extra connections between different layers. Srivastava et al. (2015b) and mainly focus on feed-forward neural network, using well-designed skip connections across different layers to make the information pass more easily. The Grid LSTM proposed by Kalchbrenner et al. (2015) extends the one dimensional LSTMs to many dimensional LSTMs, which provides a more general framework to construct deep LSTMs. and propose highway LSTMs by introducing gated direct connections between internal states in adjacent layers and do not use skip connections, while we propose gated skip connections across cell outputs. Zilly et al. 2016introduce recurrent highway networks (RHN) which use a single recurrent layer to make RNN deep in a vertical direction, in contrast to our stacked models.",
"cite_spans": [
{
"start": 38,
"end": 63,
"text": "Srivastava et al. (2015b)",
"ref_id": null
},
{
"start": 68,
"end": 94,
"text": "Kalchbrenner et al. (2015)",
"ref_id": "BIBREF12"
},
{
"start": 99,
"end": 118,
"text": "Zilly et al. (2016)",
"ref_id": null
},
{
"start": 208,
"end": 233,
"text": "Srivastava et al. (2015b)",
"ref_id": null
},
{
"start": 412,
"end": 438,
"text": "Kalchbrenner et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Consider a recurrent neural network applied to sequential tagging: Given a sequence x = (x 1 , . . . , x T ), the RNN computes the hidden state h = (h 1 , . . . , h T ) and the output y = (y 1 , . . . , y T ) by iterating the following equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h t = f (x t , h t\u22121 ; \u03b8 h ) (1) y t = g(h t ; \u03b8 o )",
"eq_num": "(2)"
}
],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "where t \u2208 {1, . . . , T } represents the time. x t represents the input at time t, h t\u22121 and h t are the previous and the current hidden state, respectively. f and g are the transition function and the output function, respectively. \u03b8 h and \u03b8 o are network parameters. We use a negative log-likelihood cost to evaluate the performance, which can be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C = \u2212 1 N N n=1 log y t n",
"eq_num": "(3)"
}
],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "where t n \u2208 N is the true target for sample n, and y t n is the t-th output in the softmax layer given the inputs x n . The core idea of Long Short-Term Memory networks is to replace (1) with the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = f (x t , h t\u22121 ) + c t\u22121",
"eq_num": "(4)"
}
],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "where c t is the internal state of the memory cell, which is designed to store the information for much longer time. Besides this, LSTM uses gates to avoid weight update conflicts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "Standard LSTMs process sequences in temporal order, which will ignore future context. Bidirectional LSTMs solve this problem by combining both the forward and the backward processing of the input sequences using two separate recurrent hidden layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 h t = LSTM( \u2212 \u2192 x t , \u2212 \u2212 \u2192 h t\u22121 , \u2212 \u2212 \u2192 c t\u22121 ) (5) \u2190 \u2212 h t = LSTM( \u2190 \u2212 x t , \u2190 \u2212 \u2212 h t\u22121 , \u2190 \u2212 \u2212 c t\u22121 ) (6) y t = g( \u2212 \u2192 h t , \u2190 \u2212 h t )",
"eq_num": "(7)"
}
],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "where LSTM(\u2022) is the LSTM computation. \u2212 \u2192 x t and \u2190 \u2212 x t are the forward and the backward input sequence, respectively. The output of the two hidden layers \u2212 \u2192 h t and \u2190 \u2212 h t in a birectional LSTM are connected to the output layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "Stacked RNN is one type of deep RNNs, which refers to the hidden layers are stacked on top of each other, each feeding up to the layer above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h l t = f l (h l\u22121 t , h l t\u22121 )",
"eq_num": "(8)"
}
],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "where h l t is the t-th hidden state of the l-th layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks for Sequential Tagging",
"sec_num": "3"
},
{
"text": "Skip connections in simple RNNs are trivial since there is only one position to connect to the hidden units. But for stacked LSTMs, the skip connections need to be carefully treated to train the network successfully.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "In this section, we analyze and compare various types of skip connections. At first, we give a detailed definition of stacked LSTMs, which can help us to describe skip connections. Then we start our construction of skip connections in stacked LSTMs. At last, we formulate various kinds of skip connections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Stacked LSTMs without skip connections can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf8eb \uf8ec \uf8ec \uf8ed i l t f l t o l t s l t \uf8f6 \uf8f7 \uf8f7 \uf8f8 = \uf8eb \uf8ec \uf8ec \uf8ed sigm sigm sigm tanh \uf8f6 \uf8f7 \uf8f7 \uf8f8 W l h l\u22121 t h l t\u22121 c l t = f l t c l t\u22121 + i l t s l t h l t = o l t tanh(c l t )",
"eq_num": "(9)"
}
],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "During forward pass, LSTM needs to calculate c l t and h l t , which is the cell's internal state and the cell outputs state, respectively. To get c l t , s l t needs to be computed to store the current input. Then this result is multiplied by the input gate i l t , which decides when to keep or override information in memory cell c l t . The cell is designed to store the previous information c l t\u22121 , which can be reset by a forget gate f l t . The new cell state is then obtained by adding the result to the current input. The cell outputs h l t are computed by multiplying the activated cell state by the output gate o l t , which learns when to access memory cell and when to block it. \"sigm\" and \"tanh\" are the sigmoid and tanh activation function, respectively. W l \u2208 R 4n\u00d72n is the weight matrix needs to be learned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
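To make Eq. (9) concrete, the following is a minimal NumPy sketch of a single stacked-LSTM step without skip connections. It is our illustration rather than the authors' code; the function name `lstm_step`, the omission of bias terms (as in Eq. (9)), and the toy sizes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(W, h_below, h_prev, c_prev):
    """One step of layer l in Eq. (9).

    W       : (4n, 2n) weight matrix for the gates and the block input
    h_below : (n,) output of layer l-1 at time t   (h^{l-1}_t)
    h_prev  : (n,) output of layer l at time t-1   (h^l_{t-1})
    c_prev  : (n,) internal state of layer l at t-1 (c^l_{t-1})
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([h_below, h_prev])   # (4n,) pre-activations
    i = sigmoid(z[0*n:1*n])                     # input gate  i^l_t
    f = sigmoid(z[1*n:2*n])                     # forget gate f^l_t
    o = sigmoid(z[2*n:3*n])                     # output gate o^l_t
    s = np.tanh(z[3*n:4*n])                     # block input s^l_t
    c = f * c_prev + i * s                      # internal state c^l_t
    h = o * np.tanh(c)                          # cell output h^l_t
    return h, c

# toy usage
n = 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * n, 2 * n))
h, c = lstm_step(W, rng.normal(size=n), np.zeros(n), np.zeros(n))
```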
{
"text": "The hidden units in stacked LSTMs have two forms. One is the hidden units in the same layer {h l t , t \u2208 1, . . . , T }, which are connected through an LSTM. The other is the hidden units at the same time step {h l t , l \u2208 1, . . . , L}, which are connected through a feed-forward network. LSTM can keep the short-term memory for a long time, thus the error signals can be easily passed through {1, . . . , T }. However, when the number of stacked layers is large, the feed-forward network will suffer the gradient vanishing/exploding problems, which make the gradients hard to pass through {1, . . . , L}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "The core idea of LSTM is to use an identity function to make the constant error carrosel. also use an identity mapping to train a very deep convolution neural network with improved performance. All these inspired us to use an identity function for the skip connections. Rather, the gates of LSTM are essential parts to avoid weight update conflicts, which are also invoked by skip connections. Following highway gating, we use a gate multiplied with identity mapping to avoid the conflicts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Skip connections are cross-layer connections, which means that the output of layer l\u22122 is not only connected to the layer l\u22121, but also connected to layer l. For stacked LSTMs, h l\u22122 t can be connected to the gates, the internal states, and the cell outputs in layer l's LSTM blocks. We formalize these below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Skip connections to the gates. We can connect h l\u22122 t to the gates through an identity mapping:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "\uf8eb \uf8ec \uf8ec \uf8ed i l t f l t o l t s l t \uf8f6 \uf8f7 \uf8f7 \uf8f8 = \uf8eb \uf8ec \uf8ec \uf8ed sigm sigm sigm tanh \uf8f6 \uf8f7 \uf8f7 \uf8f8 W l I l \uf8eb \uf8ed h l\u22121 t h l t\u22121 h l\u22122 t \uf8f6 \uf8f8 (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "where I l \u2208 R 4n\u00d7n is the identity mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Skip connections to the internal states. Another kind of skip connections is to connect h l\u22122 t to the cell's internal state c l t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c l t = f l t c l t\u22121 + i l t s l t + h l\u22122 t (11) h l t = o l t tanh(c l t )",
"eq_num": "(12)"
}
],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Skip connections to the cell outputs. We can also connect h l\u22122 t to cell outputs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c l t = f l t c l t\u22121 + i l t s l t (13) h l t = o l t tanh(c l t ) + h l\u22122 t",
"eq_num": "(14)"
}
],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Skip connections using gates. Consider the case of skip connections to the cell outputs. The cell outputs grow linearly during the presentation of network depth, which makes the h l t 's derivative vanish and hard to convergence. Inspired by the introduction of LSTM gates, we add a gate to control the skip connections through retrieving or blocking them:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed i l t f l t o l t g l t s l t \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed sigm sigm sigm sigm tanh \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 W l h l\u22121 t h l t\u22121 c l t = f l t c l t\u22121 + i l t s l t h l t = o l t tanh(c l t ) + g l t h l\u22122 t",
"eq_num": "(15)"
}
],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "where g l t is the gate which can be used to access the skipped output h l\u22122 t or block it. When g l t equals 0, no skipped output can be passed through skip connections, which is equivalent to traditional stacked LSTMs. Otherwise, it behaves like a feed-forward LSTM using gated identity connections. Here we omit the case of adding gates to skip connections to the internal state, which is similar to the above case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
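For comparison, here is a minimal NumPy sketch (ours, not the authors' implementation) of the three skip-connection placements of Eqs. (10)-(14) and the gated variant of Eq. (15). The `variant` argument selects where h^{l-2}_t enters the block; the weight layout for the extra gate row in Eq. (15) is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_skip_step(W, h_below, h_prev, c_prev, h_skip, variant="gated_output"):
    """One stacked-LSTM step with h^{l-2}_t (h_skip) attached at different places.

    variant = "gates"        : Eq. (10), identity-mapped skip added to all pre-activations
              "internal"     : Eq. (11)-(12), skip added to the internal state c^l_t
              "output"       : Eq. (13)-(14), skip added to the cell output h^l_t
              "gated_output" : Eq. (15), skip added to h^l_t through an extra gate g^l_t
    W is (4n, 2n), except for "gated_output" where it is (5n, 2n).
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([h_below, h_prev])
    if variant == "gates":
        # I^l stacks four n x n identity blocks, i.e. h_skip is added to every row group
        z = z + np.concatenate([h_skip] * 4)
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    if variant == "gated_output":
        g = sigmoid(z[3*n:4*n])                  # skip gate g^l_t
        s = np.tanh(z[4*n:5*n])
    else:
        s = np.tanh(z[3*n:4*n])
    c = f * c_prev + i * s
    if variant == "internal":
        c = c + h_skip                           # Eq. (11)
    h = o * np.tanh(c)
    if variant == "output":
        h = h + h_skip                           # Eq. (14)
    elif variant == "gated_output":
        h = h + g * h_skip                       # Eq. (15)
    return h, c

# toy usage of the gated variant (Eq. (15)); W has 5n rows here
n = 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5 * n, 2 * n))
h, c = lstm_skip_step(W, rng.normal(size=n), np.zeros(n), np.zeros(n),
                      h_skip=rng.normal(size=n))
```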
{
"text": "Skip connections in bidirectional LSTM. Using skip connections in bidirectional LSTM is similar to the one used in LSTM, with a bidirectional processing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 c l t = \u2212 \u2192 f \u2212 \u2212 \u2192 c l t\u22121 + \u2212 \u2192 i \u2212 \u2192 s l t \u2212 \u2192 h l t = \u2212 \u2192 o tanh( \u2212 \u2192 c l t ) + \u2212 \u2192 g \u2212 \u2212 \u2192 h l\u22122 t \u2190 \u2212 c l t = \u2190 \u2212 f \u2190 \u2212 \u2212 c l t\u22121 + \u2190 \u2212 i \u2190 \u2212 s l t \u2190 \u2212 h l t = \u2190 \u2212 o tanh( \u2190 \u2212 c l t ) + \u2190 \u2212 g \u2190 \u2212 \u2212 h l\u22122 t",
"eq_num": "(16)"
}
],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
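A small sketch of how the bidirectional processing of Eqs. (5)-(7) and (16) can wrap any single-direction step function: run it left-to-right and right-to-left and concatenate the two hidden streams per time step. The helper name `bidirectional` and the dummy step in the usage example are ours.

```python
import numpy as np

def bidirectional(step, xs, n):
    """Run a single-layer step function forward and backward over a sequence
    and concatenate the two hidden streams per time step.

    step : function (x_t, h_prev, c_prev) -> (h_t, c_t), e.g. an LSTM step
    xs   : list of input vectors x_1 .. x_T
    n    : hidden size of one direction
    """
    def run(seq):
        h, c, outs = np.zeros(n), np.zeros(n), []
        for x in seq:
            h, c = step(x, h, c)
            outs.append(h)
        return outs
    fwd = run(xs)                      # left-to-right states
    bwd = run(xs[::-1])[::-1]          # right-to-left states, re-aligned to time order
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# toy usage with a dummy "step" so the sketch runs end-to-end
n = 3
step = lambda x, h, c: (np.tanh(x + h), c)
outputs = bidirectional(step, [np.ones(n) * t for t in range(5)], n)
```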
{
"text": "5 Neural Architecture for Sequential Tagging",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Sequential tagging can be formulated as P (t|w; \u03b8), where w = [w 1 , . . . , w T ] indicates the T words in a sentence, and t = [t 1 , . . . , t T ] indicates the corresponding T tags. In this section we introduce an neural architecture for P (\u2022), which includes an input layer, a stacked hidden layers and an output layer. Since the stacked hidden layers have already been introduced, we only introduce the input and the output layer here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various kinds of Skip Connections",
"sec_num": "4"
},
{
"text": "Network inputs are the representation of each token in a sequence. There are many kinds of token representations, such as using a single word embedding, using a local window approach, or a combination of word and character-level representation. Here our inputs contain the concatenation of word representations, character representations, and capitalization representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Inputs",
"sec_num": "5.1"
},
{
"text": "Word representations. All words in the vocabulary share a common look-up table, which is initialized with random initializations or pre-trained embeddings. Each word in a sentence can be mapped to an embedding vector w t . The whole sentence is then represented by a matrix with columns vector [w 1 , w 2 , . . . , w T ]. We use a context window of size d surrounding with a word w t to get its context information. Following Wu et al. (2016) , we add logistic gates to each token in the context window. The word representation is computed as w t = [r t\u2212 d/2 w t\u2212 d/2 ; . . . ; r t+ d/2 w t+ d/2 ], where r t := [r t\u2212 d/2 , . . . , r t+ d/2 ] \u2208 R d is a logistic gate to filter the unnecessary contexts, w t\u2212 d/2 , . . . , w t+ d/2 is the word embeddings in the local window.",
"cite_spans": [
{
"start": 426,
"end": 442,
"text": "Wu et al. (2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Network Inputs",
"sec_num": "5.1"
},
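A minimal sketch of the gated context-window word representation described above. The gate values r would be learned in the real model; here they are placeholders, and the boundary handling (clamping at sentence edges) is our assumption.

```python
import numpy as np

def windowed_word_repr(embeddings, t, d, gates):
    """Build w_t = [r_{t-d//2} * w_{t-d//2}; ...; r_{t+d//2} * w_{t+d//2}].

    embeddings : (T, dim) word embeddings of the sentence
    t          : center position
    d          : window size (odd), e.g. 3
    gates      : (d,) logistic gate values r in (0, 1), learned in the real model
    """
    T, dim = embeddings.shape
    half = d // 2
    pieces = []
    for k, offset in enumerate(range(-half, half + 1)):
        j = min(max(t + offset, 0), T - 1)   # clamp at sentence boundaries (our padding choice)
        pieces.append(gates[k] * embeddings[j])
    return np.concatenate(pieces)            # dimension d * dim (600 for d=3, dim=200)

# toy usage: 200-dimensional GloVe-like embeddings, window of 3
emb = np.random.randn(6, 200)
w_t = windowed_word_repr(emb, t=2, d=3, gates=np.array([0.3, 0.9, 0.5]))
assert w_t.shape == (600,)
```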
{
"text": "Character representations. Prefix and suffix information about words are important features in sequential tagging. Inspired by Fonseca et al. (2015) et al, which uses a character prefix and suffix with length from 1 to 5 for part-of-speech tagging, we concatenate character embeddings in a word to get the character-level representation. Concretely, given a word w consisting of a sequence of characters [c 1 , c 2 , . . . , c lw ], where l w is the length of the word and L(\u2022) is the look-up table for characters. We concatenate the leftmost most 5 character embeddings L(c 1 ), . . . , L(c 5 ) with its rightmost 5 character embeddings L(c lw\u22124 ), . . . , L(c lw ). When a word is less than five characters, we pad the remaining characters with the same special symbol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Inputs",
"sec_num": "5.1"
},
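A sketch of the prefix/suffix character representation described above: concatenate the embeddings of the leftmost five and the rightmost five characters, padding short words with a special symbol. The look-up table and the padding symbol name are illustrative assumptions.

```python
import numpy as np

def char_repr(word, lookup, pad="<PAD>"):
    """Concatenate embeddings of the leftmost 5 and rightmost 5 characters.

    lookup : dict mapping a character (or the pad symbol) to a fixed-size embedding.
    Words shorter than five characters are padded with the same special symbol.
    """
    chars = list(word)
    left = (chars + [pad] * 5)[:5]               # leftmost 5, padded if needed
    right = ([pad] * 5 + chars)[-5:]             # rightmost 5, padded if needed
    return np.concatenate([lookup.get(c, lookup[pad]) for c in left + right])

# toy usage with a random 5-dimensional character look-up table
rng = np.random.default_rng(1)
lookup = {c: rng.normal(size=5) for c in "abcdefghijklmnopqrstuvwxyz"}
lookup["<PAD>"] = np.zeros(5)
vec = char_repr("tag", lookup)                   # 10 characters x 5 dims = 50 dims
assert vec.shape == (50,)
```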
{
"text": "Capitalization representations. We lowercase the words to decrease the size of word vocabulary to reduce sparsity, but we need an extra capitalization embeddings to store the capitalization features, which represent whether or not a word is capitalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Inputs",
"sec_num": "5.1"
},
{
"text": "For sequential tagging, we use a softmax activation function g(\u2022) in the output layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Outputs",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t = g(W hy [ \u2212 \u2192 h t ; \u2190 \u2212 h t ])",
"eq_num": "(17)"
}
],
"section": "Network Outputs",
"sec_num": "5.2"
},
{
"text": "where y t is a probability distribution over all possible tags. y k (t) = exp(h k ) k exp(h k ) is the k-th dimension of y t , which corresponds to the k-th tags in the tag set. W hy is the hidden-to-output weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Outputs",
"sec_num": "5.2"
},
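A sketch of the output layer of Eq. (17): concatenate the forward and backward hidden states and apply a softmax over the tag set. The random W_hy in the usage example is for illustration only.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def output_layer(W_hy, h_fwd, h_bwd):
    """y_t = softmax(W_hy [h_fwd; h_bwd]) over the tag set, as in Eq. (17)."""
    return softmax(W_hy @ np.concatenate([h_fwd, h_bwd]))

# toy usage: 512 cells per direction, 1286 output tags (including the RARE symbol)
rng = np.random.default_rng(2)
W_hy = rng.normal(scale=0.01, size=(1286, 2 * 512))
y_t = output_layer(W_hy, rng.normal(size=512), rng.normal(size=512))
assert np.isclose(y_t.sum(), 1.0)
```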
{
"text": "Combinatory Category Grammar (CCG) supertagging is a sequential tagging problem in natural language processing. The task is to assign supertags to each word in a sentence. In CCG the supertags stand for the lexical categories, which are composed of the basic categories such as N , N P and P P , and complex categories, which are the combination of the basic categories based on a set of rules. Detailed explanations of CCG refers to (Steedman, 2000; Steedman and Baldridge, 2011) .",
"cite_spans": [
{
"start": 434,
"end": 450,
"text": "(Steedman, 2000;",
"ref_id": "BIBREF30"
},
{
"start": 451,
"end": 480,
"text": "Steedman and Baldridge, 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Category Grammar Supertagging",
"sec_num": "6.1"
},
{
"text": "The training set of this task only contains 39604 sentences, which is too small to train a deep model, and may cause over-parametrization. But we choose it since it has been already proved that a bidirectional recurrent net fits the task by many authors (Lewis et al., 2016; Vaswani et al., 2016) .",
"cite_spans": [
{
"start": 254,
"end": 274,
"text": "(Lewis et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 275,
"end": 296,
"text": "Vaswani et al., 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Category Grammar Supertagging",
"sec_num": "6.1"
},
{
"text": "Our experiments are performed on CCGBank (Hockenmaier and Steedman, 2007) , which is a translation from Penn Treebank (Marcus et al., 1993) to CCG with a coverage 99.4%. We follow the standard splits, using sections 02-21 for training, section 00 for development and section 23 for the test. We use a full category set containing 1285 tags. All digits are mapped into the same digit '9', and all words are lowercased.",
"cite_spans": [
{
"start": 41,
"end": 73,
"text": "(Hockenmaier and Steedman, 2007)",
"ref_id": "BIBREF11"
},
{
"start": 118,
"end": 139,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Pre-processing",
"sec_num": "6.1.1"
},
{
"text": "Initialization. There are two types of weights in our experiments: recurrent and non-recurrent weights. For non-recurrent weights, we initialize word embeddings with the pre-trained 200-dimensional GolVe vectors (Pennington et al., 2014) . Other weights are initialized with the Gaussian distribution",
"cite_spans": [
{
"start": 212,
"end": 237,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Network Configuration",
"sec_num": "6.1.2"
},
{
"text": "Test Clark and Curran (2007) 91.5 92.0 Lewis et al. (2014) 91.3 91.6 Lewis et al. (2016) 94.1 94.3 Xu et al. (2015) 93.1 93.0 Xu et al. (2016) 93.49 93.52 Vaswani et al. (2016) 94.24 94.5 7-layers + skip output + gating 94.51 94.67 7-layers + skip output + gating (no char) 94.33 94.45 7-layers + skip output + gating (no dropout) 94.06 94.0 9-layers + skip output + gating 94.55 94.69 Table 1 : 1-best supertagging accuracy on CCGbank. \"skip output\" refers to the skip connections to the cell output, \"gating\" refers to adding a gate to the identity function, \"no char\" refers to the models that do not use the character-level information, \"no dropout\" refers to models that do not use dropout.",
"cite_spans": [
{
"start": 5,
"end": 28,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF1"
},
{
"start": 39,
"end": 58,
"text": "Lewis et al. (2014)",
"ref_id": "BIBREF15"
},
{
"start": 69,
"end": 88,
"text": "Lewis et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 99,
"end": 115,
"text": "Xu et al. (2015)",
"ref_id": "BIBREF35"
},
{
"start": 126,
"end": 142,
"text": "Xu et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 155,
"end": 176,
"text": "Vaswani et al. (2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dev",
"sec_num": null
},
{
"text": "N (0, 1 \u221a fan-in )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dev",
"sec_num": null
},
{
"text": "scaled by a factor of 0.1, where fan-in is the number of units in the input layer. For recurrent weight matrices, following Saxe et al. 2013we initialize with random orthogonal matrices through SVD to avoid unstable gradients. Orthogonal initialization for recurrent weights is important in our experiments, which takes about 2% relative performance enhancement than other methods such as Xavier initialization (Glorot and Bengio, 2010) .",
"cite_spans": [
{
"start": 411,
"end": 436,
"text": "(Glorot and Bengio, 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dev",
"sec_num": null
},
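A sketch of the two initialization recipes described above: scaled Gaussian N(0, 1/sqrt(fan-in)) times 0.1 for non-recurrent weights, and a random orthogonal matrix obtained through SVD (Saxe et al., 2013) for recurrent weights. This is our reading of the described setup, not the authors' code.

```python
import numpy as np

def init_nonrecurrent(fan_in, fan_out, rng):
    """Gaussian N(0, 1/sqrt(fan_in)) scaled by 0.1, as described for non-recurrent weights."""
    return 0.1 * rng.normal(scale=1.0 / np.sqrt(fan_in), size=(fan_out, fan_in))

def init_recurrent_orthogonal(n, rng):
    """Random orthogonal matrix via SVD of a Gaussian matrix (Saxe et al., 2013)."""
    a = rng.normal(size=(n, n))
    u, _, vt = np.linalg.svd(a)
    return u @ vt                      # product of orthogonal factors is orthogonal

rng = np.random.default_rng(3)
W_in = init_nonrecurrent(fan_in=600, fan_out=512, rng=rng)
W_rec = init_recurrent_orthogonal(512, rng=rng)
assert np.allclose(W_rec.T @ W_rec, np.eye(512), atol=1e-6)
```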
{
"text": "Hyperparameters. For the word representations, we use a small window size of 3 for the convolutional layer. The dimension of the word representation after the convolutional operation is 600. The size of character embedding and capitalization embeddings are set to 5. The number of cells of the stacked bidirectional LSTM is set to 512. We also tried 400 cells or 600 cells and found this number did not impact performance so much. All stacked hidden layers have the same number of cells. The output layer has 1286 neurons, which equals to the number of tags in the training set with a RARE symbol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dev",
"sec_num": null
},
{
"text": "Training. We train the networks using the back-propagation algorithm, using stochastic gradient descent (SGD) algorithm with an equal learning rate 0.02 for all layers. We also tried other optimization methods, such as momentum (Plaut and others, 1986), Adadelta (Zeiler, 2012), or Adam (Kingma and Ba, 2014) , but none of them perform as well as SGD. Gradient clipping is not used. We use on-line learning in our experiments, which means the parameters will be updated on every training sequences, one at a time. We trained the 7-layer network for roughly 2 to 3 days on one NVIDIA TITAN X GPU using Theano 1 (Team et al., 2016) .",
"cite_spans": [
{
"start": 287,
"end": 308,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 610,
"end": 629,
"text": "(Team et al., 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dev",
"sec_num": null
},
{
"text": "Regularization. Dropout (Srivastava et al., 2014) is the only regularizer in our model to avoid overfitting. Other regularization methods such as weight decay and batch normalization do not work in our experiments. We add a binary dropout mask to the local context windows on the embedding layer, with a drop rate p of 0.25. We also apply dropout to the output of the first hidden layer and the last hidden layer, with a 0.5 drop rate. At test time, weights are scaled with a factor 1 \u2212 p. Table 1 shows the comparisons with other models for supertagging. The comparisons do not include any externally labeled data and POS labels. We use stacked bidirectional LSTMs with gated skip connections for the comparisons, and report the highest 1-best supertagging accuracy on the development set for final testing. Our model presents state-of-the-art results compared to the existing systems. The character-level information (+ 3% relative accuracy) and dropout (+ 8% relative accuracy) are necessary to improve the performance.",
"cite_spans": [
{
"start": 24,
"end": 49,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 490,
"end": 497,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dev",
"sec_num": null
},
{
"text": "We experiment with a 7-layer model on CCGbank to compare different kinds of skip connections introduced in Section 4. Our analysis mainly focuses on the identity function and the gating mechanism. The comparisons (Table 2) are summarized as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 222,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Skip Connections",
"sec_num": "6.1.4"
},
{
"text": "No skip connections. When the number of stacked layers is large, the performance will degrade without skip connections. The accuracy in a 7-layer stacked model without skip connections is 93.94% (Table 2) , which is lower than the one using skip connections.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 204,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Skip Connections",
"sec_num": "6.1.4"
},
{
"text": "Various kinds of skip connections. We experiment with the gated identity connections between internal states introduced in Zhang et al. 2016, but the network performs not good (Table 2, 93.14%). We also implement the method proposed in Zilly et al. 2016, which we use a single bidirectional RNH layer with a recurrent depth of 3 with a slightly modification 2 . Skip connections to the cell outputs with identity function and multiplicative gating achieves the highest accuracy (Table 2, 94.51%) on the development set. We also observe that skip to the internal states without gate get a slightly better performance (Table 2 , 94.33%) than the one with gate (94.24%) on the development set. Here we recommend to set the forget bias to 0 to get a better development accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 616,
"end": 625,
"text": "(Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Skip Connections",
"sec_num": "6.1.4"
},
{
"text": "Identity mapping. We use the sigmoid function to the previous outputs to break the identity link, in which we replace g t h l\u22121 t in Eq. (15) with g t \u03c3(h l\u22121 t ), where \u03c3(x) = 1 1+e \u2212x . The result of the sigmoid function is 94.02% (Table 2) , which is poor than the identity function. We can infer that the identity function is more suitable than other scaled functions such as sigmoid or tanh to transmit information.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 242,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Skip Connections",
"sec_num": "6.1.4"
},
{
"text": "Exclusive gating. Following the gating mechanism adopted in highway networks, we consider adding a gate g t to make a flexible control to the skip connections. Our gating function is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Skip Connections",
"sec_num": "6.1.4"
},
{
"text": "g l t = \u03c3(W l g h l t\u22121 + U l g h l\u22122 t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Skip Connections",
"sec_num": "6.1.4"
},
{
"text": "Gated identity connections are essential to achieving state-of-the-art result on CCGbank. Table 2 : Accuracy on CCGbank using 7-layer stacked bidirectional LSTMs, with different types of skip connections. b f is the bias of the forget gate. Table 3 compares the effect of the depth in the stacked models. We can observe that the performance is getting better with the increased number of layers. But when the number of layers exceeds 9, the performance will be hurt. In the experiments, we found that the number of stacked layers between 7 and 9 are the best choice using skip connections. Notice that we do not use layer-wise pretraining (Bengio et al., 2007; Simonyan and Zisserman, 2014) , which is an important technique in training deep networks.",
"cite_spans": [
{
"start": 639,
"end": 660,
"text": "(Bengio et al., 2007;",
"ref_id": "BIBREF0"
},
{
"start": 661,
"end": 690,
"text": "Simonyan and Zisserman, 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 2",
"ref_id": null
},
{
"start": 241,
"end": 248,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Skip Connections",
"sec_num": "6.1.4"
},
{
"text": "Further improvements might be obtained with this method to build a deeper network with improved performance. Table 3 : Accuracy on CCGbank using gated identity connections to cell outputs, with different number of stacked layers.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Number of Layers",
"sec_num": "6.1.5"
},
{
"text": "Part-of-speech tagging is another sequential tagging task, which is to assign POS tags to each word in a sentence. It is very similar to the supertagging task. Therefore, these two tasks can be solved in a unified architecture. For POS tagging, we use the same network configurations as supertagging, except the word vocabulary size and the tag set size. We conduct experiments on the Wall Street Journal of the Penn Treebank dataset, adopting the standard splits (sections 0-18 for the train, sections 19-21 for validation and sections 22-24 for testing).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-speech Tagging",
"sec_num": "6.2"
},
{
"text": "Model Test S\u00f8gaard 201197.5 Ling et al. (2015) 97.36 Wang et al. (2015) 97.78 Vaswani et al. (2016) 97.4 7-layers + skip output + gating 97.45 9-layers + skip output + gating 97.45 Table 4 : Accuracy for POS tagging on WSJ.",
"cite_spans": [
{
"start": 28,
"end": 46,
"text": "Ling et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 53,
"end": 71,
"text": "Wang et al. (2015)",
"ref_id": "BIBREF17"
},
{
"start": 78,
"end": 99,
"text": "Vaswani et al. (2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Part-of-speech Tagging",
"sec_num": "6.2"
},
{
"text": "Although the POS tagging result presented in Table 4 is slightly below the state-of-the-art, we neither do any parameter tunings nor change the network architectures, just use the one getting the best development accuracy on the supertagging task. This proves the generalization of the model and avoids heavy work of model re-designing.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Part-of-speech Tagging",
"sec_num": "6.2"
},
{
"text": "This paper investigates various kinds of skip connections in stacked bidirectional LSTM models. We present a deep stacked network (7 or 9 layers) that can be easily trained and get improved accuracy on CCG supertagging and POS tagging. Our experiments show that skip connections to the cell outputs with the gated identity function performs the best. Our explorations could easily be applied to other sequential processing problems, which can be modelled with RNN architectures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "http://deeplearning.net/software/theano/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our original implementation of Zilly et a. (2016) with a recurrent depth of 3 fails to converge. The reason might be due to the explosion of s t L under addition. To avoid this, we replace s t L with ot * tanh(s t L ) in the last recurrent step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research work has been funded by the Natural Science Foundation of China under Grant No. 61333018 and supported by the Strategic Priority Research Program of the CAS under Grant No. XDB02070007. We thank the anonymous reviewers for their useful comments that greatly improved the manuscript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Greedy layer-wise training of deep networks",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Lamblin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Popovici",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in neural information processing systems",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. 2007. Greedy layer-wise training of deep networks. Advances in neural information processing systems, 19:153.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Wide-coverage efficient statistical parsing with ccg and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "James R Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "493--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R Curran. 2007. Wide-coverage efficient statistical parsing with ccg and log-linear models. Computational Linguistics, 33(4):493-552.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hierarchical recurrent neural networks for long-term dependencies",
"authors": [
{
"first": "Salah",
"middle": [
"El"
],
"last": "Hihi",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 1995,
"venue": "NIPS",
"volume": "400",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salah El Hihi and Yoshua Bengio. 1995. Hierarchical recurrent neural networks for long-term dependencies. In NIPS, volume 400, page 409. Citeseer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating word embeddings and a revised corpus for part-of-speech tagging in portuguese",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Erick R Fonseca",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lu\u00eds",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"Maria"
],
"last": "Rosa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alu\u00edsio",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of the Brazilian Computer Society",
"volume": "21",
"issue": "1",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erick R Fonseca, Jo\u00e3o Lu\u00eds G Rosa, and Sandra Maria Alu\u00edsio. 2015. Evaluating word embeddings and a revised corpus for part-of-speech tagging in portuguese. Journal of the Brazilian Computer Society, 21(1):1-14.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning to forget: Continual prediction with lstm",
"authors": [
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Felix A Gers",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cummins",
"suffix": ""
}
],
"year": 2000,
"venue": "Neural computation",
"volume": "12",
"issue": "10",
"pages": "2451--2471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix A Gers, J\u00fcrgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with lstm. Neural computation, 12(10):2451-2471.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Aistats",
"volume": "9",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural net- works. In Aistats, volume 9, pages 249-256.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Framewise phoneme classification with bidirectional lstm and other neural network architectures",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Neural Networks",
"volume": "18",
"issue": "5",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602-610.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1308.0850"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.03385"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Training and analysing deep recurrent neural networks",
"authors": [
{
"first": "Michiel",
"middle": [],
"last": "Hermans",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schrauwen",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "190--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michiel Hermans and Benjamin Schrauwen. 2013. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pages 190-198.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Lstm can solve hard long time lag problems",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "473--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Lstm can solve hard long time lag problems. Advances in neural information processing systems, pages 473-479.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ccgbank: a corpus of CCG derivations and dependency structures extracted from the penn treebank",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "3",
"pages": "355--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier and Mark Steedman. 2007. Ccgbank: a corpus of CCG derivations and dependency structures extracted from the penn treebank. Computational Linguistics, 33(3):355-396.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Grid long short-term memory",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1507.01526"
]
},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2015. Grid long short-term memory. arXiv preprint arXiv:1507.01526.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01360"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improved CCG parsing with semi-supervised supertagging",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "327--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2014. Improved CCG parsing with semi-supervised supertagging. Transactions of the Association for Computational Linguistics, 2:327-338.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lstm ccg parsing",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. Lstm ccg parsing. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Finding function in form: Compositional character models for open vocabulary word representation",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Lu\u00eds",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Fernandez Astudillo",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.02096"
]
},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Tiago Lu\u00eds, Lu\u00eds Marujo, Ram\u00f3n Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313-330.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word represen- tation. In EMNLP, volume 14, pages 1532-43.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Experiments on learning by back propagation",
"authors": [
{
"first": "David",
"middle": [
"C"
],
"last": "Plaut",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David C Plaut et al. 1986. Experiments on learning by back propagation.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep learning made easier by linear transformations in perceptrons",
"authors": [
{
"first": "Tapani",
"middle": [],
"last": "Raiko",
"suffix": ""
},
{
"first": "Harri",
"middle": [],
"last": "Valpola",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2012,
"venue": "AISTATS",
"volume": "22",
"issue": "",
"pages": "924--932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapani Raiko, Harri Valpola, and Yann LeCun. 2012. Deep learning made easier by linear transformations in perceptrons. In AISTATS, volume 22, pages 924-932.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks",
"authors": [
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Saxe",
"suffix": ""
},
{
"first": "James",
"middle": [
"L"
],
"last": "McClelland",
"suffix": ""
},
{
"first": "Surya",
"middle": [],
"last": "Ganguli",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6120"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew M Saxe, James L McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning complex, extended sequences using the principle of history compression",
"authors": [
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1992,
"venue": "Neural Computation",
"volume": "4",
"issue": "2",
"pages": "234--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00fcrgen Schmidhuber. 1992. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Very deep convolutional networks for large-scale image recognition",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.1556"
]
},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recogni- tion. arXiv preprint arXiv:1409.1556.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semisupervised condensed nearest neighbor for part-of-speech tagging",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "48--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2011. Semisupervised condensed nearest neighbor for part-of-speech tagging. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 48-52. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Training very deep networks",
"authors": [
{
"first": "Rupesh",
"middle": [
"K"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2377--2385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rupesh K Srivastava, Klaus Greff, and J\u00fcrgen Schmidhuber. 2015a. Training very deep networks. In Advances in neural information processing systems, pages 2377-2385.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Combinatory categorial grammar. Non-Transformational Syntax: Formal and Explicit Models of Grammar",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman and Jason Baldridge. 2011. Combinatory categorial grammar. Non-Transformational Syntax: Formal and Explicit Models of Grammar. Wiley-Blackwell.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The syntactic process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "24",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The syntactic process, volume 24. MIT Press.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Theano: A python framework for fast computation of mathematical expressions",
"authors": [
{
"first": "Theano",
"middle": [],
"last": "Development Team",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Alrfou",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Alain",
"suffix": ""
},
{
"first": "Amjad",
"middle": [],
"last": "Almahairi",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Angermueller",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Ballas",
"suffix": ""
},
{
"first": "Frdric",
"middle": [],
"last": "Bastien",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Bayer",
"suffix": ""
},
{
"first": "Anatoly",
"middle": [],
"last": "Belikov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theano Development Team, Rami Alrfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frdric Bastien, Justin Bayer, and Anatoly Belikov. 2016. Theano: A python frame- work for fast computation of mathematical expressions.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Supertagging with lstms",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Musa",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with lstms. In Proceedings of the Human Language Technology Conference of the NAACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Part-of-speech tagging with bidirectional long short-term memory recurrent neural network",
"authors": [
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.06168"
]
},
"num": null,
"urls": [],
"raw_text": "Peilu Wang, Yao Qian, Frank K Soong, Lei He, and Hai Zhao. 2015. Part-of-speech tagging with bidirectional long short-term memory recurrent neural network. arXiv preprint arXiv:1510.06168.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A dynamic window neural network for ccg supertagging",
"authors": [
{
"first": "Huijia",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.02749"
]
},
"num": null,
"urls": [],
"raw_text": "Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2016. A dynamic window neural network for ccg supertagging. arXiv preprint arXiv:1610.02749.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "CCG supertagging with a recurrent neural network",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenduan Xu, Michael Auli, and Stephen Clark. 2015. CCG supertagging with a recurrent neural network. Volume 2: Short Papers, page 250.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Expected f-measure training for shift-reduce parsing with recurrent neural networks",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "210--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenduan Xu, Michael Auli, and Stephen Clark. 2016. Expected f-measure training for shift-reduce parsing with recurrent neural networks. In Proceedings of NAACL-HLT, pages 210-220.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Depth-gated lstm",
"authors": [
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Katerina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "Presented at Jelinek Summer Workshop on August",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. 2015. Depth-gated lstm. In Presented at Jelinek Summer Workshop on August, volume 14, page 1.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Adadelta: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Highway long short-term memory rnns for distant speech recognition",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guoguo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yaco",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5755--5759",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yaco, Sanjeev Khudanpur, and James Glass. 2016. Highway long short-term memory rnns for distant speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5755-5759. IEEE.",
"links": null
}
},
"ref_entries": {}
}
}