|
{ |
|
"paper_id": "N16-1036", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:36:43.036700Z" |
|
}, |
|
"title": "Recurrent Memory Networks for Language Modeling", |
|
"authors": [ |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Tran", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Arianna", |
|
"middle": [], |
|
"last": "Bisazza", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recurrent Neural Networks (RNNs) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) network on three large German, Italian, and English dataset. Additionally we perform indepth analysis of various linguistic dimensions that RMN captures. On Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin. 1 pression (Filippova et al., 2015), and machine translation (Sutskever et al., 2014).", |
|
"pdf_parse": { |
|
"paper_id": "N16-1036", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recurrent Neural Networks (RNNs) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) network on three large German, Italian, and English dataset. Additionally we perform indepth analysis of various linguistic dimensions that RMN captures. On Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin. 1 pression (Filippova et al., 2015), and machine translation (Sutskever et al., 2014).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Recurrent Neural Networks (RNNs) (Elman, 1990; Mikolov et al., 2010) are remarkably powerful models for sequential data. Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) , a specific architecture of RNN, has a track record of success in many natural language processing tasks such as language modeling (J\u00f3zefowicz et al., 2015) , dependency parsing (Dyer et al., 2015) , sentence com-Within the context of natural language processing, a common assumption is that LSTMs are able to capture certain linguistic phenomena. Evidence supporting this assumption mainly comes from evaluating LSTMs in downstream applications: Bowman et al. (2015) carefully design two artificial datasets where sentences have explicit recursive structures. They show empirically that while processing the input linearly, LSTMs can implicitly exploit recursive structures of languages. Filippova et al. (2015) find that using explicit syntactic features within LSTMs in their sentence compression model hurts the performance of overall system. They then hypothesize that a basic LSTM is powerful enough to capture syntactic aspects which are useful for compression.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 46, |
|
"text": "(Elman, 1990;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 47, |
|
"end": 68, |
|
"text": "Mikolov et al., 2010)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 185, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 343, |
|
"text": "(J\u00f3zefowicz et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 384, |
|
"text": "(Dyer et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 876, |
|
"end": 899, |
|
"text": "Filippova et al. (2015)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To understand and explain which linguistic dimensions are captured by an LSTM is non-trivial. This is due to the fact that the sequences of input histories are compressed into several dense vectors by the LSTM's components whose purposes with respect to representing linguistic information is not evident. To our knowledge, the only attempt to better understand the reasons of an LSTM's performance and limitations is the work of Karpathy et al. (2015) by means of visualization experiments and cell activation statistics in the context of character-level language modeling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 430, |
|
"end": 452, |
|
"text": "Karpathy et al. (2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our work is motivated by the difficulty in understanding and interpreting existing RNN architectures from a linguistic point of view. We propose Recurrent Memory Network (RMN), a novel RNN architecture that combines the strengths of both LSTM and Memory Network (Sukhbaatar et al., 2015) . In RMN, the Memory Block component-a variant of Memory Network-accesses the most recent input words and selectively attends to words that are relevant for predicting the next word given the current LSTM state. By looking at the attention distribution over history words, our RMN allows us not only to interpret the results but also to discover underlying dependencies present in the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 287, |
|
"text": "(Sukhbaatar et al., 2015)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we make the following contributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. We propose a novel RNN architecture that complements LSTM in language modeling. We demonstrate that our RMN outperforms competitive LSTM baselines in terms of perplexity on three large German, Italian, and English datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2. We perform an analysis along various linguistic dimensions that our model captures. This is possible only because the Memory Block allows us to look into its internal states and its explicit use of additional inputs at each time step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "3. We show that, with a simple modification, our RMN can be successfully applied to NLP tasks other than language modeling. On the Sentence Completion Challenge (Zweig and Burges, 2012) , our model achieves an impressive 69.2% accuracy, surpassing the previous state of the art 58.9% by a large margin.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 185, |
|
"text": "(Zweig and Burges, 2012)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recurrent Neural Networks (RNNs) have shown impressive performances on many sequential modeling tasks due to their ability to encode unbounded input histories. However, training simple RNNs is difficult because of the vanishing and exploding gradient problems (Bengio et al., 1994; Pascanu et al., 2013) . A simple and effective solution for exploding gradients is gradient clipping proposed by Pascanu et al. (2013) . To address the more challenging problem of vanishing gradients, several variants of RNNs have been proposed. Among them, Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit are widely regarded as the most successful variants. In this work, we focus on LSTMs because they have been shown to outperform GRUs on language modeling tasks (J\u00f3zefowicz et al., 2015) . In the following, we will detail the LSTM architecture used in this work. Long Short-Term Memory Notation: Throughout this paper, we denote matrices, vectors, and scalars using bold uppercase (e. g., W), bold lowercase (e. g., b) and lowercase (e. g., \u03b1) letters, respectively. The LSTM used in this work is specified as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 281, |
|
"text": "(Bengio et al., 1994;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 303, |
|
"text": "Pascanu et al., 2013)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 416, |
|
"text": "Pascanu et al. (2013)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 597, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 783, |
|
"end": 808, |
|
"text": "(J\u00f3zefowicz et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Neural Networks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "i t = sigm(W xi x t + W hi h t\u22121 + b i ) j t = sigm(W xj x t + W hj h t\u22121 + b j ) f t = sigm(W xf x t + W hf h t\u22121 + b f ) o t = tanh(W xo x t + W ho h t\u22121 + b o ) c t = c t\u22121 f t + i t j t h t = tanh(c t ) o t", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Neural Networks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where x t is the input vector at time step t, h t\u22121 is the LSTM hidden state at the previous time step, W * and b * are weights and biases. The symbol denotes the Hadamard product or element-wise multiplication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Neural Networks", |
|
"sec_num": "2" |
|
}, |
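The following is a minimal NumPy sketch of a single LSTM step as specified above. It is an illustration only, not the authors' implementation: the packing of the four gate pre-activations into one weight matrix, the dimensions, and the helper names (sigm, lstm_step) are assumptions.

```python
import numpy as np

def sigm(x):
    # logistic sigmoid used for the gates
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the equations above.
    W packs the pre-activations of i, j, f, o; it maps [x_t; h_prev] to a 4d vector."""
    d = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b   # all four pre-activations at once
    i_t = sigm(z[0:d])            # input gate
    j_t = np.tanh(z[d:2*d])       # candidate cell content
    f_t = sigm(z[2*d:3*d])        # forget gate
    o_t = sigm(z[3*d:4*d])        # output gate
    c_t = c_prev * f_t + i_t * j_t   # Hadamard products
    h_t = np.tanh(c_t) * o_t
    return h_t, c_t

# toy usage
d_in, d = 8, 16
rng = np.random.default_rng(0)
W, b = rng.normal(scale=0.05, size=(4 * d, d_in + d)), np.zeros(4 * d)
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d), np.zeros(d), W, b)
```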
|
{ |
|
"text": "Despite the popularity of LSTM in sequential modeling, its design is not straightforward to justify and understanding why it works remains a challenge (Hermans and Schrauwen, 2013; Chung et al., 2014; Greff et al., 2015; J\u00f3zefowicz et al., 2015; Karpathy et al., 2015) . There have been few recent attempts to understand the components of an LSTM from an empirical point of view: Greff et al. (2015) carry out a large-scale experiment of eight LSTM variants. The results from their 5,400 experimental runs suggest that forget gates and output gates are the most critical components of LSTMs. J\u00f3zefowicz et al. (2015) conduct and evaluate over ten thousand RNN architectures and find that the initialization of the forget gate bias is crucial to the LSTM's performance. While these findings are important to help choosing appropriate LSTM architectures, they do not shed light on what information is captured by the hidden states of an LSTM. Bowman et al. (2015) show that a vanilla LSTM, such as described above, performs reasonably well compared to a recursive neural network (Socher et al., 2011) that explicitly exploits tree structures on two artificial datasets. They find that LSTMs can effectively exploit recursive structure in the artificial datasets. In contrast to these simple datasets containing a few logical operations in their experiments, natural languages exhibit highly complex patterns. The extent to which linguistic assumptions about syntactic structures and compositional semantics are reflected in LSTMs is rather poorly understood. Thus it is desirable to have a more principled mechanism allowing us to inspect recurrent architectures from a linguistic perspective. In the following section, we propose such a mechanism.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 180, |
|
"text": "(Hermans and Schrauwen, 2013;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 200, |
|
"text": "Chung et al., 2014;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 220, |
|
"text": "Greff et al., 2015;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 245, |
|
"text": "J\u00f3zefowicz et al., 2015;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 268, |
|
"text": "Karpathy et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 399, |
|
"text": "Greff et al. (2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 935, |
|
"end": 961, |
|
"text": "LSTM. Bowman et al. (2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1077, |
|
"end": 1098, |
|
"text": "(Socher et al., 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Neural Networks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "It has been demonstrated that RNNs can retain input information over a long period. However, existing RNN architectures make it difficult to analyze what information is exactly retained at their hidden states at each time step, especially when the data has complex underlying structures, which is common in natural language. Motivated by this difficulty, we propose a novel RNN architecture called Recurrent Memory Network (RMN). On linguistic data, the RMN allows us not only to qualify which linguistic information is preserved over time and why this is the case but also to discover dependencies within the data (Section 5). Our RMN consists of two components: an LSTM and a Memory Block (MB) (Section 3.1). The MB takes the hidden state of the LSTM and compares it to the most recent inputs using an attention mechanism (Gregor et al., 2015; Graves et al., 2014) . Thus, analyzing the attention weights of a trained model can give us valuable insight into the information that is retained over time in the LSTM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 824, |
|
"end": 845, |
|
"text": "(Gregor et al., 2015;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 866, |
|
"text": "Graves et al., 2014)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Memory Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the following, we describe in detail the MB architecture and the combination of the MB and the LSTM to form an RMN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Memory Network", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The Memory Block (Figure 1 ) is a variant of Memory Network (Sukhbaatar et al., 2015) with one hop (or a single-layer Memory Network). At time step t, the MB receives two inputs: the hidden state h t of the LSTM and a set {x i } of n most recent words including the current word x t . We refer to n as the memory size. Internally, the MB consists of Figure 1 : A graphical representation of the MB. two lookup tables M and C of size", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 85, |
|
"text": "(Sukhbaatar et al., 2015)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 26, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 358, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "softmax {x i } h m h P m i c i \u21e5 g", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "|V | \u00d7 d,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where |V | is the size of the vocabulary. With a slight abuse of notation we denote M i = M({x i }) and C i = C({x i }) as n \u00d7 d matrices where each row corresponds to an input memory embedding m i and an output memory embedding c i of each element of the set {x i }. We use the matrix M i to compute an attention distribution over the set {x i }:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p t = softmax(M i h t )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "When dealing with data that exhibits a strong temporal relationship, such as natural language, an additional temporal matrix T \u2208 R n\u00d7d can be used to bias attention with respect to the position of the data points. In this case, equation 1 becomes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p t = softmax (M i + T)h t", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We then use the attention distribution p t to compute a context vector representation of {x i }:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "s t = C T i p t (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
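A minimal NumPy sketch of the attention and context-vector computation in Equations 1-3 may help make the shapes concrete. This is a sketch under assumptions (dense matrices, a local softmax helper, the name memory_block_attention), not the released implementation.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def memory_block_attention(M_i, C_i, h_t, T=None):
    """M_i, C_i: (n, d) input/output memory embeddings of the n most recent words.
    h_t: (d,) LSTM hidden state.  T: optional (n, d) temporal matrix."""
    keys = M_i if T is None else M_i + T     # Eq. (1) without T, Eq. (2) with T
    p_t = softmax(keys @ h_t)                # attention over the n history words
    s_t = C_i.T @ p_t                        # Eq. (3): weighted sum of output embeddings
    return p_t, s_t

# toy usage: memory size n = 15, dimension d = 128
n, d = 15, 128
rng = np.random.default_rng(1)
p_t, s_t = memory_block_attention(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                                  rng.normal(size=d), T=rng.normal(size=(n, d)))
```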
|
{ |
|
"text": "Finally, we combine the context vector s t and the hidden state h t by a function g(\u2022) to obtain the output h m t of the MB. Instead of using a simple addition function g(s t , h t ) = s t + h t as in Sukhbaatar et al. 2015, we propose to use a gating unit that decides how much it should trust the hidden state h t and context s t at time step t. Our gating unit is a form of Gated Recurrent Unit Chung et al., 2014) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 398, |
|
"end": 417, |
|
"text": "Chung et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z t = sigm(W sz s t + U hz h t ) (4) r t = sigm(W sr s t + U hr h t )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h t = tanh(Ws t + U(r t h t ))", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h m t = (1 \u2212 z t ) h t + z t h t (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where z t is an update gate, r t is a reset gate. The choice of the composition function g(\u2022) is crucial for the MB especially when one of its input comes from the LSTM. The simple addition function might overwrite the information within the LSTM's hidden state and therefore prevent the MB from keeping track of information in the distant past. The gating function, on the other hand, can control the degree of information that flows from the LSTM to the MB's output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Memory Block", |
|
"sec_num": "3.1" |
|
}, |
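A minimal NumPy sketch of the gating composition g(s_t, h_t) in Equations 4-7, again as an illustration under assumed shapes (all vectors of dimension d, all weight matrices d x d) rather than the authors' code:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_composition(s_t, h_t, W_sz, U_hz, W_sr, U_hr, W, U):
    """GRU-style combination of the MB context s_t with the LSTM state h_t."""
    z_t = sigm(W_sz @ s_t + U_hz @ h_t)            # Eq. (4): update gate
    r_t = sigm(W_sr @ s_t + U_hr @ h_t)            # Eq. (5): reset gate
    h_tilde = np.tanh(W @ s_t + U @ (r_t * h_t))   # Eq. (6): candidate output
    h_m_t = (1.0 - z_t) * h_t + z_t * h_tilde      # Eq. (7): MB output
    return h_m_t

# toy usage with d = 128
d = 128
rng = np.random.default_rng(2)
weights = [rng.normal(scale=0.05, size=(d, d)) for _ in range(6)]
h_m = gated_composition(rng.normal(size=d), rng.normal(size=d), *weights)
```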
|
{ |
|
"text": "As explained above, our proposed MB receives the hidden state of the LSTM as one of its input. This leads to an intuitive combination of the two units by stacking the MB on top of the LSTM. We call this architecture Recurrent-Memory (RM). The RM architecture, however, does not allow interaction between Memory Blocks at different time steps. To enable this interaction we can stack one more LSTM layer on top of the RM. We call this architecture Recurrent-Memory-Recurrent (RMR). ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RMN Architectures", |
|
"sec_num": "3.2" |
|
}, |
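The stacking order of the two architectures can be summarized with a short sketch. It assumes the lstm_step, memory_block_attention, and gated_composition helpers from the sketches above are in scope, and the parameter container is hypothetical; it only illustrates how the components are wired, not the actual training code.

```python
def rm_step(x_t, M_recent, C_recent, state, params):
    """RM: an LSTM followed by a Memory Block; h_m feeds the output softmax."""
    h, c = lstm_step(x_t, state["h"], state["c"], params["W_lstm"], params["b_lstm"])
    _, s = memory_block_attention(M_recent, C_recent, h, T=params.get("T"))
    h_m = gated_composition(s, h, *params["gate"])
    return h_m, {"h": h, "c": c}

def rmr_step(x_t, M_recent, C_recent, state, params):
    """RMR: a second LSTM on top lets MB outputs interact across time steps."""
    h_m, lower = rm_step(x_t, M_recent, C_recent, state["lower"], params)
    h2, c2 = lstm_step(h_m, state["upper"]["h"], state["upper"]["c"],
                       params["W_lstm2"], params["b_lstm2"])
    return h2, {"lower": lower, "upper": {"h": h2, "c": c2}}
```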
|
{ |
|
"text": "Language models play a crucial role in many NLP applications such as machine translation and speech recognition. Language modeling also serves as a standard test bed for newly proposed models (Sukhbaatar et al., 2015; Kalchbrenner et al., 2015) . We conjecture that, by explicitly accessing history words, RMNs will offer better predictive power than the existing recurrent architectures. We therefore evaluate our RMN architectures against state-of-theart LSTMs in terms of perplexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 217, |
|
"text": "(Sukhbaatar et al., 2015;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 244, |
|
"text": "Kalchbrenner et al., 2015)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Model Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We evaluate our models on three languages: English, German, and Italian. We are especially interested in German and Italian because of their larger vocabularies and complex agreement patterns. Ta (Bojar et al., 2015) . For German, we use the first 6M tokens from the News Commentary data and 16M tokens from News Crawl 2014 for training. For development and test data we use the remaining part of the News Commentary data concatenated with the WMT 2009-2014 test sets. Finally, for Italian, we use a selection of 29M tokens from the PAIS\u00c0 corpus (Lyding et al., 2014) , mainly including Wikipedia pages and, to a minor extent, Wikibooks and Wikinews documents. For development and test we randomly draw documents from the same corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 216, |
|
"text": "(Bojar et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 567, |
|
"text": "(Lyding et al., 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 195, |
|
"text": "Ta", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our baselines are a 5-gram language model with Kneser-Ney smoothing, a Memory Network (MemN) (Sukhbaatar et al., 2015) , a vanilla singlelayer LSTM, and two stacked LSTMs with two and three layers respectively. N-gram models have been used intensively in many applications for their excellent performance and fast training. Chen et al. (2015) show that n-gram model outperforms a popular feed-forward language model (Bengio et al., 2003 ) on a one billion word benchmark (Chelba et al., 2013) . While taking longer time to train, RNNs have been proven superior to n-gram models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 118, |
|
"text": "(Sukhbaatar et al., 2015)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 342, |
|
"text": "Chen et al. (2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 436, |
|
"text": "(Bengio et al., 2003", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 492, |
|
"text": "(Chelba et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We compare these baselines with our two model architectures: RMR and RM. For each of our models, we consider two settings: with or without temporal matrix (+tM or -tM), and linear vs. gating composition function. In total, we experiment with eight RMN variants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "For all neural network models, we set the dimension of word embeddings, the LSTM hidden states, its gates, the memory input, and output embeddings to 128. The memory size is set to 15. The bias of the LSTM's forget gate is initialized to 1 (J\u00f3zefowicz et al., 2015) while all other parameters are initialized uniformly in (\u22120.05, 0.05). The initial learning rate is set to 1 and is halved at each epoch after the forth epoch. All models are trained for 15 epochs with standard stochastic gradient descent (SGD). During training, we rescale the gradients whenever their norm is greater than 5 (Pascanu et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 265, |
|
"text": "(J\u00f3zefowicz et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 614, |
|
"text": "(Pascanu et al., 2013)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Sentences with the same length are grouped into buckets. Then, mini-batches of 20 sentences are drawn from each bucket. We do not use truncated back-propagation through time, instead gradients are fully back-propagated from the end of each sentence to its beginning. When feeding in a new minibatch, the hidden states of LSTMs are reset to zeros, which ensures that the data is properly modeled at the sentence level. For our RMN models, instead of using padding, at time step t < n, we use a slice T[1 : t] \u2208 R t\u00d7d of the temporal matrix T \u2208 R n\u00d7d .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Perplexities on the test data are given in Table 2 . All RMN variants largely outperform n-gram and MemN models, and most RMN variants also outperform the competitive LSTM baselines. The best results overall are obtained by RM with temporal matrix and gating composition (+tM-g).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 50, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Our results agree with the hypothesis of mitigating prediction error by explicitly using the last n words in RNNs (Karpathy et al., 2015) . We further observe that using a temporal matrix always benefits the RM architectures. This can be explained by seeing the RM as a principled way to combine an LSTM and a neural n-gram model. By contrast, RMR works better without temporal matrix but its overall performance is not as good as RM. This suggests that we need a better mechanism to address the interaction between MBs, which we leave to future work. Finally, the proposed gating composition function outperforms the linear one in most cases. For historical reasons, we also run a stacked threelayer LSTM and a RM(+tM-g) on the much smaller Penn Treebank dataset (Marcus et al., 1993 ) with the same setting described above. The respective perplexities are 126.1 and 123.5.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 137, |
|
"text": "(Karpathy et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 784, |
|
"text": "(Marcus et al., 1993", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The goal of our RMN design is twofold: (i) to obtain better predictive power and (ii) to facilitate understanding of the model and discover patterns in data. In Section 4, we have validated the predictive power of the RMN and below we investigate the source of this performance based on linguistic assumptions of word co-occurrences and dependency structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attention Analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As a first step towards understanding RMN, we look at the average attention weights of each history word position in the MB of our two best model variants (Figure 3) . One can see that the attention mass tends to concentrate at the rightmost position (the current en it de -15 -14 -13 -12 -11 -10 -9 -8 -7 -6 -5 -4 -3 word) and decreases when moving further to the left (less recent words). This is not surprising since the success of n-gram language models has demonstrated that the most recent words provide important information for predicting the next word. Between the two variants, the RM average attention mass is less concentrated to the right. This can be explained by the absence of an LSTM layer on top, meaning that the MB in the RM architecture has to pay more attention to the more distant words in the past. The remaining analyses described below are performed on the RM(+tM-g) architecture as this yields the best perplexity results overall. Beyond average attention weights, we are interested in those cases where attention focuses on distant positions. To this end, we randomly sample 100 words from test data and visualize attention distributions over the last 15 words. Figure 4 shows the attention distributions for random samples of German and Italian. Again, in many cases attention weights concentrate around the last word (bottom row). However, we observe that many long distance words also receive noticeable attention mass. Interestingly, for many predicted words, attention is distributed evenly over memory positions, possibly in- dicating cases where the LSTM state already contains enough information to predict the next word.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 165, |
|
"text": "(Figure 3)", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1190, |
|
"end": 1198, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To explain the long-distance dependencies, we first hypothesize that our RMN mostly memorizes frequent co-occurrences. We run the RM(+tM-g) model on the German development and test sentences, and select those pairs of (most-attendedword, word-to-predict) where the MB's attention concentrates on a word more than six positions to the left. Then, for each set of pairs with equal distance, we compute the mean frequency of corresponding co-occurrences seen in the training data ( Table 3 ). The lack of correlation between frequency and memory location suggests that RMN does more than simply memorizing frequent co-occurrences. Table 3 : Mean frequency (\u00b5) of (most-attendedword, word-to-predict) pairs grouped by relative distance (d).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 479, |
|
"end": 486, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 635, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Previous work (Hermans and Schrauwen, 2013; Karpathy et al., 2015) studied this property of LSTMs by analyzing simple cases of closing brackets. By contrast RMN allows us to discover more interesting dependencies in the data. We manually inspect those high-frequency pairs to see whether they display certain linguistic phenomena. We observe that RMN captures, for example, separable verbs and fixed expressions in German. Separable verbs are frequent in German: they typically consist of preposition+verb constructions, such ab+h\u00e4ngen ('to depend') or aus+schlie\u00dfen ('to exclude'), and can be spelled together (abh\u00e4ngen) or apart as in 'h\u00e4ngen von der Situation ab' ('depend on the situation'), depending on the grammatical construction. Figure 5a shows a long-dependency example for the separable verb abh\u00e4ngen (to depend). When predicting the verb's particle ab, the model correctly attends to the verb's core h\u00e4ngt occurring seven words to the left. Figure 5b and 5c show fixed expression examples from German and Italian, respectively: schl\u00fcsselrolle ... spielen (play a key role) and insignito ... titolo (awarded title). Here too, the model correctly attends to the key word despite its long distance from the word to predict.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 43, |
|
"text": "(Hermans and Schrauwen, 2013;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 44, |
|
"end": 66, |
|
"text": "Karpathy et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 739, |
|
"end": 748, |
|
"text": "Figure 5a", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 963, |
|
"text": "Figure 5b", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "ab (-1.8) und (-2.1) , (-2.5) . (-2.7) von (-2.8) (a) wie wirksam die daraus resultierende strategie sein wird , h\u00e4ngt daher von der genauigkeit dieser annahmen Gloss: how effective the from-that resulting strategy be will, depends therefore on the accuracy of-these measures Translation: how effective the resulting strategy will be, therefore, depends on the accuracy of these measures spielen (-1.9) gewinnen (-3.0) finden (-3.4) haben (-3.4) schaffen (-3.4) \u2026 die lage versetzen werden , eine schl\u00fcsselrolle bei der eind\u00e4mmung der regionalen ambitionen chinas zu Gloss: \u2026 the position place will, a key-role in the curbing of-the regional ambitions China's to", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 9, |
|
"text": "(-1.8)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 50, |
|
"end": 53, |
|
"text": "(a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 402, |
|
"text": "(-1.9)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 461, |
|
"text": "(-3.4)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Translation: \u2026which will put him in a position to play a key role in curbing the regional ambitions of China (b) sacro (-1.5)", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 112, |
|
"text": "(b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "titolo (-2.9) re (-3.0) <unk> (-3.1) leone (-3.6) ... che fu insignito nel 1692 dall' Imperatore Leopoldo I del Other interesting examples found by the RMN in the test data include:", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 13, |
|
"text": "(-2.9)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 43, |
|
"end": 49, |
|
"text": "(-3.6)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "German: findet statt (takes place), kehrte zur\u00fcck (came back), fragen antworten (questions answers), k\u00e4mpfen gegen (fight against), bleibt erhalten (remains intact), verantwortung ubernimmt (takes responsibility);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Italian: sinistra destra (left right), latitudine longitudine (latitude longitude), collegata tramite (connected through), spos\u00f2 figli (got-married children), insignito titolo (awarded title).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Positional and lexical analysis", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "It has been conjectured that RNNs, and LSTMs in particular, model text so well because they capture syntactic structure implicitly. Unfortunately this has been hard to prove, but with our RMN model we can get closer to answering this important question. We produce dependency parses for our test sets using (Sennrich et al., 2013) for German and (Attardi et al., 2009) for Italian. Next we look at how much attention mass is concentrated by the RM(+tM-g) model on different dependency types. Figure 6 shows, for each language, a selection of ten dependency types that are often long-distance. 2 Dependency direction is marked by an arrow: e.g. \u2192mod means that the word to predict is a modifier of the attended word, while mod\u2190 means that the attended word is a modifier of the word to predict. 3 White cells denote combinations of position and dependency type that were not present in the test data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 307, |
|
"end": 330, |
|
"text": "(Sennrich et al., 2013)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 368, |
|
"text": "(Attardi et al., 2009)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 794, |
|
"end": 795, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 492, |
|
"end": 500, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "While in most of the cases closest positions are attended the most, we can see that some dependency types also receive noticeably more attention than the average (ALL) on the long-distance positions. In German, this is mostly visible for the head of separable verb particles (\u2192avz), which nicely supports our observations in the lexical analysis (Section 5.1). Other attended dependencies include: auxiliary verbs (\u2192aux) when predicting the second element of a complex tense (hat . . . gesagt / has said); subordinating conjunctions (konj\u2190) when predicting the clause-final inflected verb (dass sie sagen sollten / that they should say); control verbs (\u2192obji) when predicting the infinitive verb (versucht ihr zu helfen / tries to help her). Out of the Italian dependency types selected for their frequent longdistance occurrences (bottom of Figure 6 ), the most attended are argument heads (\u2192arg), complement heads (\u2192comp), object heads (\u2192obj) and subjects (subj\u2190). This suggests that RMN is mainly capturing predicate argument structure in Italian. Notice that syntactic annotation is never used to train the model, but only to analyze its predictions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 842, |
|
"end": 850, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We can also use RMN to discover which complex dependency paths are important for word prediction. To mention just a few examples, high attention on [-15, -12] [-11, -8] [ the German path [subj\u2190,\u2192kon,\u2192cj] indicates that the model captures morphological agreement between coordinate clauses in non-trivial constructions of the kind: spielen die Kinder im Garten und singen / the children play in the garden and sing. In Italian, high attention on the path [\u2192obj,\u2192comp,\u2192prep] denotes cases where the semantic relatedness between a verb and its object does not stop at the object's head, but percolates down to a prepositional phrase attached to it (pass\u00f2 buona parte della sua vita / spent a large part of his life). Interestingly, both local n-gram context and immediate dependency context would have missed these relations. While much remains to be explored, our analysis shows that RMN discovers patterns far more complex than pairs of opening and closing brackets, and suggests that the network's hidden state captures to a large extent the underlying structure of text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 158, |
|
"text": "[-15, -12]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 168, |
|
"text": "[-11, -8]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 203, |
|
"text": "[subj\u2190,\u2192kon,\u2192cj]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 472, |
|
"text": "[\u2192obj,\u2192comp,\u2192prep]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The Microsoft Research Sentence Completion Challenge (Zweig and Burges, 2012) has recently be-come a test bed for advancing statistical language modeling. We choose this task to demonstrate the effectiveness of our RMN in capturing sentence coherence. The test set consists of 1,040 sentences selected from five Sherlock Holmes novels by Conan Doyle. For each sentence, a content word is removed and the task is to identify the correct missing word among five given candidates. The task is carefully designed to be non-solvable for local language models such as n-gram models. The best reported result is 58.9% accuracy 4 which is far below human accuracy of 91% (Zweig and Burges, 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 77, |
|
"text": "(Zweig and Burges, 2012)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 663, |
|
"end": 687, |
|
"text": "(Zweig and Burges, 2012)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As baseline we use a stacked three-layer LSTM. Our models are two variants of RM(+tM-g), each consisting of three LSTM layers followed by a MB. The first variant (unidirectional-RM) uses n words preceding the word to predict, the second (bidirectional-RM) uses the n words preceding and the n words following the word to predict, as MB input. We include bidirectional-RM in the experiments to show the flexibility of utilizing future context in RMN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We train all models on the standard training data of the challenge, which consists of 522 novels from Project Gutenberg, preprocessed similarly to (Mnih and Kavukcuoglu, 2013) . After sentence splitting, tokenization and lowercasing, we randomly select 19,000 sentences for validation. Training and validation sets include 47M and 190K tokens respectively. The vocabulary size is about 64,000.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 175, |
|
"text": "(Mnih and Kavukcuoglu, 2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We initialize and train all the networks as described in Section 4.2. Moreover, for regularization, we place dropout (Srivastava et al., 2014) after each LSTM layer as suggested in (Pham et al., 2014) . The dropout rate is set to 0.3 in all the experiments. Table 4 summarizes the results. It is worth to mention that our LSTM baseline outperforms a dependency RNN making explicit use of syntactic information (Mirowski and Vlachos, 2015) and performs on par with the best published result . Our unidirectional-RM sets a new state of the art for the Sentence Completion Challenge with 69.2% accuracy. Under the same setting of d we observe that using bidirectional context does not The stage lost a fine , even as science lost an acute reasoner , when he became a specialist in crime a) linguist b) hunter c) actor \u2663 d) estate e) horseman \u2666 What passion of hatred can it be which leads a man to in such a place at such a time a) lurk \u2663 b) dine \u2666 c) luxuriate d) grow e) wiggle My heart is already since i have confided my trouble to you a) falling", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 142, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 200, |
|
"text": "(Pham et al., 2014)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 438, |
|
"text": "(Mirowski and Vlachos, 2015)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 265, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "b) distressed \u2666 c) soaring d) lightened \u2663 e) punished", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "My morning's work has not been , since it has proved that he has the very strongest motives for standing in the way of anything of the sort a) invisible b) neglected \u2666\u2663 c) overlooked d) wasted e) deliberate", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "That is his fault , but on the whole he's a good worker a) main b) successful c) mother's \u2663 d) generous e) favourite \u2666 Figure 7 : Examples of sentence completion. The correct option is in boldface. Predictions by the LSTM baseline and by our best RMN model are marked by \u2666 and \u2663 respectively. Table 4 : Accuracy on 1,040 test sentences. We use perplexity to choose the best model. Dimension of word embeddings, LSTM hidden states, and gate g parameters are set to d.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 127, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 300, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "bring additional advantage to the model. Mnih and Kavukcuoglu (2013) also report a similar observation. We believe that RMN may achieve further improvements with hyper-parameter optimization. Figure 7 shows some examples where our best RMN beats the already very competitive LSTM baseline, or where both models fail. We can see that in some sentences the necessary clues to predict the correct word occur only to its right. While this seems to conflict with the worse result obtained by the bidirectional-RM, it is important to realize that prediction corresponds to the whole sentence probability. Therefore a badly chosen word can have a negative effect on the score of future words. This appears to be particularly true for the RMN due to its ability to directly access (distant) words in the history. The better performance of unidirectional ver-sus bidirectional-RM may indicate that the attention in the memory block can be distributed reliably only on words that have been already seen and summarized by the current LSTM state. In future work, we may investigate whether different ways to combine two RMNs running in opposite directions further improve accuracy on this challenging task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 68, |
|
"text": "Mnih and Kavukcuoglu (2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 200, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Completion Challenge", |
|
"sec_num": "6" |
|
}, |
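The scoring procedure implied here can be sketched as follows: each candidate is inserted into the sentence and the completion with the highest total log-probability under the language model is selected. The log_prob_next interface is a hypothetical stand-in for the trained RMN.

```python
import math

def sentence_score(words, log_prob_next):
    # sum of log P(w_t | w_1..w_{t-1}); log_prob_next(history, word) -> float
    return sum(log_prob_next(words[:t], words[t]) for t in range(1, len(words)))

def complete(tokens_with_blank, candidates, log_prob_next):
    """Pick the candidate that maximizes whole-sentence log-probability."""
    scores = {c: sentence_score([c if w == "___" else w for w in tokens_with_blank],
                                log_prob_next)
              for c in candidates}
    return max(scores, key=scores.get)

# toy usage with a uniform dummy model over a 64k vocabulary
dummy = lambda history, word: -math.log(64000.0)
best = complete("my heart is already ___ since i have confided my trouble to you".split(),
                ["falling", "distressed", "soaring", "lightened", "punished"], dummy)
```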
|
{ |
|
"text": "We have proposed the Recurrent Memory Network (RMN), a novel recurrent architecture for language modeling. Our RMN outperforms LSTMs in terms of perplexity on three large dataset and allows us to analyze its behavior from a linguistic perspective. We find that RMNs learn important co-occurrences regardless of their distance. Even more interestingly, our RMN implicitly captures certain dependency types that are important for word prediction, despite being trained without any syntactic information. Finally RMNs obtain excellent performance at modeling sentence coherence, setting a new state of the art on the challenging sentence completion task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Our code and data are available at https://github. com/ketranm/RMN", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The full plots are available at https://github.com/ ketranm/RMN. The German and Italian tag sets are explained in(Simi et al., 2014) and(Foth, 2006) respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Some dependency directions, like obj\u2190 in Italian, are almost never observed due to order constraints of the language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The authors use a weighted combination of skip-ngram and RNN without giving any technical details.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was funded in part by the Netherlands Organization for Scientific Research (NWO) under project numbers 639.022.213 and 612.001.218.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Accurate dependency parsing with a stacked multilayer perceptron", |
|
"authors": [ |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Attardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felice", |
|
"middle": [], |
|
"last": "Dell'orletta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Simi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Turian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Evalita'09, Evaluation of NLP and Speech Tools for Italian", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Giuseppe Attardi, Felice Dell'Orletta, Maria Simi, and Joseph Turian. 2009. Accurate dependency parsing with a stacked multilayer perceptron. In Proceedings of Evalita'09, Evaluation of NLP and Speech Tools for Italian, Reggio Emilia, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ICLR 2015", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. In ICLR 2015, San Diego, CA, USA, May.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning long-term dependencies with gradient descent is difficult", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrice", |
|
"middle": [], |
|
"last": "Simard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Frasconi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Transaction on Neural Networks", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "157--166", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Transaction on Neural Networks, 5(2):157-166, March.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A neural probabilistic language model", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9jean", |
|
"middle": [], |
|
"last": "Ducharme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Janvin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1137--1155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. J. Mach. Learn. Res., 3:1137-1155, March.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Findings of the 2015 workshop on statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajen", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Hokamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varvara", |
|
"middle": [], |
|
"last": "Logacheva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Negri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Turchi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Transla- tion, pages 1-46, Lisbon, Portugal, September. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Tree-structured composition in neural networks without tree-structured architectures", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of Proceedings of the NIPS 2015 Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. 2015. Tree-structured composi- tion in neural networks without tree-structured archi- tectures. In Proceedings of Proceedings of the NIPS 2015 Workshop on Cognitive Computation: Integrat- ing Neural and Symbolic Approaches, December.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "One billion word benchmark for measuring progress in statistical language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Ciprian", |
|
"middle": [], |
|
"last": "Chelba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Brants", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phillipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Robinson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. Technical report, Google.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Strategies for Training Large Vocabulary Neural Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Welin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Welin Chen, David Grangier, and Michael Auli. 2015. Strategies for Training Large Vocabulary Neural Lan- guage Models. ArXiv e-prints, December.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "On the properties of neural machine translation: Encoder-decoder approaches", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the proper- ties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation, pages 103-111, Doha, Qatar, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Junyoung", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "NIPS Deep Learning and Representation Learning Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence mod- eling. In NIPS Deep Learning and Representation Learning Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Transitionbased dependency parsing with stack long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Austin", |
|
"middle": [], |
|
"last": "Matthews", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "334--343", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 334-343, Beijing, China, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Finding structure in time", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Cognitive Science", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "179--211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cog- nitive Science, 14(2):179-211.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Sentence compression by deletion with lstms", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Filippova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Alfonseca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Colmenares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "360--368", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Filippova, Enrique Alfonseca, Carlos A. Col- menares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 360-368, Lisbon, Portugal, September. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Eine umfassende Constraint-Dependenz-Grammatik des Deutschen", |
|
"authors": [ |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Foth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Fachbereich Informatik", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilian A. Foth. 2006. Eine umfassende Constraint- Dependenz-Grammatik des Deutschen. Fachbereich Informatik.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Neural turing machines. CoRR", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Wayne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivo", |
|
"middle": [], |
|
"last": "Danihelka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR, abs/1410.5401.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "LSTM: A search space odyssey", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Greff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rupesh", |
|
"middle": [ |
|
"Kumar" |
|
], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Koutn\u00edk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bas", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Steunebrink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus Greff, Rupesh Kumar Srivastava, Jan Koutn\u00edk, Bas R. Steunebrink, and J\u00fcrgen Schmidhuber. 2015. LSTM: A search space odyssey. CoRR, abs/1503.04069.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "DRAW: A recurrent neural network for image generation", |
|
"authors": [ |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Jimenez Rezende", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daan", |
|
"middle": [], |
|
"last": "Wierstra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32nd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1462--1471", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danilo Jimenez Rezende, and Daan Wierstra. 2015. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 1462-1471.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Training and analysing deep recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Michiel", |
|
"middle": [], |
|
"last": "Hermans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Schrauwen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "190--198", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michiel Hermans and Benjamin Schrauwen. 2013. Training and analysing deep recurrent neural net- works. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 190-198. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735- 1780, November.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "An empirical exploration of recurrent network architectures", |
|
"authors": [ |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "J\u00f3zefowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Zaremba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32nd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2342--2350", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rafal J\u00f3zefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd Interna- tional Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2342-2350.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Grid long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivo", |
|
"middle": [], |
|
"last": "Danihelka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2015. Grid long short-term memory. CoRR, abs/1507.01526.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Visualizing and understanding recurrent networks", |
|
"authors": [ |
|
{ |
|
"first": "Andrej", |
|
"middle": [], |
|
"last": "Karpathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei-Fei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. CoRR, abs/1506.02078.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "The PAIS\u00c0 corpus of italian web texts", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dittmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Lenci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vito", |
|
"middle": [], |
|
"last": "Pirrelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 9th Web as Corpus Workshop (WaC-9", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "36--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dittmann, Alessandro Lenci, and Vito Pirrelli. 2014. The PAIS\u00c0 corpus of italian web texts. In Proceedings of the 9th Web as Corpus Workshop (WaC-9), pages 36-43, Gothenburg, Sweden, April. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Building a large annotated corpus of english: The penn treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Comput. Linguist", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of english: The penn treebank. Comput. Linguist., 19(2):313-330, June.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Recurrent neural network based language model", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Karafi\u00e1t", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luk\u00e1s", |
|
"middle": [], |
|
"last": "Burget", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Cernock\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1045--1048", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Luk\u00e1s Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Re- current neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045-1048.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Dependency recurrent neural language models for sentence completion", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Mirowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Vlachos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "511--517", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Mirowski and Andreas Vlachos. 2015. Depen- dency recurrent neural language models for sentence completion. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 511-517, Beijing, China, July. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Learning word embeddings efficiently with noise-contrastive estimation", |
|
"authors": [ |
|
{ |
|
"first": "Andriy", |
|
"middle": [], |
|
"last": "Mnih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "2265--2273", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive es- timation. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 2265-2273. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "On the difficulty of training recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ICML", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "1310--1318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural net- works. In ICML (3), volume 28 of JMLR Proceedings, pages 1310-1318.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Dropout improves recurrent neural networks for handwriting recognition", |
|
"authors": [ |
|
{ |
|
"first": "Vu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Bluche", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Th\u00e9odore", |
|
"middle": [], |
|
"last": "Kermorvant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e9r\u00f4me", |
|
"middle": [], |
|
"last": "Louradour", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Frontiers in Handwriting Recognition (ICFHR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "285--290", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vu Pham, Christopher Bluche, Th\u00e9odore Kermorvant, and J\u00e9r\u00f4me Louradour. 2014. Dropout improves re- current neural networks for handwriting recognition. In International Conference on Frontiers in Handwrit- ing Recognition (ICFHR), pages 285-290, Sept.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Exploiting synergies between open resources for german dependency parsing, pos-tagging, and morphological analysis", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Volk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerold", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "601--609", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Martin Volk, and Gerold Schneider. 2013. Exploiting synergies between open resources for ger- man dependency parsing, pos-tagging, and morpho- logical analysis. In Recent Advances in Natural Lan- guage Processing (RANLP 2013), pages 601-609, September.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Less is more? towards a reduced inventory of categories for training a parser for the italian stanford dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Simi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cristina", |
|
"middle": [], |
|
"last": "Bosco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simonetta", |
|
"middle": [], |
|
"last": "Montemagni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Simi, Cristina Bosco, and Simonetta Montemagni. 2014. Less is more? towards a reduced inventory of categories for training a parser for the italian stanford dependencies. In Proceedings of the Ninth Interna- tional Conference on Language Resources and Evalu- ation (LREC'14), Reykjavik, Iceland, may. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Confer- ence on Empirical Methods in Natural Language Pro- cessing, EMNLP '11, pages 151-161, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Dropout: A simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929- 1958, January.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "End-to-end memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Sainbayar", |
|
"middle": [], |
|
"last": "Sukhbaatar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Szlam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Fergus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "2431--2439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, R. Garnett, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 28, pages 2431- 2439. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N.D.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Advances in Neural Information Processing Systems", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "A challenge set for advancing language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [ |
|
"J C" |
|
], |
|
"last": "Burges", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, WLM '12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geoffrey Zweig and Chris J. C. Burges. 2012. A chal- lenge set for advancing language modeling. In Pro- ceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Fu- ture of Language Modeling for HLT, WLM '12, pages 29-36, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "A graphical illustration of an unfolded RMR with memory size 4. Dashed line indicates concatenation. The MB takes the output of the bottom LSTM layer and the 4-word history as its input. The output of the MB is then passed to the second LSTM layer on top. There is no direct connection between MBs of different time steps. The last LSTM layer carries the MB's outputs recurrently.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Average attention per position of RMN history. Top: RMR(-tM-g), bottom: RM(+tM-g). Rightmost positions represent most recent history.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Attention visualization of 100 word samples. Bottom positions in each plot represent most recent history. Darker color means higher weight.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"text": "who was awarded in 1692 by-the Emperor Leopold I of-the Translation: \u2026 who was awarded the title by Emperor Leopold I in 1692 (c) Examples of distant memory positions attended by RMN. The resulting top five word predictions are shown with the respective log-probabilities. The correct choice (in bold) was ranked first in sentences (a,b) and second in (c).", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"text": "Average attention weights per position, broken down by dependency relation type+direction between the attended word and the word to predict. Top: German. Bottom: Italian. More distant positions are binned.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td colspan=\"2\">Lang Train Dev</td><td>Test</td><td colspan=\"2\">|s| |V |</td></tr><tr><td>En</td><td colspan=\"2\">26M 223K 228K</td><td>26</td><td>77K</td></tr><tr><td>De</td><td colspan=\"2\">22M 202K 203K</td><td colspan=\"2\">22 111K</td></tr><tr><td>It</td><td colspan=\"2\">29M 207K 214K</td><td colspan=\"2\">29 104K</td></tr></table>", |
|
"num": null, |
|
"text": "ble 1 summarizes the data used in our experiments.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Data statistics. |s| denotes the average sentence length and |V | the vocabulary size. The training data correspond to approximately 1M sentences in each language. For English, we use all the News Commentary data (8M tokens) and 18M tokens from News Crawl 2014 for training. Development and test data are randomly drawn from the concatenation of the WMT 2009-2014 test sets", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>: Perplexity comparison including RMN</td></tr><tr><td>variants with and without temporal matrix (tM) and</td></tr><tr><td>linear (l) versus gating (g) composition function.</td></tr></table>", |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |