{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:27:47.037752Z"
},
"title": "Scaling Hidden Markov Language Models",
"authors": [
{
"first": "Justin",
"middle": [
"T"
],
"last": "Chiu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The hidden Markov model (HMM) is a fundamental tool for sequence modeling that cleanly separates the hidden state from the emission structure. However, this separation makes it difficult to fit HMMs to large datasets in modern NLP, and they have fallen out of use due to very poor performance compared to fully observed models. This work revisits the challenge of scaling HMMs to language modeling datasets, taking ideas from recent approaches to neural modeling. We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization. Experiments show that this approach leads to models that are more accurate than previous HMM and n-gram-based methods, making progress towards the performance of state-of-the-art neural models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The hidden Markov model (HMM) is a fundamental tool for sequence modeling that cleanly separates the hidden state from the emission structure. However, this separation makes it difficult to fit HMMs to large datasets in modern NLP, and they have fallen out of use due to very poor performance compared to fully observed models. This work revisits the challenge of scaling HMMs to language modeling datasets, taking ideas from recent approaches to neural modeling. We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization. Experiments show that this approach leads to models that are more accurate than previous HMM and n-gram-based methods, making progress towards the performance of state-of-the-art neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Hidden Markov models (HMMs) are a fundamental latent-variable model for sequential data, with a rich history in NLP. They have been used extensively in tasks such as tagging (Merialdo, 1994) , alignment (Vogel et al., 1996) , and even, in a few cases, language modeling (Kuhn et al., 1994; Huang, 2011) . Compared to other sequence models, HMMs are appealing since they fully separate the process of generating hidden states from observations, while allowing for exact posterior inference.",
"cite_spans": [
{
"start": 174,
"end": 190,
"text": "(Merialdo, 1994)",
"ref_id": "BIBREF17"
},
{
"start": 203,
"end": 223,
"text": "(Vogel et al., 1996)",
"ref_id": null
},
{
"start": 270,
"end": 289,
"text": "(Kuhn et al., 1994;",
"ref_id": "BIBREF12"
},
{
"start": 290,
"end": 302,
"text": "Huang, 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "State-of-the-art systems in NLP have moved away from utilizing latent hidden states and toward deterministic deep neural models. We take several lessons from the success of neural models for NLP tasks: (a) model size is critical for accuracy, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Code available at github.com/harvardnlp/hmm-lm large LSTMs (Zaremba et al., 2014) show marked improvements in performance; (b) the right parameterization is critically important for representation learning, e.g. a feedforward model (Bengio et al., 2003) can have the same distributional assumptions as an n-gram model while performing significantly better; (c) dropout is key to achieving strong performance (Zaremba et al., 2014; Merity et al., 2017) .",
"cite_spans": [
{
"start": 53,
"end": 81,
"text": "LSTMs (Zaremba et al., 2014)",
"ref_id": null
},
{
"start": 232,
"end": 253,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF0"
},
{
"start": 408,
"end": 430,
"text": "(Zaremba et al., 2014;",
"ref_id": null
},
{
"start": 431,
"end": 451,
"text": "Merity et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We revisit HMMs for language modeling as an alternative to modern neural models, while considering key empirical lessons from these approaches. Towards that goal, we introduce three techniques: a modeling constraint that allows us to use a large number of hidden states while maintaining efficient exact inference, a neural parameterization that improves generalization while remaining faithful to the probabilistic structure of the HMM, and a variant of dropout that both improves accuracy and halves the computational overhead during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments employ HMMs on two language modeling datasets. Our approach allows us to train an HMM with tens of thousands of states while maintaining efficiency and significantly outperforming past HMMs as well as n-gram models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to improve the performance of HMMs on language modeling, several recent papers have combined HMMs with neural networks. Buys et al. (2018) develop an approach to relax HMMs, but their models either perform poorly or alter the probabilistic structure to resemble an RNN. Krakovna and Doshi-Velez (2016) utilize model combination with an RNN to connect both approaches in a small state-space model. Our method instead focuses on scaling pure HMMs to a large number of states.",
"cite_spans": [
{
"start": 129,
"end": 147,
"text": "Buys et al. (2018)",
"ref_id": "BIBREF3"
},
{
"start": 279,
"end": 310,
"text": "Krakovna and Doshi-Velez (2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Prior work has also considered neural parameterizations of HMMs. Tran et al. (2016) demonstrate improvements in POS induction with a neural parameterization of an HMM. They consider small state spaces, as the goal is tag induction rather than language modeling. 1 Most similar to this work are the large HMM models of Dedieu et al. (2019) . They introduce a sparsity constraint in order to train a 30K state nonneural HMM for character-level language modeling; however, their constraint precludes application to large vocabularies. We overcome this limitation and train models with neural parameterizations on word-level language modeling.",
"cite_spans": [
{
"start": 59,
"end": 83,
"text": "HMMs. Tran et al. (2016)",
"ref_id": null
},
{
"start": 262,
"end": 263,
"text": "1",
"ref_id": null
},
{
"start": 318,
"end": 338,
"text": "Dedieu et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, another approach for scaling state spaces is to grow from small to big via a split-merge process (Petrov et al., 2006; Huang, 2011) . In particular, Huang (2011) learn an HMM for language modeling via this process. As fixed-size state spaces are amenable to batching on modern hardware, we leave split-merge procedures for future work.",
"cite_spans": [
{
"start": 106,
"end": 127,
"text": "(Petrov et al., 2006;",
"ref_id": "BIBREF22"
},
{
"start": 128,
"end": 140,
"text": "Huang, 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We are interested in learning a distribution over observed tokens x = x 1 , . . . , x T , with each token x t an element of the finite vocabulary X . Hidden Markov models (HMMs) specify a joint distribution over observed tokens x and discrete latent states z = z 1 , . . . , z T , with each z t from the finite set Z. For notational convenience, we define the starting state z 0 = . This yields the joint distribution,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: HMMs",
"sec_num": "3"
},
{
"text": "p(x, z; \u03b8) = T t=1 p(x t | z t )p(z t | z t\u22121 ). (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: HMMs",
"sec_num": "3"
},
{
"text": "We refer to the transition and emission matrices as the distributional parameters of the HMM. Specifically, let A \u2208 [0, 1] |Z|\u00d7|Z| be the transition probabilities and O \u2208 [0, 1] |Z|\u00d7|X | the emission probabilities,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: HMMs",
"sec_num": "3"
},
{
"text": "p(z t | z t\u22121 ) = A z t\u22121 zt p(x t | z t ) = O ztxt . (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: HMMs",
"sec_num": "3"
},
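To make Eqs. (1)-(2) concrete, the following minimal sketch scores a single (x, z) pair under a scalar parameterization. The toy sizes, random parameters, and the joint_log_prob helper are illustrative assumptions, not the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
num_states, vocab_size = 4, 6                              # toy sizes, not the paper's

# Scalar parameterization: each row of A and O is a categorical distribution.
A = rng.dirichlet(np.ones(num_states), size=num_states)    # A[z', z] = p(z_t = z | z_{t-1} = z')
O = rng.dirichlet(np.ones(vocab_size), size=num_states)    # O[z, x] = p(x_t = x | z_t = z)
start = rng.dirichlet(np.ones(num_states))                 # p(z_1 | z_0 = start symbol)

def joint_log_prob(x, z):
    """log p(x, z) = sum_t log p(z_t | z_{t-1}) + log p(x_t | z_t)  (Eq. 1)."""
    lp = np.log(start[z[0]]) + np.log(O[z[0], x[0]])
    for t in range(1, len(x)):
        lp += np.log(A[z[t - 1], z[t]]) + np.log(O[z[t], x[t]])
    return lp

print(joint_log_prob(x=[2, 0, 5, 1], z=[1, 3, 0, 2]))
```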
{
"text": "We distinguish between two types of model parameterizations: scalar and neural, where the model parameters are given by \u03b8. A scalar parameterization sets the model parameters equal to the distributional parameters, so that \u03b8 = {A, O}, resulting in O(|Z| 2 + |Z||X |) model parameters. A neural parameterization instead generates the distributional parameters from a neural network (with parameters \u03b8), decoupling the size of \u03b8 from A, O. This decoupling gives us the ability to choose between compact or overparameterized \u03b8 (relative to A, O). As we scale to large state spaces, we take advantage of compact neural parameterizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: HMMs",
"sec_num": "3"
},
{
"text": "In order to fit an HMM to data x, we must marginalize over the latent states to obtain the likelihood p(x) = z p(x, z). This sum can be computed in time O(T |Z| 2 ) via the forward algorithm, which becomes prohibitive if the number of latent states |Z| is large. We can then optimize the likelihood with gradient ascent (or alternative variants of expectation maximization). HMMs and RNNs Although the forward algorithm resembles that of the forward pass in a recurrent neural network (RNN) (Buys et al., 2018) , there are key representational differences. RNNs do not decouple the latent dynamics from the observed. This often leads to improved accuracy, but precludes posterior inference which is useful for interpretability. A further benefit of HMMs over RNNs is that their associative structure allows for parallel inference via the prefix-sum algorithm (Ladner and Fischer, 1980 ). 2 Finally, HMMs bottleneck information from every timestep through a discrete hidden state. NLP has a long history of utilizing discrete representations, and discrete representations may yield interesting results. For example, recent work has found that discrete latent variables work well in low-resource regimes (Jin et al., 2020) .",
"cite_spans": [
{
"start": 491,
"end": 510,
"text": "(Buys et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 859,
"end": 884,
"text": "(Ladner and Fischer, 1980",
"ref_id": "BIBREF13"
},
{
"start": 1202,
"end": 1220,
"text": "(Jin et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background: HMMs",
"sec_num": "3"
},
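The O(T|Z|^2) forward algorithm described above can be written in a few lines; this log-space sketch uses toy random parameters (an assumption for illustration) and scipy's logsumexp, not the paper's implementation.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
num_states, vocab_size = 4, 6                       # toy sizes
log_A = np.log(rng.dirichlet(np.ones(num_states), size=num_states))
log_O = np.log(rng.dirichlet(np.ones(vocab_size), size=num_states))
log_start = np.log(rng.dirichlet(np.ones(num_states)))

def forward_log_marginal(x):
    """log p(x) = log sum_z p(x, z), computed in O(T |Z|^2) time."""
    alpha = log_start + log_O[:, x[0]]                              # alpha_1(z)
    for t in range(1, len(x)):
        # alpha_t(z') = logsumexp_z(alpha_{t-1}(z) + log A[z, z']) + log O[z', x_t]
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_O[:, x[t]]
    return logsumexp(alpha)

print(forward_log_marginal([2, 0, 5, 1]))
```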
{
"text": "We propose three extensions to scale HMMs for better language modeling performance: blocked emissions, which allow for very large models; neural parameterization, which makes it easy for states to share model parameters; and state dropout, which encourages broader state usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "Blocked Emissions Our main goal is to apply a HMM with a large number of hidden states to learn the underlying dynamics of language data. However, the O(T |Z| 2 ) complexity of marginal inference practically limits the number of HMM states. We can get around this limit by making an assump-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "Figure 1: The emission matrix as a set of blocks O 1 , . . . , O 4 with fixed number of states k. The columns of each block may vary, as there is no constraint on the number of words a state can emit. Each non-zero cell is constructed from an MLP applied to word E x and state E z embeddings. tion on the HMM emission matrix O. As noted by Dedieu et al. (2019) , restricting the number of states that can produce each word can improve inference complexity. We utilize a slightly stronger assumption on the model: a) states are partitioned into M equal sized groups each of which emit the same subset of words, and b) each word is only admitted by one group of k = |Z|/M states which we indicate as Z x \u2282 Z.",
"cite_spans": [
{
"start": 340,
"end": 360,
"text": "Dedieu et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
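The bookkeeping behind this constraint is simple: each word type is assigned to one of M blocks, and its admissible states Z_x are the k consecutive states of that block. The round-robin word_to_block map below is a placeholder for Brown clusters (Sec. 5) and the sizes are toy values, used only for illustration.

```python
import numpy as np

num_states, vocab_size, M = 16, 20, 4      # toy sizes; the paper uses e.g. |Z| = 2^15, M = 128
k = num_states // M                        # states per group

# Placeholder partition of the vocabulary into M blocks (the paper uses Brown clusters).
word_to_block = np.arange(vocab_size) % M

def admissible_states(word):
    """Z_x: the k states allowed to emit `word` (the states of its block)."""
    m = word_to_block[word]
    return np.arange(m * k, (m + 1) * k)

print(admissible_states(7))                # -> the 4 states of block 3
```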
{
"text": "We implement this group structure through a set of blocked emissions, each corresponding to one of the M state groups,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "O = \uf8ee \uf8f0 O 1 0 0 0 . . . 0 0 0 O M \uf8f9 \uf8fb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "where O m \u2208 R k\u00d7|Xm| . Figure 1 shows these emission blocks. Each block matrix O m gives the probabilities for emitting tokens X m for states in group m, i.e. states (m \u2212 1)k through mk. With this constraint, exact marginalization can be computed via",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "p(x) = z 1 \u2208Zx 1 p(z 1 | z 0 )p(x 1 | z 1 )\u00d7 \u2022 \u2022 \u2022 z T \u2208Zx T p(z T | z T \u22121 )p(x T | z T ) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
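A sketch of the blocked marginalization in Eq. (3): the forward recursion only ever touches the k admissible states per timestep, giving O(Tk^2) serial work. The sizes, the round-robin block map, and the random parameters are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
num_states, vocab_size, M = 16, 20, 4
k = num_states // M
word_to_block = np.arange(vocab_size) % M          # placeholder for Brown clusters

log_A = np.log(rng.dirichlet(np.ones(num_states), size=num_states))
log_start = np.log(rng.dirichlet(np.ones(num_states)))

# Toy blocked emissions: state z only emits words of block z // k (rows renormalized).
O = rng.random((num_states, vocab_size))
for z in range(num_states):
    O[z, word_to_block != z // k] = 0.0
O /= O.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore"):
    log_O = np.log(O)

def states_for(word):
    m = word_to_block[word]
    return np.arange(m * k, (m + 1) * k)

def blocked_forward(x):
    """log p(x), touching only the k admissible states per step: O(T k^2)."""
    prev = states_for(x[0])
    alpha = log_start[prev] + log_O[prev, x[0]]
    for t in range(1, len(x)):
        cur = states_for(x[t])
        trans = log_A[np.ix_(prev, cur)]                                  # k x k slice
        alpha = logsumexp(alpha[:, None] + trans, axis=0) + log_O[cur, x[t]]
        prev = cur
    return logsumexp(alpha)

print(blocked_forward([3, 7, 0, 11]))
```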
{
"text": "Since there are only k states with nonzero probability of occurring at every timestep, we only need to consider transitioning from the |Z xt | = k previous states to the next |Z x t+1 | = k states, resulting in O(k 2 ) operations per timestep. This gives a serial complexity of O(T k 2 ). Neural Parameterization A larger state space allows for longer HMM memory, but it also may require more parameters. Even with blocked emissions, the scalar model parameterization of an HMM grows as O(|Z| 2 ) due to the transition matrix. A neural parameterization allows us to share parameters between words and states to capture common structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "Our parameterization uses an embedding for each state in Z (E z \u2208 R |Z|\u00d7h ) and each token in X (E x \u2208 R |X |\u00d7h ). From these we can create representations for leaving and entering a state, as well as emitting a word:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "H out , H in , H emit = MLP(E z )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "with all in R |Z|\u00d7h . The HMM distributional parameters are then computed as, 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "O \u221d exp(H emit E x )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "A \u221d exp(H in H out ) (4) The MLP architecture follows Kim et al. (2019) , with details in the appendix. This factorized parameterization, shown in Figure 1 , reduces the total parameters to O(h 2 + h|Z| + h|X |).",
"cite_spans": [
{
"start": 54,
"end": 71,
"text": "Kim et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 147,
"end": 155,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
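A sketch of Eq. (4) in PyTorch with toy sizes: the state and word embeddings are pushed through a stand-in MLP (the paper's exact residual MLP appears in the appendix), and rows are normalized with a softmax so that A and O are stochastic matrices. All names and sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_states, vocab_size, h = 64, 100, 32            # toy sizes

E_z = torch.randn(num_states, h)                   # state embeddings
E_x = torch.randn(vocab_size, h)                   # word embeddings

# Stand-in MLP heads producing H_out, H_in, H_emit from the state embeddings.
heads = {name: nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, h))
         for name in ("out", "in", "emit")}
H_out, H_in, H_emit = (heads[name](E_z) for name in ("out", "in", "emit"))

# Eq. (4): row-wise softmax of the dot products (order of factors follows the text).
A = torch.softmax(H_in @ H_out.T, dim=-1)          # |Z| x |Z| transition matrix
O = torch.softmax(H_emit @ E_x.T, dim=-1)          # |Z| x |X| emission matrix (no block mask here)
print(A.shape, O.shape, A.sum(-1)[:3])
```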
{
"text": "Note that parameter computation is independent of inference and can be cached completely as the emission and transition matrices, A and O, at testtime. For the training algorithm, shown in Algorithm 1, we compute A and O once per batch while RNNs and similar models recompute emissions every token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
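A minimal training-loop sketch in the spirit of Algorithm 1: the distributional parameters A and O are (re)built once per batch, each sequence is scored with the forward recursion, and the parameters are updated by gradient ascent on log p(x). The scalar logits, toy batch, and AdamW settings are assumptions for illustration, not the released training code.

```python
import torch

torch.manual_seed(0)
num_states, vocab_size = 8, 12                     # toy sizes

# Unconstrained logits standing in for the model parameters theta.
theta_A = torch.randn(num_states, num_states, requires_grad=True)
theta_O = torch.randn(num_states, vocab_size, requires_grad=True)
theta_s = torch.randn(num_states, requires_grad=True)
opt = torch.optim.AdamW([theta_A, theta_O, theta_s], lr=1e-2)

def log_marginal(x, log_A, log_O, log_start):
    alpha = log_start + log_O[:, x[0]]
    for t in range(1, len(x)):
        alpha = torch.logsumexp(alpha[:, None] + log_A, dim=0) + log_O[:, x[t]]
    return torch.logsumexp(alpha, dim=0)

batch = [[3, 1, 7, 2], [5, 5, 0, 9]]               # toy token sequences
for step in range(3):
    # Compute A, O once for the whole batch, then score every sequence with them.
    log_A = torch.log_softmax(theta_A, dim=-1)
    log_O = torch.log_softmax(theta_O, dim=-1)
    log_start = torch.log_softmax(theta_s, dim=-1)
    loss = -sum(log_marginal(x, log_A, log_O, log_start) for x in batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(step, loss.item())
```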
{
"text": "Dropout as State Reduction Finally, to encourage full use of the large state space, we introduce dropout that prevents the model from favoring specific states. We propose a form of HMM state dropout that removes states from use entirely at each batch, which also has the added benefit of speeding up inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "Figure 2: The computation of p(x) is greatly reduced by blocked emissions and state dropout. In the above trellis, each row corresponds to a latent state and each column after the first to a timestep. Each edge between nodes corresponds to a nonzero transition probability. Blocked emissions result in a small subset of all states emitting a given word, as shown by the rectangles. State dropout (leftmost column) allows us to further reduce the number of states we consider, halving the number of (white) states that have nonzero probability in each rectangle. In experiments, the number of possible transitions may be as large as 2 30 while the max number of non-zero transitions is 2 16 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "State dropout acts on each emission block O 1 , . . . , O M independently. For each block, we sample a binary dropout mask by sampling \u03bbk dropped row indices uniformly without replacement, where \u03bb is the dropout rate. We concatenate these into a global vector b \u2208 {0, 1} |Z| , which, along with the previous constraints, ensures,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(z t | z t\u22121 ) \u221d b zt A z t\u22121 zt p(x t | z t ) \u221d b zt 1(z \u2208 Z xt )O ztxt",
"eq_num": "(5)"
}
],
"section": "Scaling HMMs",
"sec_num": "4"
},
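A sketch of the dropout mask in Eq. (5): for every one of the M blocks we drop \u03bbk of its k states uniformly without replacement, concatenate the block masks into b, and renormalize the surviving probabilities. The sizes and random parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_states, M, lam = 16, 4, 0.5
k = num_states // M
n_drop = int(lam * k)

# One binary mask per block, concatenated into a global b in {0, 1}^|Z|.
b = np.ones(num_states, dtype=bool)
for m in range(M):
    dropped = rng.choice(k, size=n_drop, replace=False) + m * k
    b[dropped] = False

# Eq. (5): zero out dropped states, then renormalize (shown here for the transition rows).
A = rng.dirichlet(np.ones(num_states), size=num_states)
A_drop = A * b[None, :]
A_drop /= A_drop.sum(axis=1, keepdims=True)
print(b.astype(int), A_drop[0].sum())
```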
{
"text": "An example of the HMM lattice after state dropout is show in Figure 2 . In addition to accuracy improvements, state dropout gives a large practical speed up for both parameter computation and inference. For \u03bb = 0.5 we get a 4\u00d7 speed improvement for both, due to the reduction in possible transitions. This structured dropout is also easy to exploit on GPU, as it maintains block structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scaling HMMs",
"sec_num": "4"
},
{
"text": "Emission Blocks The model requires partitioning token types into blocks X m . While there are many partitioning methods, a natural choice is Brown clusters (Brown et al., 1992; Liang, 2005) which are also based on HMMs. Brown clusters are obtained by assigning every token type in X a state in an HMM, then merging states until a desired number of partitions M is reached. We construct the Brown clusters on the training portions of the datasets and assume the vocabulary remains identical at test time (with OOV words mapped to unk). We include more background on Brown Clusters in the appendix. State Dropout We use a dropout rate of \u03bb = 0.5 at training time. For each block of size |X m |, we sample \u03bb|X m | states to use in that block each batch. We draw states from each block from a multivariate hypergeometric distribution using the Gumbel Topk trick for sampling without replacement (Vieira, 2014) . At test time we do not use state dropout. Datasets We evaluate on the PENN TREEBANK (Marcus et al., 1993 ) (929k train tokens, 10k vocab) and WIKITEXT2 ) (2M train tokens, 33k vocab) datasets. For PENN TREE-BANK we use the preprocessing from Mikolov et al. (2011) , which lowercases all words and substitutes OOV words with unks. We insert EOS tokens after each sentence. For WIKITEXT2 casing is preserved, and all OOV words are unked. We insert EOS tokens after each paragraph. In both datasets OOV words were included in the perplexity (as unks), and EOS was included in the perplexity as well (Merity et al., 2017) . Baselines Baselines include both state-of-the-art language models and other alternative LM styles. These include AWD-LSTM (Merity et al., 2017) ; a 900-state scalar HMM and HMM+RNN extension, which discards the HMM assumptions (Buys et al., 2018) ; a traditional Kneser-Ney 5-gram model (Mikolov and Zweig, 2012; Heafield et al., 2013) , a 256 dimension feedforward neural model, and a 2-layer 256 dimension LSTM.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(Brown et al., 1992;",
"ref_id": "BIBREF2"
},
{
"start": 177,
"end": 189,
"text": "Liang, 2005)",
"ref_id": "BIBREF14"
},
{
"start": 891,
"end": 905,
"text": "(Vieira, 2014)",
"ref_id": "BIBREF24"
},
{
"start": 992,
"end": 1012,
"text": "(Marcus et al., 1993",
"ref_id": "BIBREF16"
},
{
"start": 1150,
"end": 1171,
"text": "Mikolov et al. (2011)",
"ref_id": "BIBREF21"
},
{
"start": 1504,
"end": 1525,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 1650,
"end": 1671,
"text": "(Merity et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 1755,
"end": 1774,
"text": "(Buys et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 1815,
"end": 1840,
"text": "(Mikolov and Zweig, 2012;",
"ref_id": "BIBREF20"
},
{
"start": 1841,
"end": 1863,
"text": "Heafield et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
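The Gumbel top-k trick mentioned above (Vieira, 2014) can be sketched in a few lines: perturb the log-weights with Gumbel noise and keep the k largest, which samples indices without replacement in proportion to the weights. The uniform weights and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_top_k(log_weights, k):
    """Sample k indices without replacement, proportional to exp(log_weights)."""
    gumbels = -np.log(-np.log(rng.uniform(size=len(log_weights))))
    return np.argsort(log_weights + gumbels)[::-1][:k]

block_states, keep = 256, 128                       # e.g. keep half the states of a block
log_weights = np.zeros(block_states)                # uniform weights over the block's states
print(gumbel_top_k(log_weights, keep)[:10])
```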
{
"text": "We compare these with our approach: the very large neural HMM (VL-HMM). Unless otherwise noted, our model has |Z| = 2 15 total states but only considers k = 256 states at every timestep at test time with M = 128 groups. 5 The state and word embeddings as well as the MLP have a hidden dimension of 256. We train with a state dropout rate of \u03bb = 0.5. See the appendix for all hyperparameters.",
"cite_spans": [
{
"start": 220,
"end": 221,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
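As a quick sanity check of the quoted configuration, the group size follows directly from |Z| and M:

```python
# Sizes quoted in Sec. 5: |Z| = 2^15 states split into M = 128 groups.
num_states, M = 2 ** 15, 128
k = num_states // M
print(num_states, M, k)          # 32768 128 256: only k = 256 states are live per timestep
```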
{
"text": "Param Table 1 gives the main results. On PTB, the VL-HMM is able to achieve 125.0 perplexity on the valid set, outperforming a FF baseline (159.9) and vastly outperforming the 900-state HMM from Buys et al. (2018) (284.6). 6 The VL-HMM also outperforms the HMM+RNN extension of Buys et al. (2018) (142.3). These results indicate that HMMs are a much stronger model on this benchmark than previously claimed. However, the VL-HMM is still outperformed by LSTMs which have been extensively studied for this task. This trend persists in WIKITEXT-2, with the VL-HMM outperforming the FF model but underperforming an LSTM. Figure 3 examines the effect of state size: We find that performance continuously improves significantly as we grow to 2 16 states, justifying the large state space. The marginal improvement does lower as the number of states increases, implying that the current approach may have limitations in scaling to even larger state spaces. Table 2 considers other ablations: Although neural and scalar parameterizations reach similar training perplexity, the neural model generalizes better on validation with almost 100x fewer model parameters. We find that state dropout results in both 6 Buys et al. (2018) only report validation perplexity for the HMM and HMM+RNN models, so we compare accordingly. an improvement in perplexity and a large improvement in computational speed. See the appendix for emission sparsity constraint ablations, as well as experiments on further reducing the number of parameters.",
"cite_spans": [
{
"start": 1199,
"end": 1200,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 6,
"end": 13,
"text": "Table 1",
"ref_id": null
},
{
"start": 617,
"end": 625,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 950,
"end": 957,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "This work demonstrates methods for effectively scaling HMMs to large state spaces on parallel hardware, and shows that this approach results in accuracy gains compared to other HMM models. In order to scale, we introduce three techniques: a blocked emission constraint, a neural parameterization, and state dropout, which lead to an HMM that outperforms n-gram models and prior HMMs. Once scaled up to take advantage of modern hardware, very large HMMs demonstrate meaningful improvements over smaller HMMs. HMMs are a useful class of probabilistic models with different inductive biases, performance characteristics, and conditional independence structure than RNNs. Future work includes using these approaches to induce model structure, develop accurate models with better interpretability, and to apply these approaches in lower data regimes. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Brown clustering is an agglomerative clustering approach (Brown et al., 1992; Liang, 2005 ) that assigns every token type a single cluster. The Brown clustering model aims to find an HMM that maximizes the likelihood of an observed corpora under the constraint that every token type can only be emit by a single latent class. The cluster for the word is given by the latent class that emits that token type. Clusters are initialized by assigning every token type a unique latent state in an HMM. States are then merged iteratively until a desired number M is reached. Liang (2005) propose an algorithm that chooses a pair of states to merge at every iteration based on state bigram statistics within a window.",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Brown et al., 1992;",
"ref_id": "BIBREF2"
},
{
"start": 78,
"end": 89,
"text": "Liang, 2005",
"ref_id": "BIBREF14"
},
{
"start": 568,
"end": 580,
"text": "Liang (2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Brown Clustering",
"sec_num": null
},
{
"text": "For PENN TREEBANK and WIKITEXT-2, we trained the following baselines: a two layer FF 256-dim 5-gram model and a two layer 256-dim LSTM. The FF model is given by the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(w t | w <t ) = W x ReLU(W h E w (w t\u22124:t\u22121 ))",
"eq_num": "(6)"
}
],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "where E w gives the word embeddings, W h \u2208 R h\u00d74h , and W x \u2208 R |X |\u00d7h is weight-tied to the word embeddings. The LSTM model is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "p(w t | w <t ) = W x LSTM(E w (w <t )) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "with a 2-layer LSTM that has weight-tied W x and E w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
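A hedged PyTorch sketch of the feedforward baseline in Eq. (6), with the output projection weight-tied to the word embeddings as described. The class name, vocabulary size, and the final log-softmax are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FFLM(nn.Module):
    """Feedforward n-gram LM of Eq. (6): p(w_t | w_{t-4:t-1}) with tied output weights."""
    def __init__(self, vocab_size, h=256, context=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, h)        # E_w
        self.W_h = nn.Linear(context * h, h)            # W_h in R^{h x 4h}

    def forward(self, ctx):                             # ctx: (batch, context) token ids
        e = self.embed(ctx).flatten(1)                  # concatenate the context embeddings
        hidden = torch.relu(self.W_h(e))
        logits = hidden @ self.embed.weight.T           # weight tying: W_x = E_w
        return torch.log_softmax(logits, dim=-1)

model = FFLM(vocab_size=100)
ctx = torch.randint(0, 100, (2, 4))
print(model(ctx).shape)                                 # (2, 100)
```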
{
"text": "For the (5-gram) FF model we use a batch size of 128 and a bptt length of 64, as we found the model needed a larger batch size to achieve decent performance. For the LSTM, we use a batch size of 16 and a BPTT length of 32. For both baseline models we use AdamW (Loshchilov and Hutter, 2017) with a learning rate of 1e-3 and a dropout rate of 0.3 on the activations in the model. Both models use a hidden dimension of h = 256 throughout. These same hyperparameters were applied on both PENN TREEBANK and WIKITEXT-2.",
"cite_spans": [
{
"start": 261,
"end": 290,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "For the HMMs we use a batch size of 16 and a BPTT length of 32. We use state dropout with rate \u03bb = 0.5. We reset the state distribution to p(z 1 | z 0 ) after encountering the EOS symbol. We use AdamW (Loshchilov and Hutter, 2017) with a learning rate of 1e-2 for PENN TREEBANK, and a learning rate of 1e-3 for WIKITEXT-2.",
"cite_spans": [
{
"start": 201,
"end": 230,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "All weights are initialized with the Kaiming uniform initialization. The FF model was trained for 100 epochs, while all other models were trained for 50. Validation likelihood was checked 4 times per epoch, and learning rates were decayed by a factor of 4 if the validation performance did not improve after 8 consecutive checks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "Hyperparameter search was performed manually, using the best validation perplexity achieved in a run. Bounds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "1. Learning rate \u2208 {0.0001, 0.001, 0.01, 0.1} 2. Dropout \u03bb \u2208 {0, 0.25, 0.5, 0.75}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Hyperparameters",
"sec_num": null
},
{
"text": "4. Batch size \u2208 {16, 32, 64, 128}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hidden dimension h \u2208 {128, 256, 512}",
"sec_num": "3."
},
{
"text": "Experiments were run on RTX 2080 GPUs. On PTB the FF model takes 3s per epoch, the LSTM 23s, and the VLHMM 2 15 433s. The inference for VLHMM was not heavily optimized, and uses a kernel produced by TVM (Chen et al., 2018) for computing gradients through marginal inference.",
"cite_spans": [
{
"start": 203,
"end": 222,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hidden dimension h \u2208 {128, 256, 512}",
"sec_num": "3."
},
{
"text": "Let E, D \u2208 R v\u00d7h be an embedding matrix and a matrix of the same size, where v is the size of the vocab and h the hidden dimension. We use the following residual network as our MLP: with i \u2208 {out, in, emit}, W i1 , W i2 \u2208 R h\u00d7h . The state embeddings are then obtained by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 HMM Parameterization",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f i (E) = g i (ReLU(EW i1 )) g i (D) = LayerNorm(ReLU(DW i2 ) + D)",
"eq_num": "(8)"
}
],
"section": "A.3 HMM Parameterization",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H out = f out (E z ) H in = f in (E z ) H emit = f emit (E z )",
"eq_num": "(9)"
}
],
"section": "A.3 HMM Parameterization",
"sec_num": null
},
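A sketch of the residual network of Eq. (8), applied to the state embeddings to produce H_out, H_in, H_emit as in Eq. (9). The bias-free linear layers and toy sizes are assumptions based only on the stated shapes of W_{i1} and W_{i2}.

```python
import torch
import torch.nn as nn

class ResidualHead(nn.Module):
    """f_i(E) = g_i(ReLU(E W_i1)), with g_i(D) = LayerNorm(ReLU(D W_i2) + D)  (Eq. 8)."""
    def __init__(self, h):
        super().__init__()
        self.W1 = nn.Linear(h, h, bias=False)
        self.W2 = nn.Linear(h, h, bias=False)
        self.norm = nn.LayerNorm(h)

    def forward(self, E):
        D = torch.relu(self.W1(E))
        return self.norm(torch.relu(self.W2(D)) + D)

h, num_states = 256, 1024                       # toy sizes
E_z = torch.randn(num_states, h)                # state embeddings
f_out, f_in, f_emit = ResidualHead(h), ResidualHead(h), ResidualHead(h)
H_out, H_in, H_emit = f_out(E_z), f_in(E_z), f_emit(E_z)    # Eq. (9)
print(H_out.shape, H_in.shape, H_emit.shape)
```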
{
"text": "In order to reduce the number of parameters further, we experiment with factored state embeddings. We factor the state embeddings into a composition of smaller steate embeddings (E z \u2208 R |Z|\u00d7h/2 ) as well as block embeddings (E m \u2208 R |Z|\u00d7h/2 ), which are shared across all states within the same emission block, i.e. all z \u2208 Z x share a block embedding. To compose these embeddings, we introduce new residual networks f j , j \u2208 {o, i, e} similar to the above, yielding",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 HMM Parameterization",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H out = f out (f o ([E m , E z ])) H in = f in (f i ([E m , E z ])) H emit = f emit (f e ([E m , E z ]))",
"eq_num": "(10)"
}
],
"section": "A.3 HMM Parameterization",
"sec_num": null
},
{
"text": "We ablate the factored state embeddings in Sec. A.5. Table 3 shows the results from emission constraint ablations. With a VL-HMM that has |Z| = 2 14 states, the model is insensitive to the number of blocks M explorable given computational constraints. However, with fewer states |Z| = 2 10 we are able to explore a lower number of blocks. With M = 4 blocks, the block-sparse HMM matches an unconstrained HMM with the same number of states. When M = 8, the block-sparse model underperforms, implying there may be room for improvement with the larger HMMs that use M > 8 blocks.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "A.3 HMM Parameterization",
"sec_num": null
},
{
"text": "We additionally compare the blocks induced by Brown clustering with a uniform constraint that samples subsets of states of size n independently and uniformly from Z. This does not admit a partitioning, which makes it difficult to apply state dropout. We therefore zero out half of the columns of the transition matrix randomly before normalization. In the bottom of Table 3 , we find that models with uniform constraints are consistently outperformed by models with Brown cluster constraints as measured by validation perplexity. The models with uniform constraints also have poor validation performance despite better training performance, a symptom of overfitting.",
"cite_spans": [],
"ref_spans": [
{
"start": 366,
"end": 373,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "A.4 Emission Constraint Ablation",
"sec_num": null
},
{
"text": "These ablations demonstrate that the constraints based on Brown clusters used in this work may not be optimal, motivating future work that learns sparsity structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Emission Constraint Ablation",
"sec_num": null
},
{
"text": "We examine the effect of factoring state representations into block embeddings and independent state embeddings. The results of the factored state ablation are in Figure 4 . We find that the performance of independent state embeddings with is similar to a model with factored embeddings, but performs slightly worse in perplexity.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "A.5 Factored State Representation Ablation",
"sec_num": null
},
{
"text": "In Table 4 we see that although the factored state embeddings reduce the total number of parameters, the computation time and perplexity both get worse.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "A.5 Factored State Representation Ablation",
"sec_num": null
},
{
"text": "We reproduce the technique ablation table in Table 4 for reference. As we remove neural components, the number of parameters increases but the time of the forward pass decreases. This is because generating parameters from a neural network takes strictly more time than having those parameters available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.6 Computational Considerations",
"sec_num": null
},
{
"text": "When block embeddings are removed and the full state representations are directly parameterized, the model is faster due to not needing to recompute the full state representations. This contrast is even more pronounced when removing neural components altogether and using a scalar parameterization, with an almost 3x speedup. This is because the distributional parameters do not need to be regenerated by a neural network if they are parameterized directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.6 Computational Considerations",
"sec_num": null
},
{
"text": "Other work has used neural parameterization for structured models, such as dependency models(Han et al., 2017), hidden semi-Markov models(Wiseman et al., 2018), and context free grammars(Kim et al., 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Quasi-RNNs also have a (parallel) logarithmic dependency on T by applying the same prefix-sum trick, but do not model uncertainty over latent dynamics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This can be sped up on a parallel machine to O(log(T )k 2 ) via a binary reduction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As an optimization, one could only compute the nonzero emission matrix blocks saving space and time. In practice we compute the full matrix as in the equation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The 256 dim FF, LSTM, and VL-HMM in particular have comparable computational complexity: O(256 2 T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Yuntian Deng, Daniel Fried, Yao Fu, Yoon Kim, Victor Sanh, Sam Wiseman, and Jiawei Zhou for insightful conversations and suggestions. This work is supported by CAREER 2037519 and NSF III 1901030.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. J. Mach. Learn. Res., 3:1137-1155.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Quasi-recurrent neural networks",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural net- works. CoRR, abs/1611.01576.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Comput. Linguist",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mer- cer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural lan- guage. Comput. Linguist., 18(4):467-479.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bridging hmms and rnns through architectural transformations",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Buys, Yonatan Bisk, and Yejin Choi. 2018. Bridg- ing hmms and rnns through architectural transforma- tions.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "TVM: end-to-end optimization stack for deep learning",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Moreau",
"suffix": ""
},
{
"first": "Ziheng",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Haichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Eddie",
"middle": [
"Q"
],
"last": "Yan",
"suffix": ""
},
{
"first": "Leyuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuwei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Ceze",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen, Thierry Moreau, Ziheng Jiang, Haichen Shen, Eddie Q. Yan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. TVM: end-to-end optimization stack for deep learning. CoRR, abs/1802.04799.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning higher-order sequential structure with cloned hmms",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Dedieu",
"suffix": ""
},
{
"first": "Nishad",
"middle": [],
"last": "Gothoskar",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Swingle",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lehrach",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "L\u00e1zaro-Gredilla",
"suffix": ""
},
{
"first": "Dileep",
"middle": [],
"last": "George",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Dedieu, Nishad Gothoskar, Scott Swingle, Wolfgang Lehrach, Miguel L\u00e1zaro-Gredilla, and Dileep George. 2019. Learning higher-order sequen- tial structure with cloned hmms.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dependency grammar induction with neural lexicalization and big training data",
"authors": [
{
"first": "Wenjuan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1683--1688",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Wenjuan Han, Yong Jiang, and Kewei Tu. 2017. De- pendency grammar induction with neural lexicaliza- tion and big training data. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 1683-1688, Copen- hagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Scalable modified Kneser-Ney language model estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "690--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 690-696, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Modeling Dependencies in Natural Languages with Latent Variables",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang. 2011. Modeling Dependencies in Natural Languages with Latent Variables. Ph.D. the- sis, University of Maryland.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discrete latent variable representations for low-resource text classification",
"authors": [
{
"first": "Shuning",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4831--4842",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.437"
]
},
"num": null,
"urls": [],
"raw_text": "Shuning Jin, Sam Wiseman, Karl Stratos, and Karen Livescu. 2020. Discrete latent variable representa- tions for low-resource text classification. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4831- 4842, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Compound probabilistic context-free grammars for grammar induction. CoRR, abs",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 1906,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Chris Dyer, and Alexander M. Rush. 2019. Compound probabilistic context-free grammars for grammar induction. CoRR, abs/1906.10225.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Increasing the interpretability of recurrent neural networks using hidden markov models",
"authors": [
{
"first": "Viktoriya",
"middle": [],
"last": "Krakovna",
"suffix": ""
},
{
"first": "Finale",
"middle": [],
"last": "Doshi-Velez",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viktoriya Krakovna and Finale Doshi-Velez. 2016. In- creasing the interpretability of recurrent neural net- works using hidden markov models.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ergodic hidden markov models and polygrams for language modeling",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "Niemann",
"suffix": ""
},
{
"first": "Ernst",
"middle": [
"G\u00fcnter"
],
"last": "Schukat-Talamazzini",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "357--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Kuhn, Heinrich Niemann, and Ernst G\u00fcnter Schukat-Talamazzini. 1994. Ergodic hidden markov models and polygrams for language modeling. pages 357-360.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Parallel prefix computation",
"authors": [
{
"first": "Richard",
"middle": [
"E"
],
"last": "Ladner",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Fischer",
"suffix": ""
}
],
"year": 1980,
"venue": "J. ACM",
"volume": "27",
"issue": "4",
"pages": "831--838",
"other_ids": {
"DOI": [
"10.1145/322217.322232"
]
},
"num": null,
"urls": [],
"raw_text": "Richard E. Ladner and Michael J. Fischer. 1980. Paral- lel prefix computation. J. ACM, 27(4):831-838.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semi-supervised learning for natural language",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2005,
"venue": "MASTER'S THESIS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang. 2005. Semi-supervised learning for natu- ral language. In MASTER'S THESIS, MIT.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fixing weight decay regularization in adam",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. CoRR, abs/1711.05101.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Tagging English text with a probabilistic model",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "155--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155-171.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Regularizing and optimizing LSTM language models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing LSTM language models. CoRR, abs/1708.02182.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture mod- els. CoRR, abs/1609.07843.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Context dependent recurrent neural network language model",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "234--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mikolov and G. Zweig. 2012. Context depen- dent recurrent neural network language model. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 234-239.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Empirical evaluation and combination of advanced language modeling techniques",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Deoras",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Kombrink",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "605--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Luk\u00e1s Burget, and Jan Cernock\u00fd. 2011. Empirical evaluation and combination of advanced language modeling techniques. pages 605-608.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "433--440",
"other_ids": {
"DOI": [
"10.3115/1220175.1220230"
]
},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and inter- pretable tree annotation. page 433-440.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Unsupervised neural hidden markov models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke M. Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, and Kevin Knight. 2016. Unsupervised neu- ral hidden markov models. CoRR, abs/1609.09007.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Gumbel-max trick and weighted reservoir sampling",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Vieira",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Vieira. 2014. Gumbel-max trick and weighted reservoir sampling.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "HMM Training (a single batch) Given: block structure and model parameters Sample block-wise dropout mask b Compute A, O ignoring b z = 0 for all examples x in batch do Compute log p(x; A, O) Compute grad wrt parameters of log p(x) Update model parameters E z , E x and MLP",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Perplexity on PTB by state size |Z| (\u03bb = 0.5 and M = 128).",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Perplexity on PTB by number of blocks M (\u03bb = 0.5 and |Z| = 2 14",
"type_str": "figure"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>: Ablations on PTB (\u03bb = 0.5 and M = 128)</td></tr><tr><td>with a smaller model |Z| = 2 14 . Time is ms per</td></tr><tr><td>eval batch (Run on RTX 2080). Ablations were per-</td></tr><tr><td>formed independently, removing a single component</td></tr><tr><td>per row. Removing the neural parameterization results</td></tr><tr><td>in a scalar parameterization.</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"text": "Emission constraint ablations on PENN TREE-BANK. |Z| is the size of the hidden space, k is the size number of hidden states in each block, and M is the number of blocks.",
"num": null,
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table/>",
"text": "Ablations on PTB (\u03bb = 0.5 and M = 128). Param is the number of parameters, while train and val give the corresponding perplexities. Time is ms per eval batch (Run on RTX 2080).",
"num": null,
"type_str": "table"
}
}
}
}