{
"paper_id": "D16-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:36:05.467056Z"
},
"title": "Latent Tree Language Model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Brychc\u00edn",
"suffix": "",
"affiliation": {
"laboratory": "NTIS -New Technologies for the Information Society",
"institution": "University of West Bohemia",
"location": {
"addrLine": "Technick\u00e1 8",
"postCode": "306 14",
"settlement": "Plze\u0148",
"country": "Czech Republic"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we introduce Latent Tree Language Model (LTLM), a novel approach to language modeling that encodes syntax and semantics of a given sentence as a tree of word roles. The learning phase iteratively updates the trees by moving nodes according to Gibbs sampling. We introduce two algorithms to infer a tree for a given sentence. The first one is based on Gibbs sampling. It is fast, but does not guarantee to find the most probable tree. The second one is based on dynamic programming. It is slower, but guarantees to find the most probable tree. We provide comparison of both algorithms. We combine LTLM with 4-gram Modified Kneser-Ney language model via linear interpolation. Our experiments with English and Czech corpora show significant perplexity reductions (up to 46% for English and 49% for Czech) compared with standalone 4-gram Modified Kneser-Ney language model.",
"pdf_parse": {
"paper_id": "D16-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we introduce Latent Tree Language Model (LTLM), a novel approach to language modeling that encodes syntax and semantics of a given sentence as a tree of word roles. The learning phase iteratively updates the trees by moving nodes according to Gibbs sampling. We introduce two algorithms to infer a tree for a given sentence. The first one is based on Gibbs sampling. It is fast, but does not guarantee to find the most probable tree. The second one is based on dynamic programming. It is slower, but guarantees to find the most probable tree. We provide comparison of both algorithms. We combine LTLM with 4-gram Modified Kneser-Ney language model via linear interpolation. Our experiments with English and Czech corpora show significant perplexity reductions (up to 46% for English and 49% for Czech) compared with standalone 4-gram Modified Kneser-Ney language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language modeling is one of the core disciplines in natural language processing (NLP). Automatic speech recognition, machine translation, optical character recognition, and other tasks strongly depend on the language model (LM). An improvement in language modeling often leads to better performance of the whole task. The goal of language modeling is to determine the joint probability of a sentence. Currently, the dominant approach is n-gram language modeling, which decomposes the joint probability into the product of conditional probabilities by using the chain rule. In traditional n-gram LMs the words are represented as distinct symbols. This leads to an enormous number of word combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
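To make the chain-rule factorization concrete, here is a minimal Python sketch. The function name `sentence_probability` and the toy probability table are our own illustrative assumptions, not part of the paper; a real n-gram LM would add smoothing and back-off.

```python
# Minimal sketch of the n-gram chain rule: P(w_1..w_N) = prod_i P(w_i | previous n-1 words).
# `conditional_prob` is a hypothetical lookup table; real LMs smooth and back off.
def sentence_probability(words, conditional_prob, order=4):
    prob = 1.0
    for i, w in enumerate(words):
        history = tuple(words[max(0, i - order + 1):i])
        prob *= conditional_prob.get(history + (w,), 1e-7)  # tiny floor for unseen n-grams
    return prob

toy = {("the",): 0.2, ("the", "cat"): 0.1}
print(sentence_probability(["the", "cat"], toy))  # 0.2 * 0.1
```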
{
"text": "In the last years many researchers have tried to capture words contextual meaning and incorporate it into the LMs. Word sequences that have never been seen before receive high probability when they are made of words that are semantically similar to words forming sentences seen in training data. This ability can increase the LM performance because it reduces the data sparsity problem. In NLP a very common paradigm for word meaning representation is the use of the Distributional hypothesis. It suggests that two words are expected to be semantically similar if they occur in similar contexts (they are similarly distributed in the text) (Harris, 1954) . Models based on this assumption are denoted as distributional semantic models (DSMs).",
"cite_spans": [
{
"start": 640,
"end": 654,
"text": "(Harris, 1954)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, semantically motivated LMs have begun to surpass the ordinary n-gram LMs. The most commonly used architectures are neural network LMs (Bengio et al., 2003; Mikolov et al., 2010; Mikolov et al., 2011) and class-based LMs. Classbased LMs are more related to this work thus we investigate them deeper. Brown et al. (1992) introduced class-based LMs of English. Their unsupervised algorithm searches classes consisting of words that are most probable in the given context (one word window in both directions). However, the computational complexity of this algorithm is very high. This approach was later extended by (Martin et al., 1998; Whit-taker and Woodland, 2003) to improve the complexity and to work with wider context. Deschacht et al. (2012) used the same idea and introduced Latent Words Language Model (LWLM), where word classes are latent variables in a graphical model. They apply Gibbs sampling or the expectation maximization algorithm to discover the word classes that are most probable in the context of surrounding word classes. A similar approach was presented in (Brychc\u00edn and Konop\u00edk, 2014; Brychc\u00edn and Konop\u00edk, 2015) , where the word clusters derived from various semantic spaces were used to improve LMs.",
"cite_spans": [
{
"start": 144,
"end": 165,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF0"
},
{
"start": 166,
"end": 187,
"text": "Mikolov et al., 2010;",
"ref_id": "BIBREF20"
},
{
"start": 188,
"end": 209,
"text": "Mikolov et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 309,
"end": 328,
"text": "Brown et al. (1992)",
"ref_id": "BIBREF3"
},
{
"start": 622,
"end": 643,
"text": "(Martin et al., 1998;",
"ref_id": "BIBREF18"
},
{
"start": 644,
"end": 674,
"text": "Whit-taker and Woodland, 2003)",
"ref_id": null
},
{
"start": 733,
"end": 756,
"text": "Deschacht et al. (2012)",
"ref_id": "BIBREF9"
},
{
"start": 1089,
"end": 1117,
"text": "(Brychc\u00edn and Konop\u00edk, 2014;",
"ref_id": "BIBREF4"
},
{
"start": 1118,
"end": 1145,
"text": "Brychc\u00edn and Konop\u00edk, 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In above mentioned approaches, the meaning of a word is inferred from the surrounding words independently of their relation. An alternative approach is to derive contexts based on the syntactic relations the word participates in. Such syntactic contexts are automatically produced by dependency parse-trees. Resulting word representations are usually less topical and exhibit more functional similarity (they are more syntactically oriented) as shown in (Pad\u00f3 and Lapata, 2007; Levy and Goldberg, 2014) .",
"cite_spans": [
{
"start": 454,
"end": 477,
"text": "(Pad\u00f3 and Lapata, 2007;",
"ref_id": "BIBREF23"
},
{
"start": 478,
"end": 502,
"text": "Levy and Goldberg, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dependency-based methods for syntactic parsing have become increasingly popular in NLP in the last years (K\u00fcbler et al., 2009) . Popel and Mare\u010dek (2010) showed that these methods are promising direction of improving LMs. Recently, unsupervised algorithms for dependency parsing appeared in (Headden III et al., 2009; Cohen et al., 2009; Spitkovsky et al., 2010; Spitkovsky et al., 2011; Mare\u010dek and Straka, 2013) offering new possibilities even for poorly-resourced languages.",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "(K\u00fcbler et al., 2009)",
"ref_id": "BIBREF13"
},
{
"start": 129,
"end": 153,
"text": "Popel and Mare\u010dek (2010)",
"ref_id": "BIBREF24"
},
{
"start": 291,
"end": 317,
"text": "(Headden III et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 318,
"end": 337,
"text": "Cohen et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 338,
"end": 362,
"text": "Spitkovsky et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 363,
"end": 387,
"text": "Spitkovsky et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 388,
"end": 413,
"text": "Mare\u010dek and Straka, 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we introduce a new DSM that uses tree-based context to create word roles. The word role contains the words that are similarly distributed over similar tree-based contexts. The word role encodes the semantic and syntactic properties of a word. We do not rely on parse trees as a prior knowledge, but we jointly learn the tree structures and word roles. Our model is a soft clustering, i.e. one word may be present in several roles. Thus it is theoretically able to capture the word polysemy. The learned structure is used as a LM, where each word role is conditioned on its parent role. We present the unsupervised algorithm that discovers the tree structures only from the distribution of words in a training corpus (i.e. no labeled data or external sources of in-formation are needed). In our work we were inspired by class-based LMs (Deschacht et al., 2012) , unsupervised dependency parsing (Mare\u010dek and Straka, 2013) , and tree-based DSMs (Levy and Goldberg, 2014) .",
"cite_spans": [
{
"start": 848,
"end": 872,
"text": "(Deschacht et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 907,
"end": 933,
"text": "(Mare\u010dek and Straka, 2013)",
"ref_id": "BIBREF17"
},
{
"start": 956,
"end": 981,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. We start with the definition of our model (Section 2). The process of learning the hidden sentence structures is explained in Section 3. We introduce two algorithms for searching the most probable tree for a given sentence (Section 4). The experimental results on English and Czech corpora are presented in Section 6. We conclude in Section 7 and offer some directions for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we describe Latent Tree Language Model (LTLM). LTLM is a generative statistical model that discovers the tree structures hidden in the text corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "Let L be a word vocabulary with total of |L| distinct words. Assume we have a training corpus w divided into S sentences. The goal of LTLM or other LMs is to estimate the probability of a text P (w). Let N s denote the number of words in the s-th sentence. The s-th sentence is a sequence of words w s = {w s,i } Ns i=0 , where w s,i \u2208 L is a word at position i in this sentence and w s,0 = < s > is an artificial symbol that is added at the beginning of each sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "Each sentence s is associated with the dependency graph G s . We define the dependency graph as a labeled directed graph, where nodes correspond to the words in the sentence and there is a label for each node that we call role. Formally, it is a triple",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "G s = (V s , E s , r s ) consisting of: \u2022 The set of nodes V s = {0, 1, ..., N s }. Each token w s,i is associated with node i \u2208 V s . \u2022 The set of edges E s \u2286 V s \u00d7 V s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "\u2022 The sequence of roles",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "r s = {r s,i } Ns i=0 , where 1 \u2264 r s,i \u2264 K for i \u2208 V s . K is the number of roles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "The artificial word w s,0 = < s > at the beginning of the sentence has always role 1 (r s,0 = 1). Analogously to w, the sequence of all r s is denoted as r and sequence of all G s as G. Edge e \u2208 E s is an ordered pair of nodes (i, j). We say that i is the head or the parent and j is the dependent or the child. We use the notation i \u2192 j for such edge. The directed path from node i to node j is denoted as i * \u2192 j. We place a few constraints on the graph G s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "\u2022 The graph G s is a tree. It means it is the acyclic graph (if i \u2192 j then not j * \u2192 i), where each node has one parent (if i \u2192 j then not k \u2192 j for every k = i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "\u2022 The graph G s is projective (there are no cross edges). For each edge (i, j) and for each k between i and j (i.e. i < k < j or i > k > j) there must exist the directed path i * \u2192 k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "\u2022 The graph G s is always rooted in the node 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
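The constraints above can be checked on a simple array-based encoding of G_s. The sketch below is our own illustration (the names `parents`, `roles`, and `is_projective_tree` are not from the paper): parents[i] stores the head of node i, roles[i] stores r_{s,i}, and node 0 is the artificial root <s> with fixed role 1 and no parent.

```python
# Illustrative representation of a sentence tree G_s: parents[i] = head of node i, roles[i] = its role.
# Node 0 is the artificial root <s> with no parent (parents[0] = -1) and fixed role 1.

def is_projective_tree(parents):
    n = len(parents)
    # rooted in node 0; every other node has exactly one parent inside the sentence
    if parents[0] != -1 or any(not (0 <= parents[i] < n) for i in range(1, n)):
        return False
    # acyclicity: walking up from any node must reach the root without repeating a node
    for i in range(1, n):
        seen, j = set(), i
        while j != 0:
            if j in seen:
                return False
            seen.add(j)
            j = parents[j]
    # projectivity: every node k strictly between a node and its head must be
    # dominated by that head (there is a directed path head -> ... -> k)
    def dominates(head, k):
        while k != -1:
            if k == head:
                return True
            k = parents[k]
        return False
    for i in range(1, n):
        h = parents[i]
        lo, hi = sorted((h, i))
        if any(not dominates(h, k) for k in range(lo + 1, hi)):
            return False
    return True

parents = [-1, 0, 1, 1]   # node 0 = <s>; node 1 heads nodes 2 and 3
roles   = [1, 4, 2, 7]    # roles in 1..K, role of node 0 fixed to 1
print(is_projective_tree(parents))  # True
```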
{
"text": "We denote these graphs as the projective dependency trees. Example of such a tree is on Figure 1 . For the tree G s we define a function",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h s (j) = i, when (i, j) \u2208 E s",
"eq_num": "(1)"
}
],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "that returns the parent for each node except the root. We use graph G s as a representation of the Bayesian network with random variables E s and r s . The roles r s,i represent the node labels and the edges express the dependences between the roles. The conditional probability of the role at position i given its parent role is denoted as P (r s,i |r s,hs(i) ). The conditional probability of the word at position i in the sentence given its role r s,i is denoted as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "P (w s,i |r s,i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "We model the distribution over words in the sentence s as the mixture",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "P (w s ) = P (w s |r s,0 ) = Ns i=1 K k=1 P (w s,i |r s,i = k)P (r s,i = k|r s,hs(i) ). (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "The root role is kept fixed for each sentence (r s,0 = 1) so P (w s ) = P (w s |r s,0 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "We look at the roles as mixtures over child roles and simultaneously as mixtures over words. We can represent dependency between roles with a set of K multinomial distributions \u03b8 over K roles, such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "P (r s,i |r s,hs(i) = k) = \u03b8 (k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "r s,i . Simultaneously, dependency of words on their roles can be represented as a set of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "K multinomial distributions \u03c6 over |L| words, such that P (w s,i |r s,i = k) = \u03c6 (k) w s,i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
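As one concrete reading of this parameterization and of Equation 2, the sketch below stores theta as a K x K matrix (row = parent role) and phi as a K x |L| matrix (row = role), and evaluates the per-position mixture sum_k P(w | k) P(k | parent role). The shapes, names, and toy dimensions are our own assumptions, and roles are 0-indexed here instead of the 1..K convention used in the text.

```python
import numpy as np

K, V = 3, 5                                  # toy number of roles and vocabulary size
rng = np.random.default_rng(0)
theta = rng.dirichlet(np.ones(K), size=K)    # theta[p, k] = P(child role k | parent role p)
phi = rng.dirichlet(np.ones(V), size=K)      # phi[k, w]   = P(word w | role k)

def word_given_parent_role(word_id, parent_role):
    # one factor of Eq. 2: sum_k P(w | r = k) * P(r = k | parent role)
    return float(np.dot(phi[:, word_id], theta[parent_role]))

def sentence_probability(word_ids, parents, roles):
    # product over positions 1..N_s, conditioning each position on its parent's role in the given tree
    p = 1.0
    for i in range(1, len(word_ids)):
        p *= word_given_parent_role(word_ids[i], roles[parents[i]])
    return p

print(sentence_probability(word_ids=[0, 2, 4, 1], parents=[-1, 0, 1, 1], roles=[0, 1, 2, 0]))
```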
{
"text": "To make predictions about new sentences, we need to assume a prior distribution on the parameters \u03b8 (k) and \u03c6 (k) .",
"cite_spans": [
{
"start": 110,
"end": 113,
"text": "(k)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "We place a Dirichlet prior D with the vector of K hyper-parameters \u03b1 on a multinomial distribution \u03b8 (k) \u223c D(\u03b1) and with the vector of |L| hyperparameters \u03b2 on a multinomial distribution \u03c6 (k) \u223c D(\u03b2). In general, D is not restricted to be Dirichlet distribution. It could be any distribution over discrete children, such as logistic normal. In this paper, we focus only on Dirichlet as a conjugate prior to the multinomial distribution and derive the learning algorithm under this assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "The choice of the child role depends only on its parent role, i.e. child roles with the same parent are mutually independent. This property is especially important for the learning algorithm (Section 3) and also for searching the most probable trees (Section 4). We do not place any assumption on the length of the sentence N s or on how many children the parent node is expected to have.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Tree Language Model",
"sec_num": "2"
},
{
"text": "In this section we present the learning algorithm for LTLM. The goal is to estimate \u03b8 and \u03c6 in a way that maximizes the predictive ability of the model (generates the corpus with maximal joint probability P (w)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "Let \u03c7 k (i,j) be an operation that changes the tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G s to G s \u03c7 k (i,j) : G s \u2192 G s ,",
"eq_num": "(3)"
}
],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "such that the newly created tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "G (V s , E s , r s )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "consists of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "\u2022 V s = V s . \u2022 E s = (E s \\ {(h s (i), i)}) \u222a {(j, i)}. \u2022 r s,a = r s,a for a = i k for a = i , where 0 \u2264 a \u2264 N s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "It means that we change the role of the selected node i so that r s,i = k and simultaneously we change the parent of this node to be j. We call this operation a partial change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "The newly created graph G must satisfy all conditions presented in Section 2, i.e. it is a projective dependency tree rooted in the node 0. Thus not all partial changes \u03c7 k (i,j) are possible to perform on graph G s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "Clearly, for the sentence s there are at most N_s(1+N_s)/2 parent changes 1 . To estimate the parameters of LTLM we apply Gibbs sampling and gradually sample \u03c7^k_{(i,j)} for the trees G_s. To do so we need to determine the posterior predictive distribution 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G s \u223c P (\u03c7 k (i,j) (G s )|w, G),",
"eq_num": "(4)"
}
],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "from which we will sample partial changes to update the trees. In the equation, G denote the sequence of all trees for given sentences w and G s is a result of one sampling. In the following text we derive this equation under assumptions from Section 2. The posterior predictive distribution of Dirichlet multinomial has the form of additive smoothing that is well known in the context of language modeling. The hyper-parameters of Dirichlet prior determine how much is the predictive distribution smoothed. Thus the predictive distribution for the word-in-role distribution can be expressed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w s,i |r s,i , w \\s,i , r \\s,i ) = n (w s,i |r s,i ) \\s,i + \u03b2 n (\u2022|r s,i ) \\s,i + |L| \u03b2 ,",
"eq_num": "(5)"
}
],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "where n^{(w_{s,i}|r_{s,i})}_{\\s,i} is the number of times the role r_{s,i} has been assigned to the word w_{s,i}, excluding the position i in the s-th sentence. The symbol \u2022 represents any word in the vocabulary, so that n^{(\u2022|r_{s,i})}_{\\s,i} = \u2211_{l \u2208 L} n^{(l|r_{s,i})}_{\\s,i}. We use the symmetric Dirichlet distribution for the word-in-role probabilities as it could be difficult to estimate the vector of hyper-parameters \u03b2 for a large word vocabulary. In the above mentioned equation, \u03b2 is a scalar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "1 The most parent changes are possible for the special case of the tree where each node i has parent i \u2212 1. Thus for each node i we can change its parent to any node j < i and keep the projectivity of the tree. That is N_s(1+N_s)/2 parent changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "The predictive distribution for the role-by-role distribution is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P r s,i |r s,hs(i) , r \\s,i = n (r s,i |r s,hs(i) ) \\s,i + \u03b1 r s,i n (\u2022|r s,hs(i) ) \\s,i + K k=1 \u03b1 k .",
"eq_num": "(6)"
}
],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "Analogously to the previous equation, n^{(r_{s,i}|r_{s,h_s(i)})}_{\\s,i} denotes the number of times the role r_{s,i} has the parent role r_{s,h_s(i)}, excluding the position i in the s-th sentence. The symbol \u2022 represents any possible role, making the probability distribution sum up to 1. We assume an asymmetric Dirichlet distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
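Equations 5 and 6 are leave-one-out count ratios with additive (Dirichlet) smoothing. The sketch below assumes two count tables of our own naming, n_wr (word given role) and n_rr (child role given parent role); subtracting the current assignment at position (s, i) from the counts before calling these functions corresponds to the "excluding" part of the equations.

```python
import numpy as np

# Assumed count tables (ours, for illustration):
#   n_wr[k, w] = how often word w has been generated by role k
#   n_rr[p, k] = how often role k has appeared under parent role p

def p_word_given_role(n_wr, k, w, beta):
    # Eq. 5: symmetric Dirichlet prior with scalar beta over the |L| words
    V = n_wr.shape[1]
    return (n_wr[k, w] + beta) / (n_wr[k].sum() + V * beta)

def p_role_given_parent(n_rr, parent, k, alpha):
    # Eq. 6: asymmetric Dirichlet prior, alpha is a vector of length K
    return (n_rr[parent, k] + alpha[k]) / (n_rr[parent].sum() + alpha.sum())

K, V = 4, 10
n_wr = np.zeros((K, V)); n_rr = np.zeros((K, K))
alpha = np.full(K, 0.5); beta = 0.1
print(p_word_given_role(n_wr, k=1, w=3, beta=beta))
print(p_role_given_parent(n_rr, parent=0, k=2, alpha=alpha))
```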
{
"text": "We can use predictive distributions of above mentioned Dirichlet multinomials to express the joint probability that the role at position i is k (r s,i = k) with parent at position j conditioned on current values of all variables, except those in position i in the sentence s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (r s,i = k, j|w, r \\s,i ) \u221d P (w s,i |r s,i = k, w \\s,i , r \\s,i ) \u00d7 P (r s,i = k|r s,j , r \\s,i ) \u00d7 a:hs(a)=i P (r s,a |r s,i = k, r \\s,i ).",
"eq_num": "(7)"
}
],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "The choice of the node i role affects the word that is produced by this role and also all the child roles of the node i. Simultaneously, the role of the node i depends on its parent j role. Formula 7 is derived from the joint probability of a sentence s and a tree G s , where all probabilities which do not depend on the choice of the role at position i are removed and equality is replaced by proportionality (\u221d). We express the final predictive distribution for sampling partial changes \u03c7 k (i,j) as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (\u03c7 k (i,j) (G s )|w, G) \u221d P (r s,i = k, j|w, r \\s,i ) P (r s,i , h s (i)|w, r \\s,i )",
"eq_num": "(8)"
}
],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "that is essentially the fraction between the joint probability of r s,i and its parent after the partial change and before the partial change (conditioned on all other variables). This fraction can be interpreted as the necessity to perform this partial change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
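To make the sampling step concrete, the following sketch scores one candidate partial change following the structure of Equation 7 (a word factor, a parent factor, and one factor per current child of node i) and then samples among candidates in proportion to those scores. Enumerating which (k, j) pairs keep the tree projective is omitted, and all names are our own illustration, not the paper's implementation.

```python
import numpy as np

def score_partial_change(k, j, i, words, parents, roles,
                         p_word_given_role, p_role_given_parent):
    """Unnormalized weight of assigning role k to node i with new parent j (structure of Eq. 7)."""
    score = p_word_given_role(k, words[i])             # P(w_{s,i} | r_{s,i} = k)
    score *= p_role_given_parent(roles[j], k)          # P(r_{s,i} = k | r_{s,j})
    for a in range(1, len(words)):                     # nodes currently headed by i
        if parents[a] == i:
            score *= p_role_given_parent(k, roles[a])  # P(r_{s,a} | r_{s,i} = k)
    return score

def sample_partial_change(candidates, scores, rng):
    """Pick one candidate (k, j) with probability proportional to its score."""
    probs = np.asarray(scores, dtype=float)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# toy usage with uniform distributions, just to exercise the functions
rng = np.random.default_rng(1)
uniform_w = lambda k, w: 0.1
uniform_r = lambda p, k: 0.25
words, parents, roles = [0, 3, 5, 2], [-1, 0, 1, 1], [0, 1, 2, 3]
cands = [(k, j) for k in range(4) for j in (0, 1)]
weights = [score_partial_change(k, j, i=2, words=words, parents=parents, roles=roles,
                                p_word_given_role=uniform_w, p_role_given_parent=uniform_r)
           for k, j in cands]
print(sample_partial_change(cands, weights, rng))
```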
{
"text": "We investigate two strategies of sampling partial changes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "\u2022 Per sentence: We sample a single partial change according to Equation 8 for each sentence in the training corpus. It means during one pass through the corpus (one iteration) we perform S partial changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "\u2022 Per position: We sample a partial change for each position in each sentence. We perform in total N = S s=1 N s partial changes during one pass. Note that the denominator in Equation 8 is constant for this strategy and can be removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "We compare both training strategies in Section 6. After enough training iterations, we can estimate the conditional probabilities \u03c6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(k) l and \u03b8 (p) k from actual samples as \u03c6 (k) l \u2248 n (w s,i =l|r s,i =k) + \u03b2 n (\u2022|r s,i =k) + |L| \u03b2 (9) \u03b8 (p) k \u2248 n (r s,i =k|r s,hs(i) =p) + \u03b1 k n (\u2022|r s,hs(i) =p) + K m=1 \u03b1 m .",
"eq_num": "(10)"
}
],
"section": "Parameter Estimation",
"sec_num": "3"
},
{
"text": "These equations are similar to equations 5 and 6, but here the counts n do not exclude any position in a corpus. Note that in the Gibbs sampling equation, we assume that the Dirichlet parameters \u03b1 and \u03b2 are given. We use a fixed point iteration technique described in (Minka, 2003) to estimate them.",
"cite_spans": [
{
"start": 268,
"end": 281,
"text": "(Minka, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Estimation",
"sec_num": "3"
},
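After sampling, Equations 9 and 10 are plain smoothed relative frequencies over the full count tables (no position excluded). A minimal numpy sketch under the same assumed count-table layout as above:

```python
import numpy as np

def estimate_phi(n_wr, beta):
    # Eq. 9: phi[k, w] ~ (n[k, w] + beta) / (n[k, .] + |L| * beta)
    V = n_wr.shape[1]
    return (n_wr + beta) / (n_wr.sum(axis=1, keepdims=True) + V * beta)

def estimate_theta(n_rr, alpha):
    # Eq. 10: theta[p, k] ~ (n[p, k] + alpha[k]) / (n[p, .] + sum(alpha))
    return (n_rr + alpha) / (n_rr.sum(axis=1, keepdims=True) + alpha.sum())
```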
{
"text": "In this section we present two approaches for searching the most probable tree for a given sentence assuming we have already estimated the parameters \u03b8 and \u03c6. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference",
"sec_num": "4"
},
{
"text": "We use the same sampling technique as for estimating parameters (Equation 8), i.e. we iteratively sample the partial changes \u03c7 k (i,j) . However, we use equations 9 and 10 for predictive distributions of Dirichlet multinomials instead of 5 and 6. In fact, these equations correspond to the predictive distributions over the newly added word w s,i with the role r s,i into the corpus, conditioned on w and r. This sampling technique rarely finds the best solution, but often it is very near.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-deterministic Inference",
"sec_num": "4.1"
},
{
"text": "Here we present the deterministic algorithm that guarantees to find the most probable tree for a given sentence. We were inspired by Cocke-Younger-Kasami (CYK) algorithm (Lange and Lei\u00df, 2009) .",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Lange and Lei\u00df, 2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "Let T n s,a,c denote the subtree of G s (subgraph of G s that is also a tree) containing subsequence of nodes {a, a + 1, ..., c}. The superscript n denotes the number of children the root of this subtree has. We denote the joint probability of a subtree from position a to position c with the corresponding words conditioned by the root role k as P n ({w s,i } c i=a , T n s,a,c |k). Our goal is to find the tree G s = T 1+ s,0,Ns that maximizes probability P (w s , G s ) = P 1+ ({w s,i } Ns i=0 , T 1+ s,0,Ns |0). Similarly to CYK algorithm, our approach fol-lows bottom-up direction and goes through all possible subsequences for a sentence (sequence of words). At the beginning, the probabilities for subsequences of length 1 (i.e. single words) are calculated as P 1+ ({w s,a }, T 1+ s,a,a |k) = P (w s,a |r s,a = k). Once it has considered subsequences of length 1, it goes on to subsequences of length 2, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "Thanks to mutual independence of roles under the same parent, we can find the most probable subtree with the root role k and with at least two root children according to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "P 2+ ({w s,i } c i=a , T 2+ s,a,c |k) = max b:a<b<c [P 1+ ({w s,i } b i=a , T 1+ s,a,b |k)\u00d7 P 1+ ({w s,i } c i=b+1 , T 1+ s,b+1,c |k)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": ". (11) It means we merge two neighboring subtrees with the same root role k. This is the reason why the new subtree has at least two root children. This formula is visualized on Figure 2a . Unfortunately, this does not cover all subtree cases. We find the most probable tree with only root child as follows Figure 2b .",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 187,
"text": "Figure 2a",
"ref_id": "FIGREF3"
},
{
"start": 307,
"end": 316,
"text": "Figure 2b",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "P 1 ({w s,i } c i=a , T 1 s,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "P 1+ ({w s,i } b\u22121 i=a , T 1+ s,a,b\u22121 |m)\u00d7 P 1+ ({w s,i } c i=b+1 , T 1+ s,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "To find the most probable subtree no matter how many children the root has, we need to take the maximum from both mentioned equations P 1+ = max(P 2+ , P 1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "The algorithm has complexity O(N 3 s K 2 ), i.e. it has cubic dependence on the length of the sentence N s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deterministic Inference",
"sec_num": "4.2"
},
{
"text": "Until now, we presented LTLM in its simplified version. In role-by-role probabilities (role conditioned on its parent role) we did not distinguish whether the role is on the left side or the right side of the parent. However, this position keeps important information about the syntax of words (and their roles).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side-dependent LTLM",
"sec_num": "5"
},
{
"text": "We assume separate multinomial distributions\u03b8 for roles that are on the left and\u03b8 for roles on the right. Each of them has its own Dirichlet prior with hyper-parameters\u03b1 and\u03b1, respectively. The process of estimating LTLM parameters is almost the same. The only difference is that we need to redefine the predictive distribution for the role-by-role distribution (Equation 6) to include only counts of roles on the appropriate side. Also, every time the role-by-role probability is used we need to distinguish sides:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side-dependent LTLM",
"sec_num": "5"
},
{
"text": "P (r s,i |r s,hs(i) ) = \u03b8 (r s,hs(i) ) r s,i for i < h s (i)) \u03b8 (r s,hs(i) ) r s,i for i > h s (i)) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side-dependent LTLM",
"sec_num": "5"
},
{
"text": "(13) In the following text we always assume the sidedependent LTLM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Side-dependent LTLM",
"sec_num": "5"
},
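The side-dependent variant only changes which role-by-role table is consulted at each edge. A small sketch with our own names theta_left and theta_right (the paper distinguishes the two tables with accented theta symbols):

```python
import numpy as np

def role_given_parent_sided(theta_left, theta_right, i, head, parent_role, child_role):
    """Eq. 13: use the left table when the child precedes its head, else the right table."""
    table = theta_left if i < head else theta_right
    return table[parent_role, child_role]

K = 4
rng = np.random.default_rng(2)
theta_left = rng.dirichlet(np.ones(K), size=K)
theta_right = rng.dirichlet(np.ones(K), size=K)
print(role_given_parent_sided(theta_left, theta_right, i=2, head=5, parent_role=0, child_role=1))
```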
{
"text": "In this section we present experiments with LTLM on two languages, English (EN) and Czech (CS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "As a training corpus we use CzEng 1.0 (Bojar et al., 2012) of the sentence-parallel Czech-English corpus. We choose this corpus because it contains multiple domains, it is of reasonable length, and it is parallel so we can easily provide comparison between both languages. The corpus is divided into 100 similarly-sized sections. We use parts 0-97 for training, the part 98 as a development set, and the last part 99 for testing.",
"cite_spans": [
{
"start": 38,
"end": 58,
"text": "(Bojar et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "We have removed all sentences longer than 30 words. The reason was that the complexity of the learning phase and the process of searching most probable trees depends on the length of sentences. It has led to removing approximately a quarter of all sentences. The corpus is available in a tokenized form so the only preprocessing step we use is lowercasing. We keep the vocabulary of 100,000 most frequent words in the corpus for both languages. The less frequent words were replaced by the symbol <unk>. Statistics for the final corpora are shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 550,
"end": 557,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "We measure the quality of LTLM by perplexity, the standard measure used for LMs. Perplexity is a measure of uncertainty: a lower perplexity means a better predictive ability of the LM. During the process of parameter estimation we measure the perplexity of the joint probability of the sentences and their trees, defined as PPX(P(w, G)) = (1/P(w, G))^{1/N}, where N is the number of all words in the training data w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
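Written out, the perplexity used here is PPX(P) = P^(-1/N), the inverse probability normalized per word. Computing it in log space avoids numerical underflow; the helper below is our own sketch, not the paper's code.

```python
import math

def perplexity(log_probs):
    """PPX = exp(-(1/N) * sum of natural-log probabilities), N = number of words."""
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

# toy example: three words with probabilities 0.1, 0.2, 0.05
print(perplexity([math.log(0.1), math.log(0.2), math.log(0.05)]))
```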
{
"text": "As we describe in Section 3, there are two approaches for the parameter estimation of LTLM. During our experiments, we found that the perposition strategy of training has the ability to converge faster, but to a worse solution compared to the per-sentence strategy which converges slower, but to a better solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "We train LTLM by 500 iterations of the perposition sampling followed by another 500 iterations of the per-sentence sampling. This proves to be effi- cient in both aspects, the reasonable speed of convergence and the satisfactory predictive ability of the model. The learning curves are showed on Figure 3. We present the models with 10, 20, 50, 100, 200, 500, and 1000 roles. The higher role cardinality models were not possible to create because of the very high computational requirements. Similarly to the training of LTLM, the non-deterministic inference uses 100 iterations of per-position sampling followed by 100 iterations of per-sentence sampling.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 302,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "In the following experiments we measure how well LTLM generalizes the learned patterns, i.e. how well it works on the previously unseen data. Again, we measure the perplexity, but of probability P (w) for mutual comparison with different LMs that are based on different architectures (PPX(P (w)) = N 1 P (w) ). To show the strengths of LTLM we compare it with several state-of-the-art LMs. We experiment with Modified Kneser-Ney (MKN) interpolation (Chen and Goodman, 1998) , with Recurrent Neural Network LM (RNNLM) (Mikolov et al., 2010; Mikolov et al., 2011) 3 , and with LWLM (Deschacht et al., 2012) 4 . We have also created syntactic dependency tree based LM (denoted as STLM MST parser (McDonald et al., 2005) . We use the same architecture as for LTLM and experiment with two approaches to represent the roles. Firstly, the roles are given by the part-of-speech tag (denoted as PoS STLM). No training is required, all information come from CzEng corpus. Secondly, we learn the roles using the same algorithm as for LTLM. The only difference is that the trees are kept unchanged. Note that both deterministic and non-deterministic inference perform almost the same in this model so we do not distinguish between them. We combine baseline 4-gram MKN model with other models via linear combination (in the tables denoted by the symbol +) that is simple but very efficient technique to combine LMs. Final probability is then expressed as",
"cite_spans": [
{
"start": 449,
"end": 473,
"text": "(Chen and Goodman, 1998)",
"ref_id": "BIBREF6"
},
{
"start": 517,
"end": 539,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF20"
},
{
"start": 540,
"end": 563,
"text": "Mikolov et al., 2011) 3",
"ref_id": null
},
{
"start": 693,
"end": 716,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "P (w) = S s=1 Ns i=1 \u03bbP LM1 + (\u03bb \u2212 1) P LM2 . (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "In the case of MKN the probability P MKN is the probability of a word w s,i conditioned by 3 previous words with MKN smoothing. For LTLM or STLM this probability is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "P LTLM (w s,i |r s,hs(i) ) = K k=1 P (w s,i |r s,i = k)P (r s,i = k|r s,hs(i) ). (15)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "We use the expectation maximization algorithm (Dempster et al., 1977) for the maximum likelihood estimate of \u03bb parameter on the development part of the corpus. The influence of the number of roles on the perplexity is shown in Table 3 Table 4 : Ten most probable word substitutions on each position in the sentence \"Everything has beauty, but not everyone sees it.\" produced by 1000 roles LTLM with the deterministic inference.",
"cite_spans": [
{
"start": 46,
"end": 69,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 235,
"end": 242,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
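Equation 14 is a per-word linear interpolation of two models, and the EM fit of lambda on held-out data reduces to a short loop. The sketch below is a generic implementation of that recipe under assumed inputs (two arrays with the component models' per-word probabilities on the development set); it is not the paper's code.

```python
import numpy as np

def fit_lambda(p1, p2, iterations=50):
    """EM for the interpolation weight: maximizes sum(log(lam*p1 + (1-lam)*p2)) on dev data."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    lam = 0.5
    for _ in range(iterations):
        resp = lam * p1 / (lam * p1 + (1.0 - lam) * p2)  # responsibility of model 1 per word
        lam = resp.mean()
    return lam

def interpolate(p1, p2, lam):
    """Per-word probabilities of the combined model (Eq. 14)."""
    return lam * np.asarray(p1) + (1.0 - lam) * np.asarray(p2)

# toy dev-set probabilities for two component models
p_mkn = [0.02, 0.10, 0.005, 0.30]
p_ltlm = [0.05, 0.02, 0.020, 0.10]
lam = fit_lambda(p_mkn, p_ltlm)
print(lam, interpolate(p_mkn, p_ltlm, lam))
```

With the fitted lambda, the interpolated per-word probabilities can be fed directly into the perplexity computation sketched earlier.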
{
"text": "cantly outperformed STLM where the syntactic dependency trees were provided as a prior knowledge. The joint learning of syntax and semantics of a sentence proved to be more suitable for predicting the words. An in-depth analysis of semantic and syntactic properties of LTLM is beyond the scope of this paper. For better insight into the behavior of LTLM, we show the most probable word substitutions for one selected sentence (see Table 4 ). We can see that the original words are often on the front positions. Also it seems that LTLM is more syntactically oriented, which confirms claims from (Levy and Goldberg, 2014; Pad\u00f3 and Lapata, 2007) , but to draw such conclusions a deeper analysis is required. The properties of the model strongly depends on the number of distinct roles. We experimented with maximally 1000 roles. To catch the meaning of various words in natural language, more roles may be needed. However, with our current implementation, it was intractable to train LTLM with more roles in a reasonable time. Training 1000 roles LTLM took up to two weeks on a powerful computational unit.",
"cite_spans": [
{
"start": 594,
"end": 619,
"text": "(Levy and Goldberg, 2014;",
"ref_id": "BIBREF15"
},
{
"start": 620,
"end": 642,
"text": "Pad\u00f3 and Lapata, 2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 431,
"end": 438,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "6"
},
{
"text": "In this paper we introduced the Latent Tree Language Model. Our model discovers the latent tree structures hidden in natural text and uses them to predict the words in a sentence. Our experiments with English and Czech corpora showed dramatic improvements in the predictive ability compared with standalone Modified Kneser-Ney LM. Our Java implementation is available for research purposes at https://github.com/brychcin/LTLM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "It was beyond the scope of this paper to explicitly test the semantic and syntactic properties of the model. As the main direction for future work we plan to investigate these properties for example by comparison with human-assigned judgments. Also, we want to test our model in different NLP tasks (e.g. speech recognition, machine translation, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "We think that the role-by-role distribution should depend on the distance between the parent and the child, but our preliminary experiments were not met with success. We plan to elaborate on this assumption. Another idea we want to explore is to use different distributions as a prior to multinomials. For example, Blei and Lafferty (2006) showed that the logistic-normal distribution works well for topic modeling because it captures the correlations between topics. The same idea might work for roles.",
"cite_spans": [
{
"start": 315,
"end": 339,
"text": "Blei and Lafferty (2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Implementation is available at http://rnnlm.org/. Size of the hidden layer was set to 300 in our experiments. It was computationally intractable to use more neurons.4 Implementation is available at http://liir.cs. kuleuven.be/software.php.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme \"Projects of Large Research, Development, and Innovations Infrastructures\". Lastly, we would like to thank the anonymous reviewers for their insightful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. Journal of Machine Learning Research, 3:1137-1155, March.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Correlated topic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Blei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei and John D. Lafferty. 2006. Correlated topic models. In In Proceedings of the 23rd Interna- tional Conference on Machine Learning, pages 113- 120. MIT Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The joy of parallelism with czeng 1.0",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Zden\u011bk\u017eabokrtsk\u00fd",
"suffix": ""
},
{
"first": "Petra",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Galu\u0161\u010d\u00e1kov\u00e1",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Majli\u0161",
"suffix": ""
},
{
"first": "Ji\u0159\u00ed",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Mar\u0161\u00edk",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
},
{
"first": "Ale\u0161",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tamchyna",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Zden\u011bk\u017dabokrtsk\u00fd, Ond\u0159ej Du\u0161ek, Pe- tra Galu\u0161\u010d\u00e1kov\u00e1, Martin Majli\u0161, David Mare\u010dek, Ji\u0159\u00ed Mar\u0161\u00edk, Michal Nov\u00e1k, Martin Popel, and Ale\u0161 Tam- chyna. 2012. The joy of parallelism with czeng 1.0. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey, may. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Classbased n-gram models of natural language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vin- cent J. Della Pietra, and Jenifer C. Lai. 1992. Class- based n-gram models of natural language. Computa- tional Linguistics, 18:467-479.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic spaces for improving language modeling",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Brychc\u00edn",
"suffix": ""
},
{
"first": "Miloslav",
"middle": [],
"last": "Konop\u00edk",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Speech & Language",
"volume": "28",
"issue": "1",
"pages": "192--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Brychc\u00edn and Miloslav Konop\u00edk. 2014. Semantic spaces for improving language modeling. Computer Speech & Language, 28(1):192-209.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent semantics in language models",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Brychc\u00edn",
"suffix": ""
},
{
"first": "Miloslav",
"middle": [],
"last": "Konop\u00edk",
"suffix": ""
}
],
"year": 2015,
"venue": "Computer Speech & Language",
"volume": "33",
"issue": "1",
"pages": "88--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Brychc\u00edn and Miloslav Konop\u00edk. 2015. Latent semantics in language models. Computer Speech & Language, 33(1):88-108.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "F",
"middle": [],
"last": "Stanley",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"T"
],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Joshua T. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report, Computer Science Group, Harvard University.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Logistic normal priors for unsupervised probabilistic grammar induction",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Gimpel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems 21",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen, Kevin Gimpel, and Noah A. Smith. 2009. Logistic normal priors for unsupervised prob- abilistic grammar induction. In Advances in Neural Information Processing Systems 21, pages 1-8.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Maximum likelihood from incomplete data via the em algorithm",
"authors": [
{
"first": "Arthur",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society. Series B",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society. Se- ries B, 39(1):1-38.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The latent words language model",
"authors": [
{
"first": "Koen",
"middle": [],
"last": "Deschacht",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"De"
],
"last": "Belder",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2012,
"venue": "Computer Speech & Language",
"volume": "26",
"issue": "5",
"pages": "384--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koen Deschacht, Jan De Belder, and Marie-Francine Moens. 2012. The latent words language model. Computer Speech & Language, 26(5):384-409.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "146--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(23):146-162.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving unsupervised dependency parsing with richer contexts and smoothing",
"authors": [
{
"first": "P",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Headden",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William P. Headden III, Mark Johnson, and David Mc- Closky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Pro- ceedings of Human Language Technologies: The 2009",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "101--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 101-109, Boulder, Colorado, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dependency parsing. Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis Lectures on Hu- man Language Technologies, 2(1):1-127.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "To cnf or not to cnf? an efficient yet presentable version of the cyk algorithm",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Lange",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Lei\u00df",
"suffix": ""
}
],
"year": 2009,
"venue": "Informatica Didactica",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Lange and Hans Lei\u00df. 2009. To cnf or not to cnf? an efficient yet presentable version of the cyk algorithm. Informatica Didactica, 8.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of the 52nd",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "2",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302-308, Baltimore, Maryland, June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Stopprobability estimates computed on a large corpus improve unsupervised dependency parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "281--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mare\u010dek and Milan Straka. 2013. Stop- probability estimates computed on a large corpus im- prove unsupervised dependency parsing. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 281-290, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Algorithms for bigram and trigram word clustering",
"authors": [
{
"first": "Sven",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Jorg",
"middle": [],
"last": "Liermann",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1998,
"venue": "Speech Communication",
"volume": "24",
"issue": "1",
"pages": "19--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven Martin, Jorg Liermann, and Hermann Ney. 1998. Algorithms for bigram and trigram word clustering. Speech Communication, 24(1):19-37.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the Conference on Human Language Technology and Em- pirical Methods in Natural Language Processing, HLT '05, pages 523-530, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ja\u0148",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010)",
"volume": "2010",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010), volume 2010, pages 1045-1048. International Speech Communication Association.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Extensions of recurrent neural network language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Kombrink",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ja\u0148",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "5528--5531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Stefan Kombrink, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u00fd, and Sanjeev Khudanpur. 2011. Exten- sions of recurrent neural network language model. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 5528-5531, Prague Congress Center, Prague, Czech Republic.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Estimating a dirichlet distribution",
"authors": [
{
"first": "Thomas",
"middle": [
"P"
],
"last": "Minka",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas P. Minka. 2003. Estimating a dirichlet distribu- tion. Technical report.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Dependencybased construction of semantic space models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency- based construction of semantic space models. Compu- tational Linguistics, 33(2):161-199, June.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Perplexity of n-gram and dependency language models",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 13th International Conference on Text, Speech and Dialogue, TSD'10",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Popel and David Mare\u010dek. 2010. Perplex- ity of n-gram and dependency language models. In Proceedings of the 13th International Conference on Text, Speech and Dialogue, TSD'10, pages 173-180, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Viterbi training improves unsupervised dependency parsing",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "9--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and Christopher D. Manning. 2010. Viterbi training improves unsupervised dependency parsing. In Pro- ceedings of the Fourteenth Conference on Computa- tional Natural Language Learning, pages 9-17, Up- psala, Sweden, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised dependency parsing without gold part-of-speech tags",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1281--1290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, Angel X. Chang, and Daniel Jurafsky. 2011. Unsupervised dependency parsing without gold part-of-speech tags. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1281-1290, Ed- inburgh, Scotland, UK., July. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Language modelling for russian and english using words and classes",
"authors": [
{
"first": "Edward",
"middle": [
"W",
"D"
],
"last": "Whittaker",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"C"
],
"last": "Woodland",
"suffix": ""
}
],
"year": 2003,
"venue": "Computer Speech & Language",
"volume": "17",
"issue": "1",
"pages": "87--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward W. D. Whittaker and Philip C. Woodland. 2003. Language modelling for russian and english using words and classes. Computer Speech & Language, 17(1):87-104.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example of LTLM for the sentence \"Everything has beauty, but not everyone sees it.\"",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "posterior predictive distribution is the distribution of an unobserved variable conditioned by the observed data, i.e. P (Xn+1|X1, ..., Xn), where Xi are i.i.d. (independent and identically distributed random variables).",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "(a) The root has two or more children.(b)The root has only one child.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Searching the most probable subtrees.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "a,c |k) = max b,m:a\u2264b\u2264c,1\u2264m\u2264K [P (w s,b |r s,b = m) \u00d7 P (r s,b = m|k)\u00d7",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "b+1,c |m)]. (12) This formula is visualized on",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF6": {
"text": "Learning curves of LTLM for both English and Czech. The points in the graphs represent the perplexities in every 100th iteration.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF7": {
"text": "Model weights optimized on development data when interpolated with 4-gram MKN LM. results are shown inTable 2. Note that these perplexities are not comparable with those onFigure 3(PPX(P (w)) vs. PPX(P (w, G))). Weights of LTLM and STLM when interpolated with MKN LM are shown onFigure 4.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "Corpora statistics. OOV rate denotes the out-of-vocabulary rate.",
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"num": null,
"text": "Perplexity results on the test data. The numbers in brackets are the relative improvements compared with standalone 4-gram MKN LM.",
"html": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>Model\\roles</td><td>10</td><td>20</td><td>EN 50 100 200 500 1000</td><td>10</td><td>20</td><td>CS 50 100 200 500 1000</td></tr><tr><td/><td/><td/><td/><td/><td/><td>7 41.3</td></tr><tr><td colspan=\"7\">4-gram MKN + det. LTLM 39.9 36.4 32.8 30.3 28.1 26.0 24.9 64.4 56.1 51.5 47.3 43.4 39.9 37.2</td></tr><tr><td/><td/><td/><td/><td/><td/><td>).</td></tr><tr><td/><td/><td/><td colspan=\"4\">Syntactic dependency trees for both languages are</td></tr><tr><td/><td/><td/><td colspan=\"4\">provided within CzEng corpus and are based on</td></tr></table>",
"num": null,
"text": "STLM 408.5 335.2 261.7 212.6 178.9 137.8 113.7 992.7 764.2 556.4 451.0 365.9 265.7 211.0 non-det. LTLM 329.5 215.1 160.4 126.5 105.6 86.7 78.4 851.0 536.6 367.4 292.6 235.2 186.1 157.6 det. LTLM 252.4 166.4 115.3 92.0 75.4 60.9 54.2 708.5 390.2 267.8 213.2 167.9 133.5 111.1 4-gram MKN + STLM 42.7 41.6 39.9 37.9 36.3 34.9 33.6 67.5 65.1 61.4 58.3 55.5 52.4 50.1 4-gram MKN + non-det. LTLM 41.1 38.0 35.2 32.7 30.7 28.9 27.8 65.8 59.4 55.1 51.1 47.5 43.",
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"num": null,
"text": "Perplexity results on the test data for LTLMs and STLMs with different number of roles. Deterministic inference is denoted as det. and non-deterministic inference as non-det.",
"html": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>everything it that let there something nothing everything here someone god</td><td>has 's is was knows really says comes does gets has</td><td>beauty one thing life name father mother way wife place idea</td><td>, , ; --... : ( ? naught</td><td>but but course though or perhaps and maybe although yet except</td><td>not was it not this that the now had &lt;unk&gt; all</td><td>everyone he i she they that it who you someone which</td><td>sees saw made found took gave told felt thought knew heard</td><td>it him it her them his me a out that himself</td><td>. . ! ... ' what \" how why --</td></tr></table>",
"num": null,
"text": "From the tables we can see several important findings. Standalone LTLM performs worse than MKN on both languages, however their combination leads to dramatic improvements compared with other LMs. Best results are achieved by 4gram MKN interpolated with 1000 roles LTLM and the deterministic inference. The perplexity was improved by approximately 46% on English and 49% on Czech compared with standalone MKN. The deterministic inference outperformed the nondeterministic one in all cases. LTLM also signifi-",
"html": null,
"type_str": "table"
}
}
}
}