Detailed balance : For any reaction mechanism and a given positive equilibrium, a cone of possible velocities for the systems with detailed balance is defined for any non-equilibrium state N: Q_{DB}(N) = \operatorname{cone}\{\gamma_r \operatorname{sgn}(w_r^+(N) - w_r^-(N)) \mid r = 1, \ldots, m\}, where cone stands for the conical hull, \gamma_r is the stoichiometric vector of the rth reaction, and the piecewise-constant functions \operatorname{sgn}(w_r^+(N) - w_r^-(N)) do not depend on the (positive) values of the equilibrium reaction rates w_r^{eq} and are defined by thermodynamic quantities under the assumption of detailed balance. The cone theorem states that, for the given reaction mechanism and given positive equilibrium, the velocity dN/dt at a state N for a system with complex balance belongs to the cone Q_{DB}(N). That is, there exists a system with detailed balance, with the same reaction mechanism and the same positive equilibrium, that gives the same velocity at state N. According to the cone theorem, for a given state N, the set of velocities of the semidetailed balance systems coincides with the set of velocities of the detailed balance systems if their reaction mechanisms and equilibria coincide. This means local equivalence of detailed and complex balance.
Detailed balance : Detailed balance states that in equilibrium each elementary process is equilibrated by its reverse process and requires reversibility of all elementary processes. For many real physico-chemical complex systems (e.g. homogeneous combustion, heterogeneous catalytic oxidation, most enzyme reactions, etc.), detailed mechanisms include both reversible and irreversible reactions. If one represents irreversible reactions as limits of reversible steps, then it becomes obvious that not all reaction mechanisms with irreversible reactions can be obtained as limits of systems of reversible reactions with detailed balance. For example, the irreversible cycle A_1 → A_2 → A_3 → A_1 cannot be obtained as such a limit, but the reaction mechanism A_1 → A_2 → A_3 ← A_1 can. Gorban–Yablonsky theorem. A system of reactions with some irreversible reactions is a limit of systems with detailed balance, when some constants tend to zero, if and only if (i) the reversible part of this system satisfies the principle of detailed balance and (ii) the convex hull of the stoichiometric vectors of the irreversible reactions has empty intersection with the linear span of the stoichiometric vectors of the reversible reactions. Physically, the last condition means that the irreversible reactions cannot be included in oriented cyclic pathways.
Detailed balance : T-symmetry Microscopic reversibility Master equation Balance equation Gibbs sampling Metropolis–Hastings algorithm Atomic spectral line (deduction of the Einstein coefficients) Random walks on graphs == References ==
Diffusion model : In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of latent variable generative models. A diffusion model consists of three major components: the forward process, the reverse process, and the sampling procedure. The goal of diffusion models is to learn a diffusion process for a given dataset, such that the process can generate new elements that are distributed similarly to the original dataset. A diffusion model models data as generated by a diffusion process, whereby a new datum performs a random walk with drift through the space of all possible data. A trained diffusion model can be sampled in many ways, with different efficiency and quality. There are various equivalent formalisms, including Markov chains, denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations. They are typically trained using variational inference. The model responsible for denoising is typically called its "backbone". The backbone may be of any kind, but it is typically a U-net or a transformer. As of 2024, diffusion models are mainly used for computer vision tasks, including image denoising, inpainting, super-resolution, image generation, and video generation. These typically involve training a neural network to sequentially denoise images blurred with Gaussian noise. The model is trained to reverse the process of adding noise to an image. After training to convergence, it can be used for image generation by starting with an image composed of random noise and applying the network iteratively to denoise it. Diffusion-based image generators have seen widespread commercial interest, such as Stable Diffusion and DALL-E. These models typically combine diffusion models with other models, such as text encoders and cross-attention modules, to allow text-conditioned generation. Beyond computer vision, diffusion models have also found applications in natural language processing such as text generation and summarization, sound generation, and reinforcement learning.
Diffusion model : Score-based generative model is another formulation of diffusion modelling. They are also called noise conditional score network (NCSN) or score-matching with Langevin dynamics (SMLD).
Diffusion model : DDPM and score-based generative models are equivalent. This means that a network trained using DDPM can be used as an NCSN, and vice versa. We know that x_t | x_0 \sim N(\sqrt{\bar\alpha_t}\, x_0, \sigma_t^2 I), so by Tweedie's formula we have \nabla_{x_t} \ln q(x_t) = \frac{1}{\sigma_t^2}\left(-x_t + \sqrt{\bar\alpha_t}\, E_q[x_0 | x_t]\right). As described previously, the DDPM loss function is \sum_t L_{simple,t} with L_{simple,t} = E_{x_0 \sim q;\, z \sim N(0,I)}\left[\left\| \epsilon_\theta(x_t, t) - z \right\|^2\right], where x_t = \sqrt{\bar\alpha_t}\, x_0 + \sigma_t z. By a change of variables, L_{simple,t} = E_{x_0, x_t \sim q}\left[\left\| \epsilon_\theta(x_t, t) - \frac{x_t - \sqrt{\bar\alpha_t}\, x_0}{\sigma_t} \right\|^2\right] = E_{x_t \sim q,\, x_0 \sim q(\cdot | x_t)}\left[\left\| \epsilon_\theta(x_t, t) - \frac{x_t - \sqrt{\bar\alpha_t}\, x_0}{\sigma_t} \right\|^2\right], and the term inside becomes a least squares regression, so if the network actually reaches the global minimum of loss, then we have \epsilon_\theta(x_t, t) = \frac{x_t - \sqrt{\bar\alpha_t}\, E_q[x_0 | x_t]}{\sigma_t} = -\sigma_t \nabla_{x_t} \ln q(x_t). Thus, a score-based network predicts noise, and can be used for denoising. Conversely, the continuous limit x_{t-1} = x_{t-dt}, \beta_t = \beta(t)\, dt, z_t \sqrt{dt} = dW_t of the backward equation x_{t-1} = \frac{x_t}{\sqrt{\alpha_t}} - \frac{\beta_t}{\sigma_t \sqrt{\alpha_t}} \epsilon_\theta(x_t, t) + \sqrt{\beta_t}\, z_t, \quad z_t \sim N(0, I), gives us precisely the same equation as score-based diffusion: x_{t-dt} = x_t (1 + \beta(t)\, dt / 2) + \beta(t) \nabla_{x_t} \ln q(x_t)\, dt + \sqrt{\beta(t)}\, dW_t. Thus, at infinitesimal steps of DDPM, a denoising network performs score-based diffusion.
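The identity \epsilon_\theta(x_t, t) = -\sigma_t \nabla_{x_t} \ln q(x_t) means a trained noise predictor can be converted into a score estimate in one line. The following is a minimal sketch of that conversion; eps_theta and sigma are placeholder names for a trained noise-prediction network and its noise-scale schedule, not identifiers from the source.

```python
def score_from_eps(eps_theta, x_t, t, sigma):
    """Turn a DDPM noise predictor into a score estimate.

    Uses the identity eps_theta(x_t, t) = -sigma_t * grad_{x_t} log q(x_t)
    derived above: the score is the negated noise prediction divided by sigma_t.
    Works with NumPy arrays or PyTorch tensors alike.
    """
    return -eps_theta(x_t, t) / sigma[t]
```

A sampler written against a score network (for example annealed Langevin dynamics) can therefore reuse a DDPM backbone unchanged through this kind of wrapper.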
Diffusion model : Abstractly speaking, the idea of a diffusion model is to take an unknown probability distribution (the distribution of natural-looking images), then progressively convert it to a known probability distribution (the standard Gaussian distribution), by building an absolutely continuous probability path connecting them. The probability path is in fact defined implicitly by the score function \nabla \ln p_t. In denoising diffusion models, the forward process adds noise, and the backward process removes noise. Both the forward and backward processes are SDEs, though the forward process is integrable in closed form, so it can be done at no computational cost. The backward process is not integrable in closed form, so it must be integrated step by step by standard SDE solvers, which can be very expensive. The probability path in diffusion models is defined through an Itô process, and one can retrieve the deterministic process by using the probability flow ODE formulation. In flow-based diffusion models, the forward process is a deterministic flow along a time-dependent vector field, and the backward process is also a deterministic flow along the same vector field, but going backwards. Both processes are solutions to ODEs. If the vector field is well-behaved, the ODE will also be well-behaved. Given two distributions \pi_0 and \pi_1, a flow-based model is a time-dependent velocity field v_t(x) on [0,1] \times \mathbb{R}^d, such that if we start by sampling a point x \sim \pi_0 and let it move according to the velocity field, \frac{d}{dt} \phi_t(x) = v_t(\phi_t(x)), \quad t \in [0,1], starting from \phi_0(x) = x, we end up with a point x_1 \sim \pi_1. The solution \phi_t of the above ODE defines a probability path p_t = [\phi_t]_\# \pi_0 by the pushforward measure operator. In particular, [\phi_1]_\# \pi_0 = \pi_1. The probability path and the velocity field also satisfy the continuity equation, in the sense of probability distributions: \partial_t p_t + \nabla \cdot (v_t p_t) = 0. To construct a probability path, we start by constructing a conditional probability path p_t(x | z) and the corresponding conditional velocity field v_t(x | z) on some conditioning distribution q(z). A natural choice is the Gaussian conditional probability path p_t(x | z) = N(m_t(z), \zeta_t^2 I). The conditional velocity field that corresponds to the geodesic path between conditional Gaussians is v_t(x | z) = \frac{\zeta_t'}{\zeta_t}(x - m_t(z)) + m_t'(z). The probability path and velocity field are then computed by marginalizing: p_t(x) = \int p_t(x | z)\, q(z)\, dz and v_t(x) = \mathbb{E}_{q(z)}\left[ \frac{v_t(x | z)\, p_t(x | z)}{p_t(x)} \right].
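The conditional Gaussian path above is concrete enough to code directly. Below is a minimal sketch, assuming the linear schedule m_t(z) = t z and \zeta_t = 1 - t (an illustrative choice, not prescribed by the text), which interpolates from a standard Gaussian at t = 0 to the data point z at t = 1.

```python
import numpy as np

# Hypothetical schedule: mean m_t(z) = t*z, standard deviation zeta_t = 1 - t.
def m(t, z):        # conditional mean m_t(z)
    return t * z

def m_dot(t, z):    # time derivative m_t'(z)
    return z

def zeta(t):        # conditional standard deviation zeta_t
    return 1.0 - t

def zeta_dot(t):    # time derivative zeta_t'
    return -1.0

def conditional_velocity(x, t, z):
    """v_t(x|z) = (zeta_t'/zeta_t) (x - m_t(z)) + m_t'(z)."""
    return (zeta_dot(t) / zeta(t)) * (x - m(t, z)) + m_dot(t, z)

rng = np.random.default_rng(0)
z = rng.normal(size=2)                          # a conditioning ("data") point
t = 0.5
x = m(t, z) + zeta(t) * rng.normal(size=2)      # a sample x ~ p_t(.|z) = N(m_t(z), zeta_t^2 I)
print(conditional_velocity(x, t, z))
```

In flow matching, a network v_theta(x, t) is regressed onto these conditional velocities over samples of z, t, and x, which recovers the marginal velocity field without ever evaluating the intractable marginal p_t(x).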
Diffusion model : This section collects some notable diffusion models, and briefly describes their architecture.
Diffusion model : Diffusion process Markov chain Variational inference Variational autoencoder
Diffusion model : Review papers Yang, Ling (2024-09-06), YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy, retrieved 2024-09-06 Yang, Ling; Zhang, Zhilong; Song, Yang; Hong, Shenda; Xu, Runsheng; Zhao, Yue; Zhang, Wentao; Cui, Bin; Yang, Ming-Hsuan (2023-11-09). "Diffusion Models: A Comprehensive Survey of Methods and Applications". ACM Comput. Surv. 56 (4): 105:1–105:39. arXiv:2209.00796. doi:10.1145/3626235. ISSN 0360-0300. Austin, Jacob; Johnson, Daniel D.; Ho, Jonathan; Tarlow, Daniel; Rianne van den Berg (2021). "Structured Denoising Diffusion Models in Discrete State-Spaces". arXiv:2107.03006 [cs.LG]. Croitoru, Florinel-Alin; Hondru, Vlad; Ionescu, Radu Tudor; Shah, Mubarak (2023-09-01). "Diffusion Models in Vision: A Survey". IEEE Transactions on Pattern Analysis and Machine Intelligence. 45 (9): 10850–10869. arXiv:2209.04747. doi:10.1109/TPAMI.2023.3261988. ISSN 0162-8828. PMID 37030794. Mathematical details omitted in the article. "Power of Diffusion Models". AstraBlog. 2022-09-25. Retrieved 2023-09-25. Luo, Calvin (2022-08-25). "Understanding Diffusion Models: A Unified Perspective". arXiv:2208.11970 [cs.LG]. Weng, Lilian (2021-07-11). "What are Diffusion Models?". lilianweng.github.io. Retrieved 2023-09-25. Tutorials Nakkiran, Preetum; Bradley, Arwen; Zhou, Hattie; Advani, Madhu (2024). "Step-by-Step Diffusion: An Elementary Tutorial". arXiv:2406.08929 [cs.LG]. "Guidance: a cheat code for diffusion models". 26 May 2022. Overview of classifier guidance and classifier-free guidance, light on mathematical details. == References ==
Discrete phase-type distribution : The discrete phase-type distribution is a probability distribution that results from a system of one or more inter-related geometric distributions occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random variable describing the time until absorption of an absorbing Markov chain with one absorbing state. Each of the states of the Markov chain represents one of the phases. It has a continuous-time equivalent, the phase-type distribution.
Discrete phase-type distribution : A terminating Markov chain is a Markov chain where all states are transient, except one which is absorbing. Reordering the states, the transition probability matrix of a terminating Markov chain with m transient states is P = \begin{bmatrix} T & \mathbf{T}^0 \\ \mathbf{0}^\mathsf{T} & 1 \end{bmatrix}, where T is an m \times m matrix, \mathbf{T}^0 and \mathbf{0} are column vectors with m entries, and \mathbf{T}^0 + T\mathbf{1} = \mathbf{1} (each row of P sums to one). The transition matrix is characterized entirely by its upper-left block T. Definition. A distribution on the non-negative integers is a discrete phase-type distribution if it is the distribution of the first passage time to the absorbing state of a terminating Markov chain with finitely many states.
Discrete phase-type distribution : Fix a terminating Markov chain. Denote by T the upper-left block of its transition matrix and by \tau the initial distribution. The distribution of the first time to the absorbing state is denoted \mathrm{PH}_d(\tau, T) or \mathrm{DPH}(\tau, T). Its cumulative distribution function is F(k) = 1 - \tau T^k \mathbf{1}, for k = 1, 2, ..., and its density function is f(k) = \tau T^{k-1} \mathbf{T}^0, for k = 1, 2, .... It is assumed the probability of the process starting in the absorbing state is zero. The factorial moments of the distribution function are given by E[K(K-1)\cdots(K-n+1)] = n!\, \tau (I - T)^{-n} T^{n-1} \mathbf{1}, where I is the identity matrix of the appropriate dimension.
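These formulas translate directly into a few lines of code. The sketch below evaluates the density and distribution function of a discrete phase-type distribution from \tau and T; the numerical values are hypothetical and chosen so that two identical phases in sequence reproduce a negative binomial distribution (see the special cases listed next).

```python
import numpy as np

def dph_pmf(tau, T, k):
    """f(k) = tau T^(k-1) T0, where T0 = 1 - T*1 holds the exit probabilities."""
    T, tau = np.asarray(T, float), np.asarray(tau, float)
    T0 = 1.0 - T.sum(axis=1)
    return float(tau @ np.linalg.matrix_power(T, k - 1) @ T0)

def dph_cdf(tau, T, k):
    """F(k) = 1 - tau T^k 1."""
    T, tau = np.asarray(T, float), np.asarray(tau, float)
    return float(1.0 - tau @ np.linalg.matrix_power(T, k) @ np.ones(T.shape[0]))

# Hypothetical example: two identical phases in sequence, each left with
# probability p per step; the first passage time is negative binomial.
p = 0.3
T = [[1 - p, p], [0.0, 1 - p]]
tau = [1.0, 0.0]
print([round(dph_pmf(tau, T, k), 4) for k in range(1, 6)])   # 0.0, p^2, ...
print(round(dph_cdf(tau, T, 5), 4))
```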
Discrete phase-type distribution : Just as the continuous time distribution is a generalisation of the exponential distribution, the discrete time distribution is a generalisation of the geometric distribution, for example: Degenerate distribution, point mass at zero or the empty phase-type distribution – 0 phases. Geometric distribution – 1 phase. Negative binomial distribution – 2 or more identical phases in sequence. Mixed Geometric distribution – 2 or more non-identical phases, that each have a probability of occurring in a mutually exclusive, or parallel, manner. This is the discrete analogue of the Hyperexponential distribution, but it is not called the Hypergeometric distribution, since that name is in use for an entirely different type of discrete distribution.
Discrete phase-type distribution : Phase-type distribution Queueing model Queueing theory
Discrete phase-type distribution : M. F. Neuts. Matrix-Geometric Solutions in Stochastic Models: an Algorithmic Approach, Chapter 2: Probability Distributions of Phase Type; Dover Publications Inc., 1981. G. Latouche, V. Ramaswami. Introduction to Matrix Analytic Methods in Stochastic Modelling, 1st edition. Chapter 2: PH Distributions; ASA SIAM, 1999.
Dynamic Markov compression : Dynamic Markov compression (DMC) is a lossless data compression algorithm developed by Gordon Cormack and Nigel Horspool. It uses predictive arithmetic coding similar to prediction by partial matching (PPM), except that the input is predicted one bit at a time (rather than one byte at a time). DMC has a good compression ratio and moderate speed, similar to PPM, but requires somewhat more memory and is not widely implemented. Some recent implementations include the experimental compression programs hook by Nania Francesco Antonio, ocamyd by Frank Schwellinger, and as a submodel in paq8l by Matt Mahoney. These are based on the 1993 implementation in C by Gordon Cormack.
Dynamic Markov compression : DMC predicts and codes one bit at a time. It differs from PPM in that it codes bits rather than bytes, and from context mixing algorithms such as PAQ in that there is only one context per prediction. The predicted bit is then coded using arithmetic coding.
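The bit-level prediction step can be sketched in a few lines. The snippet below shows only the per-state counting and probability estimate that would feed the arithmetic coder; the state-cloning machinery that gives DMC its adaptive model structure is deliberately omitted, so this is an illustrative fragment rather than a faithful implementation.

```python
class DMCState:
    """One state of a simplified DMC model: counts for outgoing bit-0 and bit-1 transitions."""

    def __init__(self):
        self.count = [0.2, 0.2]    # small non-zero initial counts avoid zero probabilities
        self.next = [None, None]   # successor states for bit 0 and bit 1

    def p_one(self):
        """Probability estimate that the next bit is 1, passed to the arithmetic coder."""
        return self.count[1] / (self.count[0] + self.count[1])

    def update(self, bit):
        """After the bit is (de)coded, bump its count and follow the transition."""
        self.count[bit] += 1
        return self.next[bit] if self.next[bit] is not None else self
```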
Dynamic Markov compression : Data Compression Using Dynamic Markov Modelling Google Developers YouTube channel: Compressor Head Episode 3 (Markov Chain Compression) ( Page will play audio when loaded)
Dynamics of Markovian particles : Dynamics of Markovian particles (DMP) is the basis of a theory for kinetics of particles in open heterogeneous systems. It can be looked upon as an application of the notion of stochastic process conceived as a physical entity; e.g. the particle moves because there is a transition probability acting on it. Two particular features of DMP might be noticed: (1) an ergodic-like relation between the motion of particle and the corresponding steady state, and (2) the classic notion of geometric volume appears nowhere (e.g. a concept such as flow of "substance" is not expressed as liters per time unit but as number of particles per time unit). Although primitive, DMP has been applied for solving a classic paradox of the absorption of mercury by fish and by mollusks. The theory has also been applied for a purely probabilistic derivation of the fundamental physical principle: conservation of mass; this might be looked upon as a contribution to the old and ongoing discussion of the relation between physics and probability theory.
Dynamics of Markovian particles : Bergner—DMP, a kinetics of macroscopic particles in open heterogeneous systems
Kruskal count : The Kruskal count (also known as Kruskal's principle, Dynkin–Kruskal count, Dynkin's counting trick, Dynkin's card trick, coupling card trick or shift coupling) is a probabilistic concept originally demonstrated by the Russian mathematician Evgenii Borisovich Dynkin in the 1950s or 1960s discussing coupling effects and rediscovered as a card trick by the American mathematician Martin David Kruskal in the early 1970s as a side-product while working on another problem. It was published by Kruskal's friend Martin Gardner and magician Karl Fulves in 1975. This is related to a similar trick published by magician Alexander F. Kraus in 1957 as Sum total and later called Kraus principle. Besides uses as a card trick, the underlying phenomenon has applications in cryptography, code breaking, software tamper protection, code self-synchronization, control-flow resynchronization, design of variable-length codes and variable-length instruction sets, web navigation, object alignment, and others.
Kruskal count : The trick is performed with cards, but is more a magical-looking effect than a conventional magic trick. The magician has no access to the cards, which are manipulated by members of the audience. Thus sleight of hand is not possible. Rather the effect is based on the mathematical fact that the output of a Markov chain, under certain conditions, is typically independent of the input. A simplified version using the hands of a clock is as follows. A volunteer picks a number from one to twelve and does not reveal it to the magician. The volunteer is instructed to start from 12 on the clock and move clockwise by a number of spaces equal to the number of letters that the chosen number has when spelled out. This is then repeated, moving by the number of letters in the new number. The output after three or more moves does not depend on the initially chosen number and therefore the magician can predict it.
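The clock version can be checked with a short simulation; this is an illustrative sketch, not part of the source, that spells out each number and iterates the move rule.

```python
# Clock version of the Kruskal count: start at 12, move clockwise by the number
# of letters in the chosen number's name, then keep moving by the letters of
# whatever number you land on. All starting choices soon coincide.
NAMES = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five", 6: "six",
         7: "seven", 8: "eight", 9: "nine", 10: "ten", 11: "eleven", 12: "twelve"}

def advance(pos, steps):
    """Move `steps` spaces clockwise from `pos` on a 12-hour clock face."""
    return (pos + steps - 1) % 12 + 1

def run(chosen, moves=3):
    pos = advance(12, len(NAMES[chosen]))       # first move: from 12, by the chosen number's letters
    for _ in range(moves - 1):
        pos = advance(pos, len(NAMES[pos]))     # subsequent moves: by the current number's letters
    return pos

print({n: run(n) for n in range(1, 13)})        # every starting choice lands on the same number
```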
Kruskal count : Coupling (probability) Discrete logarithm Equifinality Ergodic theory Geometric distribution Overlapping instructions Pollard's kangaroo algorithm Random walk Self-synchronizing code
Kruskal count : Dynkin [Ды́нкин], Evgenii Borisovich [Евге́ний Бори́сович]; Uspenskii [Успе́нский], Vladimir Andreyevich [Влади́мир Андре́евич] (1963). Written at University of Moscow, Moscow, Russia. Putnam, Alfred L.; Wirszup, Izaak (eds.). Random Walks (Mathematical Conversations Part 3). Survey of Recent East European Mathematical Literature. Vol. 3. Translated by Whaland, Jr., Norman D.; Titelbaum, Olga A. (1 ed.). Boston, Massachusetts, US: The University of Chicago / D. C. Heath and Company. LCCN 63-19838. Retrieved 2023-09-03. (1+9+80+9+1 pages) [8] (NB. This is a translation of the first Russian edition published as "Математические беседы: Задачи о многоцветной раскраске / Задачи из теории чисел / Случайные блуждания"[9] by GTTI (ГТТИ) in March 1952 as Number 6 in Library of the Mathematics Circle (Библиотека математического кружка). It is based on seminars held at the School Mathematics Circle in 1945/1946 and 1946/1947 at Moscow State University.) Dynkin [Ды́нкин], Evgenii Borisovich [Евге́ний Бори́сович] (1965) [1963-03-10, 1962-03-31]. Written at University of Moscow, Moscow, Russia. Markov Processes-I. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete. Vol. I (121). Translated by Fabius, Jaap [at Wikidata]; Greenberg, Vida Lazarus [at Wikidata]; Maitra, Ashok Prasad [at Wikidata]; Majone, Giandomenico (1 ed.). New York, US / Berlin, Germany: Springer-Verlag (Academic Press, Inc.). doi:10.1007/978-3-662-00031-1. ISBN 978-3-662-00033-5. ISSN 0072-7830. LCCN 64-24812. S2CID 251691119. Title-No. 5104. Retrieved 2023-09-02. [10] (xii+365+1 pages); Dynkin, Evgenii Borisovich (1965) [1963-03-10, 1962-03-31]. Written at University of Moscow, Moscow, Russia. Markov Processes-II. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete. Vol. II (122). Translated by Fabius, Jaap [at Wikidata]; Greenberg, Vida Lazarus [at Wikidata]; Maitra, Ashok Prasad [at Wikidata]; Majone, Giandomenico (1 ed.). New York, US / Berlin, Germany: Springer-Verlag. doi:10.1007/978-3-662-25360-1. ISBN 978-3-662-23320-7. ISSN 0072-7830. LCCN 64-24812. Title-No. 5105. Retrieved 2023-09-02. (viii+274+2 pages) (NB. This was originally published in Russian as "Markovskie prot︠s︡essy" (Марковские процессы) by Fizmatgiz (Физматгиз) in 1963 and translated to English with the assistance of the author.) Dynkin [Ды́нкин], Evgenii Borisovich [Евге́ний Бори́сович]; Yushkevish [Юшкевич], Aleksandr Adol'fovich [Александр Адольфович] [in German] (1969) [1966-01-22]. Written at University of Moscow, Moscow, Russia. Markov Processes: Theorems and Problems (PDF). Translated by Wood, James S. (1 ed.). New York, US: Plenum Press / Plenum Publishing Corporation. LCCN 69-12529. Archived (PDF) from the original on 2023-09-06. Retrieved 2023-09-03. (x+237 pages) (NB. This is a corrected translation of the first Russian edition published as "Теоремы и задачи о процессах Маркова" by Nauka Press (Наука) in 1967 as part of a series on Probability Theory and Mathematical Statistics (Теория вероятностей и математическая статистика) with the assistance of the authors. It is based on lectures held at the Moscow State University in 1962/1963.) Marlo, Edward "Ed" (1976-12-01). Written at Chicago, Illinois, US. Hudson, Charles (ed.). 
"Approach & Uses for the "Kruskal Kount" / First Presentation Angle / Second Presentation Angle - Checking the Deck / Third Presentation Angle - The 100% Method / Fourth Presentation Angle - "Disaster"". Card Corner. The Linking Ring. Vol. 56, no. 12. Bluffton, Ohio, US: International Brotherhood of Magicians. pp. 82, 83, 83, 84, 85–87. ISSN 0024-4023. Hudson, Charles (1977-10-01). Written at Chicago, Illinois, US. "The Kruskal Principle". Card Corner. The Linking Ring. Vol. 57, no. 10. Bluffton, Ohio, US: International Brotherhood of Magicians. p. 85. ISSN 0024-4023. Gardner, Martin (September 1998). "Ten Amazing Mathematical Tricks". Gardner's Gatherings. Math Horizons. Vol. 6, no. 1. Mathematical Association of America / Taylor & Francis, Ltd. pp. 13–15, 26. ISSN 1072-4117. JSTOR 25678174. (4 pages) Haigh, John (1999). "7. Waiting, waiting, waiting: Packs of cards (2)". Taking Chances: Winning with Probability (1 ed.). Oxford, UK: Oxford University Press Inc. pp. 133–136. ISBN 978-0-19-850291-3. Retrieved 2023-09-06. (4 pages); Haigh, John (2009) [2003]. "7. Waiting, waiting, waiting: Packs of cards (2)". Taking Chances: Winning with Probability (Reprint of 2nd ed.). Oxford, UK: Oxford University Press Inc. pp. 139–142. ISBN 978-0-19-852663-6. Retrieved 2023-09-03. (4 of xiv+373+17 pages) Bean, Gordon (2002). "A Labyrinth in a Labyrinth". In Wolfe, David; Rodgers, Tom (eds.). Puzzlers' Tribute: A Feast for the Mind (1 ed.). CRC Press / Taylor & Francis Group, LLC. pp. 103–106. ISBN 978-1-43986410-4. (xvi+421 pages) Ching, Wai-Ki [at Wikidata]; Lee, Yiu-Fai (September 2005) [2004-05-05]. "A Random Walk on a Circular Path". Miscellany. International Journal of Mathematical Education in Science and Technology. 36 (6). Taylor & Francis, Ltd.: 680–683. doi:10.1080/00207390500064254. eISSN 1464-5211. ISSN 0020-739X. S2CID 121692834. (4 pages) Lee, Yiu-Fai; Ching, Wai-Ki [at Wikidata] (2006-03-07) [2005-09-29]. "On Convergent Probability of a Random Walk" (PDF). Classroom notes. International Journal of Mathematical Education in Science and Technology. 37 (7). Advanced Modeling and Applied Computing Laboratory and Department of Mathematics, The University of Hong Kong, Hong Kong: Taylor & Francis, Ltd.: 833–838. doi:10.1080/00207390600712299. eISSN 1464-5211. ISSN 0020-739X. S2CID 121242696. Archived (PDF) from the original on 2023-09-02. Retrieved 2023-09-02. (6 pages) Humble, Steve "Dr. Maths" (July 2008). "Magic Card Maths". The Montana Mathematics Enthusiast. 5 (2 & 3). Missoula, Montana, US: University of Montana: 327–336. doi:10.54870/1551-3440.1111. ISSN 1551-3440. S2CID 117632058. Article 14. Archived from the original on 2023-09-03. Retrieved 2023-09-02. (1+10 pages) Montenegro, Ravi [at Wikidata]; Tetali, Prasad V. (2010-11-07) [2009-05-31]. How Long Does it Take to Catch a Wild Kangaroo? (PDF). Proceedings of the forty-first annual ACM symposium on Theory of computing (STOC 2009). pp. 553–560. arXiv:0812.0789. doi:10.1145/1536414.1536490. S2CID 12797847. Archived (PDF) from the original on 2023-08-20. Retrieved 2023-08-20. Grime, James [at Wikidata] (2011). "Kruskal's Count" (PDF). singingbanana.com. Archived (PDF) from the original on 2023-08-19. Retrieved 2023-08-19. (8 pages) Bosko, Lindsey R. (2011). Written at Department of Mathematics, North Carolina State University, Raleigh, North Carolina, US. "Cards, Codes, and Kangaroos" (PDF). The UMAP Journal. Modules and Monographs in Undergraduate Mathematics and its Applications (UMAP) Project. 32 (3). 
Bedford, Massachusetts, US: Consortium For Mathematics & Its Applications, Inc. (COMAP): 199–236. UMAP Unit 808. Archived (PDF) from the original on 2023-08-19. Retrieved 2023-08-19. West, Bob [at Wikidata] (2011-05-26). "Wikipedia's fixed point". dlab @ EPFL. Lausanne, Switzerland: Data Science Lab, École Polytechnique Fédérale de Lausanne. Archived from the original on 2022-05-23. Retrieved 2023-09-04. [...] it turns out there is a card trick that works exactly the same way. It's called the "Kruskal Count" [...] Humble, Steve "Dr. Maths" (September 2012) [2012-07-02]. Written at Kraków, Poland. Behrends, Ehrhard [in German] (ed.). "Mathematics in the Streets of Kraków" (PDF). EMS Newsletter. No. 85. Zürich, Switzerland: EMS Publishing House / European Mathematical Society. pp. 20–21 [21]. ISSN 1027-488X. Archived (PDF) from the original on 2023-09-02. Retrieved 2023-09-02. p. 21: [...] The Kruscal count [...] [11] (2 pages) Andriesse, Dennis; Bos, Herbert [at Wikidata] (2014-07-10). Written at Vrije Universiteit Amsterdam, Amsterdam, Netherlands. Dietrich, Sven (ed.). Instruction-Level Steganography for Covert Trigger-Based Malware (PDF). 11th International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA). Lecture Notes in Computer Science. Egham, UK; Switzerland: Springer International Publishing. pp. 41–50 [45]. doi:10.1007/978-3-319-08509-8_3. eISSN 1611-3349. ISBN 978-3-31908508-1. ISSN 0302-9743. S2CID 4634611. LNCS 8550. Archived (PDF) from the original on 2023-08-26. Retrieved 2023-08-26. (10 pages) Montenegro, Ravi [at Wikidata]; Tetali, Prasad V. (2014-09-07). Kruskal's Principle and Collision Time for Monotone Transitive Walks on the Integers (PDF). Archived (PDF) from the original on 2023-08-22. Retrieved 2023-08-22. (18 pages) Kijima, Shuji; Montenegro, Ravi [at Wikidata] (2015-03-15) [2015-03-30/2015-04-01]. Written at Gaithersburg, Maryland, US. Katz, Jonathan (ed.). Collision of Random Walks and a Refined Analysis of Attacks on the Discrete Logarithm Problem (PDF). Proceedings of the 18th IACR International Conference on Practice and Theory in Public-Key Cryptography. Lecture Notes in Computer Science. Berlin & Heidelberg, Germany: International Association for Cryptologic Research / Springer Science+Business Media. pp. 127–149. doi:10.1007/978-3-662-46447-2_6. ISBN 978-3-662-46446-5. LNCS 9020. Archived (PDF) from the original on 2023-09-03. Retrieved 2023-09-03. (23 pages) Jose, Harish (2016-06-14) [2016-06-02]. "PDCA and the Roads to Rome: Can a lean purist and a Six Sigma purist reach the same answer to a problem?". Lean. Archived from the original on 2023-09-07. Retrieved 2023-09-07. [12][13] Lamprecht, Daniel; Dimitrov, Dimitar; Helic, Denis; Strohmaier, Markus (2016-08-17). "Evaluating and Improving Navigability of Wikipedia: A Comparative Study of Eight Language Editions". Proceedings of the 12th International Symposium on Open Collaboration (PDF). OpenSym, Berlin, Germany: Association for Computing Machinery. pp. 1–10. doi:10.1145/2957792.2957813. ISBN 978-1-4503-4451-7. S2CID 13244770. Archived (PDF) from the original on 2023-09-04. Retrieved 2021-03-17. Jämthagen, Christopher (November 2016). On Offensive and Defensive Methods in Software Security (PDF) (Thesis). Lund, Sweden: Department of Electrical and Information Technology, Lund University. p. 96. ISBN 978-91-7623-942-1. ISSN 1654-790X. Archived (PDF) from the original on 2023-08-26. Retrieved 2023-08-26. 
(1+xvii+1+152 pages) Mannam, Pragna; Volkov, Jr., Alexander; Paolini, Robert; Chirikjian, Gregory Scott; Mason, Matthew Thomas (2019-02-06) [2018-12-04]. "Sensorless Pose Determination Using Randomized Action Sequences". Entropy. 21 (2). Basel, Switzerland: Multidisciplinary Digital Publishing Institute: 154. arXiv:1812.01195. Bibcode:2019Entrp..21..154M. doi:10.3390/e21020154. ISSN 1099-4300. PMC 7514636. PMID 33266870. S2CID 54444590. Article 154. p. 2: [...] The phenomenon, while also reminiscent of contraction mapping, is similar to an interesting card trick called the Kruskal Count [...] so we have dubbed the phenomenon as "Kruskal effect". [...] (13 pages) Blackburn, Simon Robert; Esfahani, Navid Nasr; Kreher, Donald Lawson; Stinson, Douglas "Doug" Robert (2023-08-22) [2022-11-18]. "Constructions and bounds for codes with restricted overlaps". IEEE Transactions on Information Theory. arXiv:2211.10309. (17 pages) (NB. This source does not mention Dynkin or Kruskal specifically.)
Kruskal count : Humble, Steve "Dr. Maths" (2010). "Dr. Maths Randomness Show". YouTube (Video). Alchemist Cafe, Dublin, Ireland. Retrieved 2023-09-05. [23:40] "Mathematical Card Trick Source". Close-Up Magic. GeniiForum. 2015–2017. Archived from the original on 2023-09-04. Retrieved 2023-09-05. Behr, Denis, ed. (2023). "Kruskal Principle". Conjuring Archive. Archived from the original on 2023-09-10. Retrieved 2023-09-10.
Entropy rate : In the mathematical theory of probability, the entropy rate or source information rate is a function assigning an entropy to a stochastic process. For a strongly stationary process, the conditional entropy of the latest random variable eventually tends towards this rate value.
Entropy rate : A process X with a countable index gives rise to the sequence of its joint entropies H_n(X_1, X_2, \ldots, X_n). If the limit exists, the entropy rate is defined as H(X) := \lim_{n \to \infty} \tfrac{1}{n} H_n. Note that given any sequence (a_n)_n with a_0 = 0 and letting \Delta a_k := a_k - a_{k-1}, by telescoping one has a_n = \sum_{k=1}^{n} \Delta a_k. The entropy rate thus computes the mean of the first n such entropy changes, with n going to infinity. The behaviour of joint entropies from one index to the next is also explicitly the subject of some characterizations of entropy.
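For a stationary ergodic Markov chain the limit has a closed form, H(X) = -\sum_i \mu_i \sum_j P_{ij} \log P_{ij}, where \mu is the stationary distribution; the article does not derive this here, but the sketch below uses it to compute an entropy rate numerically.

```python
import numpy as np

def markov_entropy_rate(P):
    """Entropy rate (bits per step) of a stationary ergodic Markov chain with
    transition matrix P: H = -sum_i mu_i sum_j P_ij log2 P_ij,
    where mu is the stationary distribution (left eigenvector for eigenvalue 1)."""
    P = np.asarray(P, dtype=float)
    eigvals, eigvecs = np.linalg.eig(P.T)
    mu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    mu = mu / mu.sum()
    safe = np.where(P > 0, P, 1.0)          # zero-probability transitions contribute 0
    return float(-(mu[:, None] * P * np.log2(safe)).sum())

# Two-state chain that stays put with probability 0.7: H ≈ 0.8813 bits per step.
print(markov_entropy_rate([[0.7, 0.3], [0.3, 0.7]]))
```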
Entropy rate : While X may be understood as a sequence of random variables, the entropy rate H ( X ) represents the average entropy change per one random variable, in the long term. It can be thought of as a general property of stochastic sources - this is the subject of the asymptotic equipartition property.
Entropy rate : The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications ranging from characterizing the complexity of languages, blind source separation, through to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning.
Entropy rate : Information source (mathematics) Markov information source Asymptotic equipartition property Maximal entropy random walk - chosen to maximize entropy rate
Entropy rate : Cover, T. and Thomas, J. (1991) Elements of Information Theory, John Wiley and Sons, Inc., ISBN 0-471-06259-6 [1]
Examples of Markov chains : This article contains examples of Markov chains and Markov processes in action. All examples are in the countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space.
Examples of Markov chains : Mark V. Shaney Interacting particle system Stochastic cellular automata
Examples of Markov chains : Monopoly as a Markov chain
Forward algorithm : The forward algorithm, in the context of a hidden Markov model (HMM), is used to calculate a 'belief state': the probability of a state at a certain time, given the history of evidence. The process is also known as filtering. The forward algorithm is closely related to, but distinct from, the Viterbi algorithm.
Forward algorithm : The forward and backward algorithms should be placed within the context of probability as they appear to simply be names given to a set of standard mathematical procedures within a few fields. For example, neither "forward algorithm" nor "Viterbi" appear in the Cambridge encyclopedia of mathematics. The main observation to take away from these algorithms is how to organize Bayesian updates and inference to be computationally efficient in the context of directed graphs of variables (see sum-product networks). For an HMM, this belief-state probability is written as p(x_t | y_{1:t}). Here x(t) is the hidden state, abbreviated as x_t, and y_{1:t} are the observations 1 to t. The backward algorithm complements the forward algorithm by taking into account the future history if one wanted to improve the estimate for past times. This is referred to as smoothing, and the forward/backward algorithm computes p(x_t | y_{1:T}) for 1 < t < T. Thus, the full forward/backward algorithm takes into account all evidence. Note that a belief state can be calculated at each time step, but doing this does not, in a strict sense, produce the most likely state sequence, but rather the most likely state at each time step, given the previous history. In order to achieve the most likely sequence, the Viterbi algorithm is required. It computes the most likely state sequence given the history of observations, that is, the state sequence that maximizes p(x_{0:t} | y_{0:t}).
Forward algorithm : The goal of the forward algorithm is to compute the joint probability p(x_t, y_{1:t}), where for notational convenience we have abbreviated x(t) as x_t and (y(1), y(2), ..., y(t)) as y_{1:t}. Once the joint probability p(x_t, y_{1:t}) is computed, the other probabilities p(x_t | y_{1:t}) and p(y_{1:t}) are easily obtained. Both the state x_t and observation y_t are assumed to be discrete, finite random variables. The hidden Markov model's state transition probabilities p(x_t | x_{t-1}), observation/emission probabilities p(y_t | x_t), and initial prior probability p(x_0) are assumed to be known. Furthermore, the sequence of observations y_{1:t} is assumed to be given. Computing p(x_t, y_{1:t}) naively would require marginalizing over all possible state sequences \{x_{1:t-1}\}, the number of which grows exponentially with t. Instead, the forward algorithm takes advantage of the conditional independence rules of the hidden Markov model (HMM) to perform the calculation recursively. To demonstrate the recursion, let \alpha(x_t) = p(x_t, y_{1:t}) = \sum_{x_{t-1}} p(x_t, x_{t-1}, y_{1:t}). Using the chain rule to expand p(x_t, x_{t-1}, y_{1:t}), we can then write \alpha(x_t) = \sum_{x_{t-1}} p(y_t | x_t, x_{t-1}, y_{1:t-1})\, p(x_t | x_{t-1}, y_{1:t-1})\, p(x_{t-1}, y_{1:t-1}). Because y_t is conditionally independent of everything but x_t, and x_t is conditionally independent of everything but x_{t-1}, this simplifies to \alpha(x_t) = p(y_t | x_t) \sum_{x_{t-1}} p(x_t | x_{t-1})\, \alpha(x_{t-1}). Thus, since p(y_t | x_t) and p(x_t | x_{t-1}) are given by the model's emission distributions and transition probabilities, which are assumed to be known, one can quickly calculate \alpha(x_t) from \alpha(x_{t-1}) and avoid incurring exponential computation time. The recursion formula given above can be written in a more compact form. Let a_{ij} = p(x_t = i | x_{t-1} = j) be the transition probabilities and b_{ij} = p(y_t = i | x_t = j) be the emission probabilities; then \alpha_t = \mathbf{b}_t^\mathsf{T} \odot A \alpha_{t-1}, where A = [a_{ij}] is the transition probability matrix, \mathbf{b}_t is the i-th row of the emission probability matrix B = [b_{ij}] which corresponds to the actual observation y_t = i at time t, and \alpha_t = [\alpha(x_t = 1), \ldots, \alpha(x_t = n)]^\mathsf{T} is the alpha vector. The \odot is the Hadamard product between the transpose of \mathbf{b}_t and A \alpha_{t-1}. The initial condition is set in accordance with the prior probability over x_0 as \alpha(x_0) = p(y_0 | x_0)\, p(x_0). Once the joint probability \alpha(x_t) = p(x_t, y_{1:t}) has been computed using the forward algorithm, we can easily obtain the related joint probability p(y_{1:t}) as p(y_{1:t}) = \sum_{x_t} p(x_t, y_{1:t}) = \sum_{x_t} \alpha(x_t) and the required conditional probability p(x_t | y_{1:t}) as p(x_t | y_{1:t}) = \frac{p(x_t, y_{1:t})}{p(y_{1:t})} = \frac{\alpha(x_t)}{\sum_{x_t} \alpha(x_t)}. Once the conditional probability has been calculated, we can also find the point estimate of x_t.
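The recursion is short enough to implement directly. Below is a sketch in NumPy; note it uses the row-stochastic convention A[i, j] = p(x_t = j | x_{t-1} = i) rather than the column convention of the compact form above, and the numeric matrices are illustrative placeholders only.

```python
import numpy as np

def forward(A, B, prior, obs):
    """Forward algorithm: alpha[k, i] = p(x_{k+1} = i, y_{1:k+1}) for 0-based k.

    A[i, j]  = p(x_t = j | x_{t-1} = i)   (transition matrix, rows sum to 1)
    B[i, k]  = p(y_t = k | x_t = i)       (emission matrix)
    prior[i] = p(x_0 = i)
    obs      = observed symbols y_1, ..., y_T as integer indices
    """
    alpha = np.zeros((len(obs), len(prior)))
    alpha[0] = (prior @ A) * B[:, obs[0]]                 # alpha(x_1)
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]      # alpha(x_t) = p(y_t|x_t) * sum_x p(x_t|x) alpha(x)
    return alpha

A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
alpha = forward(A, B, np.array([0.5, 0.5]), [0, 0, 1])
print(alpha[-1].sum())                   # p(y_{1:T})
print(alpha[-1] / alpha[-1].sum())       # filtered posterior p(x_T | y_{1:T}); its argmax is the MAP estimate
```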
For instance, the MAP estimate of x_t is given by \hat{x}_t^{MAP} = \arg\max_{x_t} p(x_t | y_{1:t}) = \arg\max_{x_t} \alpha(x_t), while the MMSE estimate of x_t is given by \hat{x}_t^{MMSE} = \mathbb{E}[x_t | y_{1:t}] = \sum_{x_t} x_t\, p(x_t | y_{1:t}) = \frac{\sum_{x_t} x_t\, \alpha(x_t)}{\sum_{x_t} \alpha(x_t)}. The forward algorithm is easily modified to account for observations from variants of the hidden Markov model as well, such as the Markov jump linear system.
Forward algorithm : This example considers inferring the possible states of the weather from the observed condition of seaweed. We have observations of seaweed for three consecutive days as dry, damp, and soggy, in that order. The possible states of the weather can be sunny, cloudy, or rainy. In total, there can be 3^3 = 27 such weather sequences. Exploring all such possible state sequences is computationally very expensive. To reduce this complexity, the forward algorithm comes in handy; the trick lies in using the conditional independence of the sequence steps to calculate partial probabilities, \alpha(x_t) = p(x_t, y_{1:t}) = p(y_t | x_t) \sum_{x_{t-1}} p(x_t | x_{t-1})\, \alpha(x_{t-1}), as shown in the above derivation. Hence, we can calculate the probabilities as the product of the appropriate observation/emission probability p(y_t | x_t) (the probability of observing y_t at time t when the hidden state is x_t) with the sum of probabilities of reaching that state at time t, calculated using the transition probabilities. This reduces the complexity of the problem from searching the whole state-sequence space to just using the previously computed \alpha's and the transition probabilities.
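A concrete run of this scenario looks as follows; all numbers are hypothetical placeholders (the text fixes only the state and observation names), reusing the same recursion as the sketch above.

```python
import numpy as np

# Hypothetical numbers for the seaweed example: states (sunny, cloudy, rainy),
# observations (dry, damp, soggy). Only the structure comes from the text.
A = np.array([[0.6, 0.3, 0.1],    # p(next weather | sunny)
              [0.3, 0.4, 0.3],    # p(next weather | cloudy)
              [0.2, 0.3, 0.5]])   # p(next weather | rainy)
B = np.array([[0.7, 0.2, 0.1],    # p(dry, damp, soggy | sunny)
              [0.3, 0.5, 0.2],    # p(dry, damp, soggy | cloudy)
              [0.1, 0.3, 0.6]])   # p(dry, damp, soggy | rainy)

obs = [0, 1, 2]                                    # dry, damp, soggy on three consecutive days
alpha = np.array([1/3, 1/3, 1/3]) * B[:, obs[0]]   # uniform prior taken directly over day 1's weather
for y in obs[1:]:
    alpha = (alpha @ A) * B[:, y]                  # alpha(x_t) = p(y_t|x_t) sum_x p(x_t|x) alpha(x)

print(alpha / alpha.sum())                         # p(weather on day 3 | dry, damp, soggy)
```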
Forward algorithm : The complexity of the forward algorithm is \Theta(n m^2), where m is the number of hidden (latent) states, like the weather states in the example above, and n is the length of the observed sequence. This is a clear reduction from the ad hoc method of exploring all possible state sequences, which has complexity \Theta(n m^n).
Forward algorithm : Hybrid Forward Algorithm: A variant of the forward algorithm called the hybrid forward algorithm (HFA) can be used for the construction of radial basis function (RBF) neural networks with tunable nodes. The RBF neural network is constructed by the conventional subset selection algorithms. The network structure is determined by combining both the stepwise forward network configuration and continuous RBF parameter optimization. It is used to efficiently and effectively produce a parsimonious RBF neural network that generalizes well. This is achieved through simultaneous network structure determination and parameter optimization on the continuous parameter space. HFA tackles the hard mixed-integer problem using an integrated analytic framework, leading to improved network performance and reduced memory usage for the network construction. Forward Algorithm for Optimal Control in Hybrid Systems: This variant of the forward algorithm is motivated by the structure of manufacturing environments that integrate process and operations control. A new property of the optimal state trajectory structure, which holds under a modified condition on the cost function, allows a low-complexity, scalable algorithm for explicitly determining the optimal controls, which can be more efficient than the standard forward algorithm. Continuous Forward Algorithm: A continuous forward algorithm (CFA) can be used for nonlinear modelling and identification using radial basis function (RBF) neural networks. The proposed algorithm performs the two tasks of network construction and parameter optimization within an integrated analytic framework, and offers two important advantages. First, the model performance can be significantly improved through continuous parameter optimization. Secondly, the neural representation can be built without generating and storing all candidate regressors, leading to significantly reduced memory usage and computational complexity.
Forward algorithm : The forward algorithm is one of the algorithms used to solve the decoding problem. Since the development of speech recognition and pattern recognition and related fields like computational biology which use HMMs, the forward algorithm has gained popularity.
Forward algorithm : The forward algorithm is mostly used in applications that need to determine the probability of being in a specific state given a sequence of observations. The algorithm can be applied wherever we can train a model as we receive data, using Baum–Welch or any general EM algorithm. The forward algorithm will then tell us about the probability of the data with respect to what is expected from our model. One of its applications is in the domain of finance, where it can help decide when to buy or sell tangible assets. It can have applications in all fields where we apply hidden Markov models. Popular ones include natural language processing domains like part-of-speech tagging and speech recognition. Recently it has also been used in the domain of bioinformatics. The forward algorithm can also be applied to perform weather predictions. We can have an HMM describing the weather and its relation to the state of observations for a few consecutive days (some examples could be dry, damp, soggy, sunny, cloudy, rainy, etc.). We can consider calculating the probability of observing any sequence of observations recursively given the HMM. We can then calculate the probability of reaching an intermediate state as the sum of all possible paths to that state. Thus the partial probabilities for the final observation will hold the probability of reaching those states going through all possible paths.
Forward algorithm : Viterbi algorithm Forward-backward algorithm Baum–Welch algorithm
Forward algorithm : Russell and Norvig's Artificial Intelligence, a Modern Approach, starting on page 570 of the 2010 edition, provides a succinct exposition of this and related topics Smyth, Padhraic, David Heckerman, and Michael I. Jordan. "Probabilistic independence networks for hidden Markov probability models." Neural computation 9.2 (1997): 227-269. [1] Read, Jonathon. "Hidden Markov Models and Dynamic Programming." University of Oslo (2011). [2] Kohlschein, Christian, An introduction to Hidden Markov Models [3] Manganiello, Fabio, Mirco Marchetti, and Michele Colajanni. Multistep attack detection and alert correlation in intrusion detection systems. Information Security and Assurance. Springer Berlin Heidelberg, 2011. 101-110. [4] Zhang, Ping, and Christos G. Cassandras. "An improved forward algorithm for optimal control of a class of hybrid systems." Automatic Control, IEEE Transactions on 47.10 (2002): 1735-1739. Stratonovich, R. L. "Conditional markov processes". Theory of Probability & Its Applications 5, no. 2 (1960): 156178.
Forward algorithm : Hidden Markov Model R-package contains functionality for computing and retrieving the forward procedure. momentuHMM R-package provides tools for using and inferring HMMs. GHMM Library for Python. The hmm package, a Haskell library for HMMs, implements the forward algorithm. Library for Java contains machine learning and artificial intelligence algorithm implementations.
Forward–backward algorithm : The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions o_{1:T} := o_1, \ldots, o_T, i.e. it computes, for all hidden state variables X_t \in \{X_1, \ldots, X_T\}, the distribution P(X_t | o_{1:T}). This inference task is usually called smoothing. The algorithm makes use of the principle of dynamic programming to efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the name forward–backward algorithm. The term forward–backward algorithm is also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class.
Forward–backward algorithm : In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all t \in \{1, \ldots, T\}, the probability of ending up in any particular state given the first t observations in the sequence, i.e. P(X_t | o_{1:t}). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point t, i.e. P(o_{t+1:T} | X_t). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: P(X_t | o_{1:T}) = P(X_t | o_{1:t}, o_{t+1:T}) \propto P(o_{t+1:T} | X_t)\, P(X_t | o_{1:t}). The last step follows from an application of Bayes' rule and the conditional independence of o_{t+1:T} and o_{1:t} given X_t. As outlined above, the algorithm involves three steps: computing forward probabilities, computing backward probabilities, and computing smoothed values. The forward and backward steps may also be called "forward message pass" and "backward message pass" - these terms are due to the message passing used in general belief propagation approaches. At each single observation in the sequence, probabilities to be used for calculations at the next observation are computed. The smoothing step can be calculated simultaneously during the backward pass. This step allows the algorithm to take into account any past observations of output for computing more accurate results. The forward–backward algorithm can be used to find the most likely state for any point in time. It cannot, however, be used to find the most likely sequence of states (see Viterbi algorithm).
Forward–backward algorithm : The following description will use matrices of probability values rather than probability distributions, although in general the forward–backward algorithm can be applied to continuous as well as discrete probability models. We transform the probability distributions related to a given hidden Markov model into matrix notation as follows. The transition probabilities P(X_t \mid X_{t-1}) of a given random variable X_t representing all possible states in the hidden Markov model will be represented by the matrix T, where the column index j will represent the target state and the row index i represents the start state. A transition from row-vector state \pi_t to the incremental row-vector state \pi_{t+1} is written as \pi_{t+1} = \pi_t T. The example below represents a system where the probability of staying in the same state after each step is 70% and the probability of transitioning to the other state is 30%. The transition matrix is then: T = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}. In a typical Markov model, we would multiply a state vector by this matrix to obtain the probabilities for the subsequent state. In a hidden Markov model the state is unknown, and we instead observe events associated with the possible states. An event matrix of the form B = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix} provides the probabilities for observing events given a particular state. In the above example, event 1 will be observed 90% of the time if we are in state 1 while event 2 has a 10% probability of occurring in this state. In contrast, event 1 will only be observed 20% of the time if we are in state 2 and event 2 has an 80% chance of occurring. Given an arbitrary row-vector describing the state of the system (\pi), the probability of observing event j is then: P(O = j) = \sum_i \pi_i B_{i,j}. The probability of a given state leading to the observed event j can be represented in matrix form by multiplying the state row-vector (\pi) with an observation matrix (O_j = \mathrm{diag}(B_{*,o_j})) containing only diagonal entries. Continuing the above example, the observation matrix for event 1 would be: O_1 = \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}. This allows us to calculate the new unnormalized probability state vector \pi' through Bayes' rule, weighting by the likelihood that each element of \pi generated event 1, as: \pi' = \pi O_1. We can now make this general procedure specific to our series of observations. Assuming an initial state vector \pi_0 (which can be optimized as a parameter through repetitions of the forward–backward procedure), we begin with f_{0:0} = \pi_0, then update the state distribution and weight by the likelihood of the first observation: f_{0:1} = \pi_0 T O_{o_1}. This process can be carried forward with additional observations using: f_{0:t} = f_{0:t-1} T O_{o_t}. This value is the forward unnormalized probability vector. The i-th entry of this vector provides: f_{0:t}(i) = P(o_1, o_2, \ldots, o_t, X_t = x_i \mid \pi_0). Typically, we will normalize the probability vector at each step so that its entries sum to 1.
A scaling factor is thus introduced at each step such that: \hat{f}_{0:t} = c_t^{-1}\, \hat{f}_{0:t-1}\, T\, O_{o_t}, where \hat{f}_{0:t-1} represents the scaled vector from the previous step and c_t represents the scaling factor that causes the resulting vector's entries to sum to 1. The product of the scaling factors is the total probability for observing the given events irrespective of the final states: P(o_1, o_2, \ldots, o_t \mid \pi_0) = \prod_{s=1}^{t} c_s. This allows us to interpret the scaled probability vector as: \hat{f}_{0:t}(i) = \frac{f_{0:t}(i)}{\prod_{s=1}^{t} c_s} = \frac{P(o_1, o_2, \ldots, o_t, X_t = x_i \mid \pi_0)}{P(o_1, o_2, \ldots, o_t \mid \pi_0)} = P(X_t = x_i \mid o_1, o_2, \ldots, o_t, \pi_0). We thus find that the product of the scaling factors provides us with the total probability for observing the given sequence up to time t and that the scaled probability vector provides us with the probability of being in each state at this time.
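The scaled forward pass is straightforward to code in the row-vector notation used here; the sketch below is illustrative, with the matrices taken from the two-state example above.

```python
import numpy as np

def scaled_forward(T, B, prior, obs):
    """Scaled forward pass. Returns (f_hat, c): f_hat[k] = P(X_{k+1} = . | o_1, ..., o_{k+1})
    for 0-based k, and prod(c[:k+1]) = P(o_1, ..., o_{k+1})."""
    f = np.asarray(prior, dtype=float)       # f_{0:0} = pi_0
    f_hat, c = [], []
    for o in obs:
        f = f @ T @ np.diag(B[:, o])         # f_{0:t} = f_{0:t-1} T O_{o_t}
        c.append(f.sum())                    # scaling factor c_t
        f = f / c[-1]
        f_hat.append(f)
    return np.array(f_hat), np.array(c)

T = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
f_hat, c = scaled_forward(T, B, [0.5, 0.5], [0, 0, 1, 0, 0])
print(np.round(f_hat[0], 4))     # [0.8182 0.1818], matching the worked example below
print(np.prod(c))                # total probability of the five observations
```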
Forward–backward algorithm : A similar procedure can be constructed to find backward probabilities. These intend to provide the probabilities: b_{t:T}(i) = P(o_{t+1}, o_{t+2}, \ldots, o_T \mid X_t = x_i). That is, we now want to assume that we start in a particular state (X_t = x_i), and we are now interested in the probability of observing all future events from this state. Since the initial state is assumed as given (i.e. the prior probability of this state = 100%), we begin with: b_{T:T} = [1\ 1\ 1\ \ldots]^\mathsf{T}. Notice that we are now using a column vector while the forward probabilities used row vectors. We can then work backwards using: b_{t-1:T} = T O_t b_{t:T}. While we could normalize this vector as well so that its entries sum to one, this is not usually done. Noting that each entry contains the probability of the future event sequence given a particular initial state, normalizing this vector would be equivalent to applying Bayes' theorem to find the likelihood of each initial state given the future events (assuming uniform priors for the final state vector). However, it is more common to scale this vector using the same c_t constants used in the forward probability calculations. b_{T:T} is not scaled, but subsequent operations use: \hat{b}_{t-1:T} = c_t^{-1} T O_t \hat{b}_{t:T}, where \hat{b}_{t:T} represents the previous, scaled vector. The result is that the scaled probability vector is related to the backward probabilities by: \hat{b}_{t:T}(i) = \frac{b_{t:T}(i)}{\prod_{s=t+1}^{T} c_s}. This is useful because it allows us to find the total probability of being in each state at a given time, t, by multiplying these values: \gamma_t(i) = P(X_t = x_i \mid o_1, o_2, \ldots, o_T, \pi_0) = \frac{P(o_1, o_2, \ldots, o_T, X_t = x_i \mid \pi_0)}{P(o_1, o_2, \ldots, o_T \mid \pi_0)} = \frac{f_{0:t}(i) \cdot b_{t:T}(i)}{\prod_{s=1}^{T} c_s} = \hat{f}_{0:t}(i) \cdot \hat{b}_{t:T}(i). To understand this, we note that f_{0:t}(i) \cdot b_{t:T}(i) provides the probability for observing the given events in a way that passes through state x_i at time t. This probability includes the forward probabilities covering all events up to time t as well as the backward probabilities which include all future events. This is the numerator we are looking for in our equation, and we divide by the total probability of the observation sequence to normalize this value and extract only the probability that X_t = x_i. These values are sometimes called the "smoothed values" as they combine the forward and backward probabilities to compute a final probability. The values \gamma_t(i) thus provide the probability of being in each state at time t. As such, they are useful for determining the most probable state at any time. The term "most probable state" is somewhat ambiguous. While the most probable state is the most likely to be correct at a given point, the sequence of individually probable states is not likely to be the most probable sequence. This is because the probabilities for each point are calculated independently of each other.
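A matching backward pass can be sketched as follows; it reuses the scaling constants c returned by the forward sketch above, so the smoothed posterior at time t is simply the elementwise product of the two scaled vectors.

```python
import numpy as np

def scaled_backward(T, B, obs, c):
    """Scaled backward pass. Returns a list b_hat with b_hat[t] proportional to
    P(o_{t+1}, ..., o_T | X_t = x_i), scaled with the forward constants c so that
    gamma_t = f_hat[t-1] * b_hat[t] (elementwise) gives the smoothed probabilities."""
    T, B = np.asarray(T, float), np.asarray(B, float)
    b = np.ones(T.shape[0])                          # b_{T:T} = (1, 1, ..., 1)^T, not scaled
    b_hat = [b]
    for t in range(len(obs) - 1, -1, -1):            # observations o_T, ..., o_1 in reverse
        b = (T @ np.diag(B[:, obs[t]]) @ b) / c[t]   # b_{t-1:T} = c_t^{-1} T O_{o_t} b_{t:T}, scaled
        b_hat.append(b)
    return b_hat[::-1]                               # index t gives b_hat_{t:T}, t = 0..T

# Usage with the forward sketch above:
#   f_hat, c = scaled_forward(T, B, prior, obs)
#   b_hat = scaled_backward(T, B, obs, c)
#   gamma_t = f_hat[t - 1] * b_hat[t]                # smoothed P(X_t | o_1..o_T) for t = 1..T
```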
They do not take into account the transition probabilities between states, and it is thus possible to get states at two moments (t and t+1) that are both most probable at those time points but which have very little probability of occurring together, i.e. P(X_t = x_i, X_{t+1} = x_j) \neq P(X_t = x_i)\, P(X_{t+1} = x_j). The most probable sequence of states that produced an observation sequence can be found using the Viterbi algorithm.
Forward–backward algorithm : This example takes as its basis the umbrella world in Russell & Norvig 2010 Chapter 15 pp. 567 in which we would like to infer the weather given observation of another person either carrying or not carrying an umbrella. We assume two possible states for the weather: state 1 = rain, state 2 = no rain. We assume that the weather has a 70% chance of staying the same each day and a 30% chance of changing. The transition probabilities are then:

$\mathbf{T} = \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix}$

We also assume each state generates one of two possible events: event 1 = umbrella, event 2 = no umbrella. The conditional probabilities for these occurring in each state are given by the probability matrix:

$\mathbf{B} = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$

We then observe the following sequence of events: umbrella, umbrella, no umbrella, umbrella, umbrella, which we will represent in our calculations as:

$\mathbf{O}_1 = \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \quad \mathbf{O}_2 = \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \quad \mathbf{O}_3 = \begin{pmatrix} 0.1 & 0.0 \\ 0.0 & 0.8 \end{pmatrix} \quad \mathbf{O}_4 = \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \quad \mathbf{O}_5 = \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix}$

Note that $\mathbf{O}_3$ differs from the others because of the "no umbrella" observation. In computing the forward probabilities we begin with:

$\mathbf{f}_{0:0} = \begin{pmatrix} 0.5 & 0.5 \end{pmatrix}$

which is our prior state vector indicating that we don't know which state the weather is in before our observations. While a state vector should be given as a row vector, we will use the transpose of the matrix so that the calculations below are easier to read. Our calculations are then written in the form:

$(\hat{\mathbf{f}}_{0:t})^T = c_t^{-1} \mathbf{O}_t (\mathbf{T})^T (\hat{\mathbf{f}}_{0:t-1})^T$

instead of:

$\hat{\mathbf{f}}_{0:t} = c_t^{-1} \hat{\mathbf{f}}_{0:t-1} \mathbf{T} \mathbf{O}_t$

Notice that the transformation matrix is also transposed, but in our example the transpose is equal to the original matrix.
Performing these calculations and normalizing the results provides:

$(\hat{\mathbf{f}}_{0:1})^T = c_1^{-1} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.5000 \\ 0.5000 \end{pmatrix} = c_1^{-1} \begin{pmatrix} 0.4500 \\ 0.1000 \end{pmatrix} = \begin{pmatrix} 0.8182 \\ 0.1818 \end{pmatrix}$

$(\hat{\mathbf{f}}_{0:2})^T = c_2^{-1} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.8182 \\ 0.1818 \end{pmatrix} = c_2^{-1} \begin{pmatrix} 0.5645 \\ 0.0745 \end{pmatrix} = \begin{pmatrix} 0.8834 \\ 0.1166 \end{pmatrix}$

$(\hat{\mathbf{f}}_{0:3})^T = c_3^{-1} \begin{pmatrix} 0.1 & 0.0 \\ 0.0 & 0.8 \end{pmatrix} \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.8834 \\ 0.1166 \end{pmatrix} = c_3^{-1} \begin{pmatrix} 0.0653 \\ 0.2772 \end{pmatrix} = \begin{pmatrix} 0.1907 \\ 0.8093 \end{pmatrix}$

$(\hat{\mathbf{f}}_{0:4})^T = c_4^{-1} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.1907 \\ 0.8093 \end{pmatrix} = c_4^{-1} \begin{pmatrix} 0.3386 \\ 0.1247 \end{pmatrix} = \begin{pmatrix} 0.7308 \\ 0.2692 \end{pmatrix}$

$(\hat{\mathbf{f}}_{0:5})^T = c_5^{-1} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.7308 \\ 0.2692 \end{pmatrix} = c_5^{-1} \begin{pmatrix} 0.5331 \\ 0.0815 \end{pmatrix} = \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix}$

For the backward probabilities, we start with:

$\mathbf{b}_{5:5} = \begin{pmatrix} 1.0 \\ 1.0 \end{pmatrix}$

We are then able to compute (using the observations in reverse order and normalizing with different constants):

$\hat{\mathbf{b}}_{4:5} = \alpha \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 1.0000 \\ 1.0000 \end{pmatrix} = \alpha \begin{pmatrix} 0.6900 \\ 0.4100 \end{pmatrix} = \begin{pmatrix} 0.6273 \\ 0.3727 \end{pmatrix}$

$\hat{\mathbf{b}}_{3:5} = \alpha \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 0.6273 \\ 0.3727 \end{pmatrix} = \alpha \begin{pmatrix} 0.4175 \\ 0.2215 \end{pmatrix} = \begin{pmatrix} 0.6533 \\ 0.3467 \end{pmatrix}$

$\hat{\mathbf{b}}_{2:5} = \alpha \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.1 & 0.0 \\ 0.0 & 0.8 \end{pmatrix} \begin{pmatrix} 0.6533 \\ 0.3467 \end{pmatrix} = \alpha \begin{pmatrix} 0.1289 \\ 0.2138 \end{pmatrix} = \begin{pmatrix} 0.3763 \\ 0.6237 \end{pmatrix}$

$\hat{\mathbf{b}}_{1:5} = \alpha \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 0.3763 \\ 0.6237 \end{pmatrix} = \alpha \begin{pmatrix} 0.2745 \\ 0.1889 \end{pmatrix} = \begin{pmatrix} 0.5923 \\ 0.4077 \end{pmatrix}$

$\hat{\mathbf{b}}_{0:5} = \alpha \begin{pmatrix} 0.7 & 0.3 \\ 0.3 & 0.7 \end{pmatrix} \begin{pmatrix} 0.9 & 0.0 \\ 0.0 & 0.2 \end{pmatrix} \begin{pmatrix} 0.5923 \\ 0.4077 \end{pmatrix} = \alpha \begin{pmatrix} 0.3976 \\ 0.2170 \end{pmatrix} = \begin{pmatrix} 0.6469 \\ 0.3531 \end{pmatrix}$

Finally, we will compute the smoothed probability values. These results must also be scaled so that their entries sum to 1, because we did not scale the backward probabilities with the $c_t$'s found earlier. The backward probability vectors above thus actually represent the likelihood of each state at time t given the future observations. Because these vectors are proportional to the actual backward probabilities, the result has to be scaled an additional time.
$(\gamma_0)^T = \alpha \begin{pmatrix} 0.5000 \\ 0.5000 \end{pmatrix} \circ \begin{pmatrix} 0.6469 \\ 0.3531 \end{pmatrix} = \alpha \begin{pmatrix} 0.3235 \\ 0.1765 \end{pmatrix} = \begin{pmatrix} 0.6469 \\ 0.3531 \end{pmatrix}$

$(\gamma_1)^T = \alpha \begin{pmatrix} 0.8182 \\ 0.1818 \end{pmatrix} \circ \begin{pmatrix} 0.5923 \\ 0.4077 \end{pmatrix} = \alpha \begin{pmatrix} 0.4846 \\ 0.0741 \end{pmatrix} = \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix}$

$(\gamma_2)^T = \alpha \begin{pmatrix} 0.8834 \\ 0.1166 \end{pmatrix} \circ \begin{pmatrix} 0.3763 \\ 0.6237 \end{pmatrix} = \alpha \begin{pmatrix} 0.3324 \\ 0.0728 \end{pmatrix} = \begin{pmatrix} 0.8204 \\ 0.1796 \end{pmatrix}$

$(\gamma_3)^T = \alpha \begin{pmatrix} 0.1907 \\ 0.8093 \end{pmatrix} \circ \begin{pmatrix} 0.6533 \\ 0.3467 \end{pmatrix} = \alpha \begin{pmatrix} 0.1246 \\ 0.2806 \end{pmatrix} = \begin{pmatrix} 0.3075 \\ 0.6925 \end{pmatrix}$

$(\gamma_4)^T = \alpha \begin{pmatrix} 0.7308 \\ 0.2692 \end{pmatrix} \circ \begin{pmatrix} 0.6273 \\ 0.3727 \end{pmatrix} = \alpha \begin{pmatrix} 0.4584 \\ 0.1003 \end{pmatrix} = \begin{pmatrix} 0.8204 \\ 0.1796 \end{pmatrix}$

$(\gamma_5)^T = \alpha \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix} \circ \begin{pmatrix} 1.0000 \\ 1.0000 \end{pmatrix} = \alpha \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix} = \begin{pmatrix} 0.8673 \\ 0.1327 \end{pmatrix}$

Notice that the value of $\gamma_0$ is equal to $\hat{\mathbf{b}}_{0:5}$ and that $\gamma_5$ is equal to $\hat{\mathbf{f}}_{0:5}$. This follows naturally because both $\hat{\mathbf{f}}_{0:5}$ and $\hat{\mathbf{b}}_{0:5}$ begin with uniform priors over the initial and final state vectors (respectively) and take into account all of the observations. However, $\gamma_0$ will only be equal to $\hat{\mathbf{b}}_{0:5}$ when our initial state vector represents a uniform prior (i.e. all entries are equal). When this is not the case $\hat{\mathbf{b}}_{0:5}$ needs to be combined with the initial state vector to find the most likely initial state. We thus find that the forward probabilities by themselves are sufficient to calculate the most likely final state. Similarly, the backward probabilities can be combined with the initial state vector to provide the most probable initial state given the observations. The forward and backward probabilities need only be combined to infer the most probable states between the initial and final points. The calculations above reveal that the most probable weather state on every day except for the third one was "rain". They tell us more than this, however, as they now provide a way to quantify the probabilities of each state at different times. Perhaps most importantly, our value at $\gamma_5$ quantifies our knowledge of the state vector at the end of the observation sequence. We can then use this to predict the probability of the various weather states tomorrow as well as the probability of observing an umbrella.
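As an illustration, a minimal Python sketch using NumPy that reproduces the scaled forward vectors, the scaled backward vectors and the smoothed values could look as follows; the variable names T, B and observations simply mirror the notation above and the sketch is an illustrative aid rather than part of the worked example itself.

import numpy as np

# Transition and emission matrices from the umbrella world
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Observation sequence: 0 = umbrella, 1 = no umbrella
observations = [0, 0, 1, 0, 0]
prior = np.array([0.5, 0.5])

# Scaled forward pass: f_hat[t] is the normalized forward vector after observation t
f_hat = [prior]
for obs in observations:
    O = np.diag(B[:, obs])           # observation matrix O_t
    f = O @ T.T @ f_hat[-1]          # unnormalized update
    f_hat.append(f / f.sum())        # normalization supplies 1/c_t

# Scaled backward pass, computed in reverse order
b_hat = [np.ones(2)]
for obs in reversed(observations):
    O = np.diag(B[:, obs])
    b = T @ O @ b_hat[0]
    b_hat.insert(0, b / b.sum())     # normalized with its own constant

# Smoothed values: elementwise product, renormalized
for t, (f, b) in enumerate(zip(f_hat, b_hat)):
    gamma = f * b
    print(t, gamma / gamma.sum())    # matches the gamma vectors above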
Forward–backward algorithm : The forward–backward algorithm runs with time complexity $O(S^2 T)$ in space $O(S T)$, where T is the length of the time sequence and S is the number of symbols in the state alphabet. The algorithm can also run in constant space with time complexity $O(S^2 T^2)$ by recomputing values at each step. For comparison, a brute-force procedure would generate all possible $S^T$ state sequences and calculate the joint probability of each state sequence with the observed series of events, which would have time complexity $O(T \cdot S^T)$. Brute force is intractable for realistic problems, as the number of possible hidden node sequences typically is extremely high. An enhancement to the general forward–backward algorithm, called the Island algorithm, trades smaller memory usage for longer running time, taking $O(S^2 T \log T)$ time and $O(S \log T)$ memory. Furthermore, it is possible to invert the process model to obtain an $O(S)$ space, $O(S^2 T)$ time algorithm, although the inverted process may not exist or be ill-conditioned. In addition, algorithms have been developed to compute $f_{0:t+1}$ efficiently through online smoothing such as the fixed-lag smoothing (FLS) algorithm.
Forward–backward algorithm : In pseudocode, the (memoized) backward computation can be written as:

algorithm forward_backward is
    input: guessState
           int sequenceIndex
    output: result

    if sequenceIndex is past the end of the sequence then
        return 1
    if (guessState, sequenceIndex) has been seen before then
        return saved result
    result := 0
    for each neighboring state n:
        result := result + (transition probability from guessState to n
                            given observation element at sequenceIndex)
                           × Backward(n, sequenceIndex + 1)
    save result for (guessState, sequenceIndex)
    return result
Forward–backward algorithm : Given an HMM (just like in the Viterbi algorithm) represented in the Python programming language, we can write the implementation of the forward–backward algorithm like this. The function fwd_bkw takes the following arguments: x is the sequence of observations, e.g. ['normal', 'cold', 'dizzy']; states is the set of hidden states; a_0 is the start probability; a are the transition probabilities; and e are the emission probabilities. For simplicity of code, we assume that the observation sequence x is non-empty and that a[i][j] and e[i][j] are defined for all states i, j. In the running example, the forward–backward algorithm is used as follows:
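The original listings are not reproduced above; a minimal sketch of such an fwd_bkw function, together with an illustrative two-state HMM (the 'Healthy'/'Fever' states and the probability values below are assumptions chosen for the example), could look as follows.

def fwd_bkw(x, states, a_0, a, e):
    # Unscaled forward-backward pass; returns forward, backward and posterior values.
    # Forward part of the algorithm
    fwd = []
    f_prev = {}
    for i, obs in enumerate(x):
        f_curr = {}
        for st in states:
            if i == 0:
                prev_sum = a_0[st]                       # base case for the forward part
            else:
                prev_sum = sum(f_prev[k] * a[k][st] for k in states)
            f_curr[st] = e[st][obs] * prev_sum
        fwd.append(f_curr)
        f_prev = f_curr
    p_fwd = sum(f_curr[k] for k in states)               # total probability of the observations

    # Backward part of the algorithm
    bkw = []
    b_prev = {}
    for i, obs_plus in enumerate(reversed(list(x[1:]) + [None])):
        b_curr = {}
        for st in states:
            if i == 0:
                b_curr[st] = 1.0                         # base case for the backward part
            else:
                b_curr[st] = sum(a[st][k] * e[k][obs_plus] * b_prev[k] for k in states)
        bkw.insert(0, b_curr)
        b_prev = b_curr

    # Merging the two parts: smoothed (posterior) probabilities
    posterior = [{st: fwd[i][st] * bkw[i][st] / p_fwd for st in states} for i in range(len(x))]
    return fwd, bkw, posterior

# Illustrative HMM (assumed values, not taken from this article)
states = ('Healthy', 'Fever')
a_0 = {'Healthy': 0.6, 'Fever': 0.4}
a = {'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
     'Fever':   {'Healthy': 0.4, 'Fever': 0.6}}
e = {'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
     'Fever':   {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}}

fwd, bkw, posterior = fwd_bkw(['normal', 'cold', 'dizzy'], states, a_0, a, e)
print(posterior)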
Forward–backward algorithm : Baum–Welch algorithm Viterbi algorithm BCJR algorithm
Forward–backward algorithm : Lawrence R. Rabiner, A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77 (2), p. 257–286, February 1989. 10.1109/5.18626 Lawrence R. Rabiner, B. H. Juang (January 1986). "An introduction to hidden Markov models". IEEE ASSP Magazine: 4–15. Eugene Charniak (1993). Statistical Language Learning. Cambridge, Massachusetts: MIT Press. ISBN 978-0-262-53141-2. Stuart Russell and Peter Norvig (2010). Artificial Intelligence A Modern Approach 3rd Edition. Upper Saddle River, New Jersey: Pearson Education/Prentice-Hall. ISBN 978-0-13-604259-4.
Forward–backward algorithm : An interactive spreadsheet for teaching the forward–backward algorithm (spreadsheet and article with step-by-step walk-through) Tutorial of hidden Markov models including the forward–backward algorithm Collection of AI algorithms implemented in Java (including HMM and the forward–backward algorithm)
Gene prediction : In computational biology, gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include prediction of other functional elements such as regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced. In its earliest days, "gene finding" was based on painstaking experimentation on living cells and organisms. Statistical analysis of the rates of homologous recombination of several different genes could determine their order on a certain chromosome, and information from many such experiments could be combined to create a genetic map specifying the rough location of known genes relative to each other. Today, with comprehensive genome sequence and powerful computational resources at the disposal of the research community, gene finding has been redefined as a largely computational problem. Determining that a sequence is functional should be distinguished from determining the function of the gene or its product. Predicting the function of a gene and confirming that the gene prediction is accurate still demands in vivo experimentation through gene knockout and other assays, although frontiers of bioinformatics research are making it increasingly possible to predict the function of a gene based on its sequence alone. Gene prediction is one of the key steps in genome annotation, following sequence assembly, the filtering of non-coding regions and repeat masking. Gene prediction is closely related to the so-called 'target search problem' investigating how DNA-binding proteins (transcription factors) locate specific binding sites within the genome. Many aspects of structural gene prediction are based on current understanding of underlying biochemical processes in the cell such as gene transcription, translation, protein–protein interactions and regulation processes, which are subject of active research in the various omics fields such as transcriptomics, proteomics, metabolomics, and more generally structural and functional genomics.
Gene prediction : In empirical (similarity, homology or evidence-based) gene finding systems, the target genome is searched for sequences that are similar to extrinsic evidence in the form of the known expressed sequence tags, messenger RNA (mRNA), protein products, and homologous or orthologous sequences. Given an mRNA sequence, it is trivial to derive a unique genomic DNA sequence from which it had to have been transcribed. Given a protein sequence, a family of possible coding DNA sequences can be derived by reverse translation of the genetic code. Once candidate DNA sequences have been determined, it is a relatively straightforward algorithmic problem to efficiently search a target genome for matches, complete or partial, and exact or inexact. Given a sequence, local alignment algorithms such as BLAST, FASTA and Smith-Waterman look for regions of similarity between the target sequence and possible candidate matches. Matches can be complete or partial, and exact or inexact. The success of this approach is limited by the contents and accuracy of the sequence database. A high degree of similarity to a known messenger RNA or protein product is strong evidence that a region of a target genome is a protein-coding gene. However, to apply this approach systemically requires extensive sequencing of mRNA and protein products. Not only is this expensive, but in complex organisms, only a subset of all genes in the organism's genome are expressed at any given time, meaning that extrinsic evidence for many genes is not readily accessible in any single cell culture. Thus, to collect extrinsic evidence for most or all of the genes in a complex organism requires the study of many hundreds or thousands of cell types, which presents further difficulties. For example, some human genes may be expressed only during development as an embryo or fetus, which might be difficult to study for ethical reasons. Despite these difficulties, extensive transcript and protein sequence databases have been generated for human as well as other important model organisms in biology, such as mice and yeast. For example, the RefSeq database contains transcript and protein sequence from many different species, and the Ensembl system comprehensively maps this evidence to human and several other genomes. It is, however, likely that these databases are both incomplete and contain small but significant amounts of erroneous data. New high-throughput transcriptome sequencing technologies such as RNA-Seq and ChIP-sequencing open opportunities for incorporating additional extrinsic evidence into gene prediction and validation, and allow structurally rich and more accurate alternative to previous methods of measuring gene expression such as expressed sequence tag or DNA microarray. Major challenges involved in gene prediction involve dealing with sequencing errors in raw DNA data, dependence on the quality of the sequence assembly, handling short reads, frameshift mutations, overlapping genes and incomplete genes. In prokaryotes it's essential to consider horizontal gene transfer when searching for gene sequence homology. An additional important factor underused in current gene detection tools is existence of gene clusters — operons (which are functioning units of DNA containing a cluster of genes under the control of a single promoter) in both prokaryotes and eukaryotes. Most popular gene detectors treat each gene in isolation, independent of others, which is not biologically accurate.
Gene prediction : Ab Initio gene prediction is an intrinsic method based on gene content and signal detection. Because of the inherent expense and difficulty in obtaining extrinsic evidence for many genes, it is also necessary to resort to ab initio gene finding, in which the genomic DNA sequence alone is systematically searched for certain tell-tale signs of protein-coding genes. These signs can be broadly categorized as either signals, specific sequences that indicate the presence of a gene nearby, or content, statistical properties of the protein-coding sequence itself. Ab initio gene finding might be more accurately characterized as gene prediction, since extrinsic evidence is generally required to conclusively establish that a putative gene is functional. In the genomes of prokaryotes, genes have specific and relatively well-understood promoter sequences (signals), such as the Pribnow box and transcription factor binding sites, which are easy to systematically identify. Also, the sequence coding for a protein occurs as one contiguous open reading frame (ORF), which is typically many hundred or thousands of base pairs long. The statistics of stop codons are such that even finding an open reading frame of this length is a fairly informative sign. (Since 3 of the 64 possible codons in the genetic code are stop codons, one would expect a stop codon approximately every 20–25 codons, or 60–75 base pairs, in a random sequence.) Furthermore, protein-coding DNA has certain periodicities and other statistical properties that are easy to detect in a sequence of this length. These characteristics make prokaryotic gene finding relatively straightforward, and well-designed systems are able to achieve high levels of accuracy. Ab initio gene finding in eukaryotes, especially complex organisms like humans, is considerably more challenging for several reasons. First, the promoter and other regulatory signals in these genomes are more complex and less well-understood than in prokaryotes, making them more difficult to reliably recognize. Two classic examples of signals identified by eukaryotic gene finders are CpG islands and binding sites for a poly(A) tail. Second, splicing mechanisms employed by eukaryotic cells mean that a particular protein-coding sequence in the genome is divided into several parts (exons), separated by non-coding sequences (introns). (Splice sites are themselves another signal that eukaryotic gene finders are often designed to identify.) A typical protein-coding gene in humans might be divided into a dozen exons, each less than two hundred base pairs in length, and some as short as twenty to thirty. It is therefore much more difficult to detect periodicities and other known content properties of protein-coding DNA in eukaryotes. Advanced gene finders for both prokaryotic and eukaryotic genomes typically use complex probabilistic models, such as hidden Markov models (HMMs) to combine information from a variety of different signal and content measurements. The GLIMMER system is a widely used and highly accurate gene finder for prokaryotes. GeneMark is another popular approach. Eukaryotic ab initio gene finders, by comparison, have achieved only limited success; notable examples are the GENSCAN and geneid programs. The GeneMark-ES and SNAP gene finders are GHMM-based like GENSCAN. They attempt to address problems related to using a gene finder on a genome sequence that it was not trained against. 
A few recent approaches like mSplicer, CONTRAST, or mGene also use machine learning techniques like support vector machines for successful gene prediction. They build a discriminative model using hidden Markov support vector machines or conditional random fields to learn an accurate gene prediction scoring function. Ab initio methods have been benchmarked, with some approaching 100% sensitivity; however, as the sensitivity increases, accuracy suffers as a result of increased false positives.
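The simplest content signal mentioned above, an unusually long open reading frame, can be illustrated with a short Python sketch; the minimum length, the forward-strand restriction and the toy sequence are arbitrary assumptions for illustration, and real gene finders combine such signals with codon-usage statistics and probabilistic models rather than relying on ORF length alone.

import math

START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=100):
    # Return (start, end, frame) for ORFs on the forward strand only.
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == START:
                start = i
            elif start is not None and codon in STOPS:
                if (i + 3 - start) // 3 >= min_codons:
                    orfs.append((start, i + 3, frame))
                start = None
    return orfs

# With 3 stop codons out of 64, a random sequence is expected to contain a stop
# roughly every 21 codons, so an ORF of hundreds of codons is informative.
print(find_orfs("ATG" + "GCT" * 150 + "TAA", min_codons=100))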
Gene prediction : Programs such as Maker combine extrinsic and ab initio approaches by mapping protein and EST data to the genome to validate ab initio predictions. Augustus, which may be used as part of the Maker pipeline, can also incorporate hints in the form of EST alignments or protein profiles to increase the accuracy of the gene prediction.
Gene prediction : As the entire genomes of many different species are sequenced, a promising direction in current research on gene finding is a comparative genomics approach. This is based on the principle that the forces of natural selection cause genes and other functional elements to undergo mutation at a slower rate than the rest of the genome, since mutations in functional elements are more likely to negatively impact the organism than mutations elsewhere. Genes can thus be detected by comparing the genomes of related species to detect this evolutionary pressure for conservation. This approach was first applied to the mouse and human genomes, using programs such as SLAM, SGP and TWINSCAN/N-SCAN and CONTRAST.
Gene prediction : Pseudogenes are close relatives of genes, sharing very high sequence homology, but being unable to code for the same protein product. Whilst once relegated as byproducts of gene sequencing, increasingly, as regulatory roles are being uncovered, they are becoming predictive targets in their own right. Pseudogene prediction utilises existing sequence similarity and ab initio methods, whilst adding additional filtering and methods of identifying pseudogene characteristics. Sequence similarity methods can be customised for pseudogene prediction using additional filtering to find candidate pseudogenes. This could use disablement detection, which looks for nonsense or frameshift mutations that would truncate or collapse an otherwise functional coding sequence. Additionally, translating DNA into proteins sequences can be more effective than just straight DNA homology. Content sensors can be filtered according to the differences in statistical properties between pseudogenes and genes, such as a reduced count of CpG islands in pseudogenes, or the differences in G-C content between pseudogenes and their neighbours. Signal sensors also can be honed to pseudogenes, looking for the absence of introns or polyadenine tails.
Gene prediction : Metagenomics is the study of genetic material recovered from the environment, resulting in sequence information from a pool of organisms. Predicting genes is useful for comparative metagenomics. Metagenomics tools also fall into the basic categories of using either sequence similarity approaches (MEGAN4) or ab initio techniques (GLIMMER-MG). Glimmer-MG is an extension to GLIMMER that relies mostly on an ab initio approach for gene finding and uses training sets from related organisms. The prediction strategy is augmented by classification and clustering of gene data sets prior to applying ab initio gene prediction methods. The data is clustered by species. This classification method leverages techniques from metagenomic phylogenetic classification. An example of software for this purpose is Phymm, which uses interpolated Markov models, and PhymmBL, which integrates BLAST into the classification routines. MEGAN4 uses a sequence similarity approach, using local alignment against databases of known sequences, but also attempts to classify using additional information on functional roles, biological pathways and enzymes. As in single organism gene prediction, sequence similarity approaches are limited by the size of the database. FragGeneScan and MetaGeneAnnotator are popular gene prediction programs based on hidden Markov models. These predictors account for sequencing errors and partial genes, and work for short reads. Another fast and accurate tool for gene prediction in metagenomes is MetaGeneMark. This tool is used by the DOE Joint Genome Institute to annotate IMG/M, the largest metagenome collection to date.
Gene prediction : List of gene prediction software Phylogenetic footprinting Protein function prediction Protein structure prediction Protein–protein interaction prediction Pseudogene (database) Sequence mining Sequence similarity (homology)
Gene prediction : Augustus FGENESH Archived 2013-01-04 at archive.today GeMoMa - Homology-based gene prediction based on amino acid and intron position conservation as well as RNA-Seq data geneid, SGP2 Glimmer Archived 2011-08-26 at the Wayback Machine, GlimmerHMM Archived 2011-08-18 at the Wayback Machine GenomeThreader ChemGenome GeneMark Gismo mGene StarORF — A multi-platform and web tool for predicting ORFs and obtaining reverse complement sequence Maker - A portable and easily configurable genome annotation pipeline
Generalized filtering : Generalized filtering is a generic Bayesian filtering scheme for nonlinear state-space models. It is based on a variational principle of least action, formulated in generalized coordinates of motion. Note that "generalized coordinates of motion" are related to—but distinct from—generalized coordinates as used in (multibody) dynamical systems analysis. Generalized filtering furnishes posterior densities over hidden states (and parameters) generating observed data using a generalized gradient descent on variational free energy, under the Laplace assumption. Unlike classical (e.g. Kalman-Bucy or particle) filtering, generalized filtering eschews Markovian assumptions about random fluctuations. Furthermore, it operates online, assimilating data to approximate the posterior density over unknown quantities, without the need for a backward pass. Special cases include variational filtering, dynamic expectation maximization and generalized predictive coding.
Generalized filtering : Definition: Generalized filtering rests on the tuple $(\Omega, U, X, S, p, q)$:

A sample space $\Omega$ from which random fluctuations $\omega \in \Omega$ are drawn
Control states $U \in \mathbb{R}$ – that act as external causes, input or forcing terms
Hidden states $X : X \times U \times \Omega \to \mathbb{R}$ – that cause sensory states and depend on control states
Sensor states $S : X \times U \times \Omega \to \mathbb{R}$ – a probabilistic mapping from hidden and control states
Generative density $p(\tilde{s}, \tilde{x}, \tilde{u} \mid m)$ – over sensory, hidden and control states under a generative model $m$
Variational density $q(\tilde{x}, \tilde{u} \mid \tilde{\mu})$ – over hidden and control states with mean $\tilde{\mu} \in \mathbb{R}$

Here ~ denotes a variable in generalized coordinates of motion: $\tilde{u} = [u, u', u'', \ldots]^T$
Generalized filtering : Usually, the generative density or model is specified in terms of a nonlinear input-state-output model with continuous nonlinear functions:

$s = g(x,u) + \omega_s$
$\dot{x} = f(x,u) + \omega_x$

The corresponding generalized model (under local linearity assumptions) is obtained from the chain rule:

$\tilde{s} = \tilde{g}(\tilde{x},\tilde{u}) + \tilde{\omega}_s: \quad s = g(x,u) + \omega_s, \quad s' = \partial_x g \cdot x' + \partial_u g \cdot u' + \omega_s', \quad s'' = \partial_x g \cdot x'' + \partial_u g \cdot u'' + \omega_s'', \quad \ldots$

$\dot{\tilde{x}} = \tilde{f}(\tilde{x},\tilde{u}) + \tilde{\omega}_x: \quad \dot{x} = f(x,u) + \omega_x, \quad \dot{x}' = \partial_x f \cdot x' + \partial_u f \cdot u' + \omega_x', \quad \dot{x}'' = \partial_x f \cdot x'' + \partial_u f \cdot u'' + \omega_x'', \quad \ldots$

Gaussian assumptions about the random fluctuations $\omega$ then prescribe the likelihood and empirical priors on the motion of hidden states:

$p(\tilde{s}, \tilde{x}, \tilde{u} \mid m) = p(\tilde{s} \mid \tilde{x}, \tilde{u}, m)\, p(D\tilde{x} \mid \tilde{x}, \tilde{u}, m)\, p(x \mid m)\, p(\tilde{u} \mid m)$
$p(\tilde{s} \mid \tilde{x}, \tilde{u}, m) = \mathcal{N}(\tilde{g}(\tilde{x},\tilde{u}),\, \tilde{\Sigma}(\tilde{x},\tilde{u})_s)$
$p(D\tilde{x} \mid \tilde{x}, \tilde{u}, m) = \mathcal{N}(\tilde{f}(\tilde{x},\tilde{u}),\, \tilde{\Sigma}(\tilde{x},\tilde{u})_x)$

The covariances $\tilde{\Sigma} = V \otimes \Sigma$ factorize into a covariance $\Sigma$ among variables and correlations $V$ among generalized fluctuations that encode their autocorrelation:

$V = \begin{bmatrix} 1 & 0 & \ddot{\rho}(0) & \cdots \\ 0 & -\ddot{\rho}(0) & 0 & \\ \ddot{\rho}(0) & 0 & \ddot{\ddot{\rho}}(0) & \\ \vdots & & & \ddots \end{bmatrix}$

Here, $\ddot{\rho}(0)$ is the second derivative of the autocorrelation function evaluated at zero. This is a ubiquitous measure of roughness in the theory of stochastic processes. Crucially, the precision (inverse variance) of high-order derivatives falls to zero fairly quickly, which means it is only necessary to model relatively low-order generalized motion (usually between two and eight) for any given or parameterized autocorrelation function.
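For intuition, the following minimal Python sketch (with NumPy) simulates data from a one-dimensional instance of the nonlinear input-state-output model above using a simple Euler–Maruyama scheme; the particular choices of f, g, the input u and the noise amplitudes are arbitrary assumptions for illustration, and the sketch only generates data rather than performing generalized filtering itself.

import numpy as np

rng = np.random.default_rng(0)

def f(x, u):            # assumed flow of the hidden state
    return -x + u

def g(x, u):            # assumed sensory mapping
    return np.tanh(x)

dt, n_steps = 0.01, 1000
sigma_x, sigma_s = 0.1, 0.05    # assumed noise amplitudes

x = 0.0
xs, ss = [], []
for k in range(n_steps):
    u = np.sin(0.01 * k)                                        # assumed exogenous cause
    x += dt * f(x, u) + np.sqrt(dt) * sigma_x * rng.standard_normal()
    s = g(x, u) + sigma_s * rng.standard_normal()
    xs.append(x)
    ss.append(s)

# A filtering scheme such as generalized filtering would try to recover
# the hidden trajectory xs online from the observed sequence ss.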
Generalized filtering : Generalized filtering has been primarily applied to biological timeseries—in particular functional magnetic resonance imaging and electrophysiological data. This is usually in the context of dynamic causal modelling to make inferences about the underlying architectures of (neuronal) systems generating data. It is also used to simulate inference in terms of generalized (hierarchical) predictive coding in the brain.
Generalized filtering : Dynamic Bayesian network Kalman filter Linear predictive coding Optimal control Particle filter Recursive Bayesian estimation System identification Variational Bayesian methods
Generalized filtering : software demonstrations and applications are available as academic freeware (as Matlab code) in the DEM toolbox of SPM papers collection of technical and application papers
GLIMMER : In bioinformatics, GLIMMER (Gene Locator and Interpolated Markov ModelER) is used to find genes in prokaryotic DNA. "It is effective at finding genes in bacteria, archea, viruses, typically finding 98-99% of all relatively long protein coding genes". GLIMMER was the first system that used the interpolated Markov model to identify coding regions. The GLIMMER software is open source and is maintained by Steven Salzberg, Art Delcher, and their colleagues at the Center for Computational Biology at Johns Hopkins University. The original GLIMMER algorithms and software were designed by Art Delcher, Simon Kasif and Steven Salzberg and applied to bacterial genome annotation in collaboration with Owen White.
GLIMMER : GLIMMER can be downloaded from The Glimmer home page (requires a C++ compiler). Alternatively, an online version is hosted by NCBI [1].
GLIMMER : GLIMMER primarily searches for long ORFs. An open reading frame may overlap any other open reading frame; such overlaps are resolved using the technique described in the sub-section below. Using these long ORFs and following a certain amino acid distribution, GLIMMER generates training set data. Using these training data, GLIMMER trains all six Markov models of coding DNA, from zeroth to eighth order, and also trains the model for noncoding DNA. GLIMMER then calculates the probabilities from the data. Based on the number of observations, GLIMMER determines whether to use a fixed-order Markov model or an interpolated Markov model. If the number of observations is greater than 400, GLIMMER uses a fixed-order Markov model to obtain the probabilities. If the number of observations is less than 400, GLIMMER uses an interpolated Markov model, which is briefly explained in the next sub-section. GLIMMER obtains a score for every long ORF using all six coding DNA models and also using the non-coding DNA model. If the score obtained in the previous step is greater than a certain threshold, GLIMMER predicts it to be a gene. The steps explained above describe the basic functionality of GLIMMER. Various improvements have been made to GLIMMER, and some of them are described in the following sub-sections.
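The interpolation idea can be illustrated with a simplified Python sketch (an illustration of the principle, not GLIMMER's actual implementation): when a context has been observed at least 400 times, the fixed-order estimate is used; otherwise the estimate is blended with the next lower order. The blending weight and the low maximum order used here are simplifying assumptions.

import math
from collections import Counter

def train_counts(sequences, order):
    # Count (context, next_base) occurrences for a Markov model of the given order.
    counts = Counter()
    for seq in sequences:
        for i in range(order, len(seq)):
            counts[(seq[i - order:i], seq[i])] += 1
    return counts

def interpolated_prob(base, context, counts_by_order, threshold=400):
    # P(base | context), interpolating towards lower orders when data are sparse.
    order = len(context)
    if order == 0:
        total = sum(counts_by_order[0].values())
        return counts_by_order[0][("", base)] / total if total else 0.25
    counts = counts_by_order[order]
    ctx_total = sum(counts[(context, b)] for b in "ACGT")
    lower = interpolated_prob(base, context[1:], counts_by_order, threshold)
    if ctx_total >= threshold:                       # enough data: trust the fixed-order estimate
        return counts[(context, base)] / ctx_total
    weight = ctx_total / threshold                   # otherwise blend with the lower order
    higher = counts[(context, base)] / ctx_total if ctx_total else 0.0
    return weight * higher + (1 - weight) * lower

# Training on a toy "coding" set and scoring a candidate ORF by log-probability
training = ["ATGGCTGCTAAAGCTTAA", "ATGGCTAAAGCTGCTTAA"]
counts_by_order = {k: train_counts(training, k) for k in range(0, 3)}

orf = "ATGGCTAAATAA"
score = sum(math.log(max(interpolated_prob(orf[i], orf[max(0, i - 2):i], counts_by_order), 1e-12))
            for i in range(len(orf)))
print(score)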
GLIMMER : Glimmer supports genome annotation efforts on a wide range of bacterial, archaeal, and viral species. In a large-scale reannotation effort at the DNA Data Bank of Japan (DDBJ, which mirrors GenBank), Kosuge et al. (2006) examined the gene finding methods used for 183 genomes. They reported that of these projects, Glimmer was the gene finder for 49%, followed by GeneMark with 12%, with other algorithms used in 3% or fewer of the projects. (They also reported that 33% of genomes used "other" programs, which in many cases meant that they could not identify the method. Excluding those cases, Glimmer was used for 73% of the genomes for which the methods could be unambiguously identified.) Glimmer was used by the DDBJ to re-annotate all bacterial genomes in the International Nucleotide Sequence Databases. It is also being used by this group to annotate viruses. Glimmer is part of the bacterial annotation pipeline at the National Center for Biotechnology Information (NCBI), which also maintains a web server for Glimmer, as do sites in Germany and Canada. According to Google Scholar, as of early 2011 the original Glimmer article (Salzberg et al., 1998) has been cited 581 times, and the Glimmer 2.0 article (Delcher et al., 1999) has been cited 950 times.
GLIMMER : The Glimmer home page at CCB, Johns Hopkins University, from which the software can be downloaded.
Google matrix : A Google matrix is a particular stochastic matrix that is used by Google's PageRank algorithm. The matrix represents a graph with edges representing links between pages. The PageRank of each page can then be generated iteratively from the Google matrix using the power method. However, in order for the power method to converge, the matrix must be stochastic, irreducible and aperiodic.
Google matrix : In order to generate the Google matrix G, we must first generate an adjacency matrix A which represents the relations between pages or nodes. Assuming there are N pages, we can fill out A by doing the following: A matrix element $A_{i,j}$ is filled with 1 if node j has a link to node i, and 0 otherwise; this is the adjacency matrix of links. A related matrix S, corresponding to the transitions in a Markov chain of the given network, is constructed from A by dividing the elements of column j by $k_j = \sum_{i=1}^{N} A_{i,j}$, where $k_j$ is the total number of outgoing links from node j to all other nodes. The columns having zero matrix elements, corresponding to dangling nodes, are replaced by a constant value 1/N. Such a procedure adds a link from every sink (dangling state) to every other node. Now, by construction, the sum of all elements in any column of matrix S is equal to unity. In this way the matrix S is mathematically well defined and it belongs to the class of Markov chains and the class of Perron–Frobenius operators. That makes S suitable for the PageRank algorithm.
Google matrix : Then the final Google matrix G can be expressed via S as:

$G_{ij} = \alpha S_{ij} + (1-\alpha)\frac{1}{N} \;\;\;\;\;\;\;\;\;\; (1)$

By the construction the sum of all non-negative elements inside each matrix column is equal to unity. The numerical coefficient $\alpha$ is known as a damping factor. Usually S is a sparse matrix and for modern directed networks it has only about ten nonzero elements in a line or column, thus only about 10N multiplications are needed to multiply a vector by matrix G.
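The construction of S and G and the power-method computation of PageRank can be sketched in Python with NumPy as follows; the small example graph, the tolerance and the iteration limit are arbitrary assumptions, and the sketch is an illustrative toy rather than a production implementation.

import numpy as np

def google_matrix(A, alpha=0.85):
    # Build G from an adjacency matrix A (A[i, j] = 1 if node j links to node i).
    N = A.shape[0]
    S = A.astype(float)
    col_sums = S.sum(axis=0)
    for j in range(N):
        if col_sums[j] == 0:
            S[:, j] = 1.0 / N           # dangling node: link to every node
        else:
            S[:, j] /= col_sums[j]      # normalize column j by its out-degree k_j
    return alpha * S + (1 - alpha) / N  # Eq. (1)

def pagerank(G, tol=1e-10, max_iter=1000):
    # Power method: iterate P <- G P until convergence to the leading eigenvector.
    N = G.shape[0]
    P = np.full(N, 1.0 / N)
    for _ in range(max_iter):
        P_next = G @ P
        if np.abs(P_next - P).sum() < tol:
            return P_next
        P = P_next
    return P

# Toy network: node 0 -> 1, node 1 -> 2, node 2 -> 0 and 1, node 3 dangling
A = np.array([[0, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
print(pagerank(google_matrix(A)))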
Google matrix : An example of the matrix S construction via Eq.(1) within a simple network is given in the article CheiRank. For the actual matrix, Google uses a damping factor α around 0.85. The term (1 − α) gives the probability for a surfer to jump randomly to any page. The matrix G belongs to the class of Perron–Frobenius operators of Markov chains. Examples of the Google matrix structure are shown in Fig.1 for the Wikipedia articles hyperlink network in 2009 at small scale and in Fig.2 for the University of Cambridge network in 2006 at large scale.
Google matrix : For 0 < α < 1 there is only one maximal eigenvalue λ = 1, with the corresponding right eigenvector having non-negative elements $P_i$, which can be viewed as a stationary probability distribution. These probabilities, ordered by their decreasing values, give the PageRank vector $P_i$ with the PageRank index $K_i$ used by Google search to rank webpages. Usually one has for the World Wide Web that $P \propto 1/K^{\beta}$ with $\beta \approx 0.9$. The number of nodes with a given PageRank value scales as $N_P \propto 1/P^{\nu}$ with the exponent $\nu = 1 + 1/\beta \approx 2.1$. The left eigenvector at λ = 1 has constant matrix elements. For 0 < α < 1 all eigenvalues move as $\lambda_i \rightarrow \alpha \lambda_i$ except the maximal eigenvalue λ = 1, which remains unchanged. The PageRank vector varies with α, but other eigenvectors with $\lambda_i < 1$ remain unchanged due to their orthogonality to the constant left vector at λ = 1. The gap between λ = 1 and the other eigenvalues, being 1 − α ≈ 0.15, gives rapid convergence of a random initial vector to the PageRank, approximately after 50 multiplications by the matrix G. At α = 1 the matrix G generally has many degenerate eigenvalues λ = 1 (see e.g. [6]). Examples of the eigenvalue spectrum of the Google matrix of various directed networks are shown in Fig.3 and Fig.4. The Google matrix can also be constructed for the Ulam networks generated by the Ulam method [8] for dynamical maps. The spectral properties of such matrices are discussed in [9,10,11,12,13,15]. In a number of cases the spectrum is described by the fractal Weyl law [10,12]. The Google matrix can also be constructed for other directed networks, e.g. for the procedure call network of the Linux Kernel software introduced in [15]. In this case the spectrum of λ is described by the fractal Weyl law with the fractal dimension d ≈ 1.3 (see Fig.5). Numerical analysis shows that the eigenstates of matrix G are localized (see Fig.6). The Arnoldi iteration method makes it possible to compute many eigenvalues and eigenvectors for matrices of rather large size [13]. Other examples of the G matrix include the Google matrix of the brain [17] and of business process management [18]. Applications of Google matrix analysis to DNA sequences are described in [20]. Such a Google matrix approach also allows one to analyze the entanglement of cultures via the ranking of multilingual Wikipedia articles about persons [21].
Google matrix : The Google matrix with damping factor was described by Sergey Brin and Larry Page in 1998 [22], see also articles on PageRank history [23],[24].
Google matrix : CheiRank Arnoldi iteration Markov chain Transfer operator Perron–Frobenius theorem Web search engines
Google matrix : Google matrix at Scholarpedia Google PR Shut Down Video lectures at IHES Workshop "Google matrix: fundamental, applications and beyond", Oct 2018
Hidden Markov model : A hidden Markov model (HMM) is a Markov model in which the observations are dependent on a latent (or hidden) Markov process (referred to as X). An HMM requires that there be an observable process Y whose outcomes depend on the outcomes of X in a known way. Since X cannot be observed directly, the goal is to learn about the state of X by observing Y. By definition of being a Markov model, an HMM has an additional requirement that the outcome of Y at time t = t_0 must be "influenced" exclusively by the outcome of X at t = t_0, and that the outcomes of X and Y at t < t_0 must be conditionally independent of Y at t = t_0 given X at time t = t_0. Estimation of the parameters in an HMM can be performed using maximum likelihood estimation. For linear chain HMMs, the Baum–Welch algorithm can be used to estimate parameters. Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, and pattern recognition, such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.
Hidden Markov model : Let $X_n$ and $Y_n$ be discrete-time stochastic processes and $n \geq 1$. The pair $(X_n, Y_n)$ is a hidden Markov model if

$X_n$ is a Markov process whose behavior is not directly observable ("hidden");
$P(Y_n \in A \mid X_1 = x_1, \ldots, X_n = x_n) = P(Y_n \in A \mid X_n = x_n)$ for every $n \geq 1$, $x_1, \ldots, x_n$, and every Borel set $A$.

Let $X_t$ and $Y_t$ be continuous-time stochastic processes. The pair $(X_t, Y_t)$ is a hidden Markov model if

$X_t$ is a Markov process whose behavior is not directly observable ("hidden");
$P(Y_{t_0} \in A \mid \{X_t \in B_t\}_{t \leq t_0}) = P(Y_{t_0} \in A \mid X_{t_0} \in B_{t_0})$ for every $t_0$, every Borel set $A$, and every family of Borel sets $\{B_t\}_{t \leq t_0}$.
Hidden Markov model : The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t (with the model from the above diagram, x(t) ∈ ). The random variable y(t) is the observation at time t (with y(t) ∈ ). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies. From the diagram, it is clear that the conditional probability distribution of the hidden variable x(t) at time t, given the values of the hidden variable x at all times, depends only on the value of the hidden variable x(t − 1); the values at time t − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable y(t) depends on only the value of the hidden variable x(t) (both at time t). In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, transition probabilities and emission probabilities (also known as output probabilities). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time t − 1. The hidden state space is assumed to consist of one of N possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the N possible states that a hidden variable at time t can be in, there is a transition probability from this state to each of the N possible states of the hidden variable at time t + 1, for a total of $N^2$ transition probabilities. The set of transition probabilities for transitions from any given state must sum to 1. Thus, the N × N matrix of transition probabilities is a Markov matrix. Because any transition probability can be determined once the others are known, there are a total of $N(N-1)$ transition parameters. In addition, for each of the N possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with M possible values, governed by a categorical distribution, there will be $M-1$ separate parameters, for a total of $N(M-1)$ emission parameters over all hidden states. On the other hand, if the observed variable is an M-dimensional vector distributed according to an arbitrary multivariate Gaussian distribution, there will be M parameters controlling the means and $\frac{M(M+1)}{2}$ parameters controlling the covariance matrix, for a total of $N\left(M + \frac{M(M+1)}{2}\right) = \frac{NM(M+3)}{2} = O(NM^2)$ emission parameters. (In such a case, unless the value of M is small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.)
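This parameterization can be made concrete with a short Python sketch using NumPy (the two-state, three-symbol values below are illustrative assumptions): a categorical HMM is fully specified by an initial distribution, an N × N transition matrix and an N × M emission matrix, and sampling from it follows the conditional dependencies described above.

import numpy as np

rng = np.random.default_rng(0)

# N = 2 hidden states, M = 3 possible observations (assumed values)
pi = np.array([0.6, 0.4])                  # initial state distribution
A = np.array([[0.7, 0.3],                  # A[i, j] = P(state j at t+1 | state i at t)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],             # B[i, k] = P(observation k | state i)
              [0.1, 0.3, 0.6]])

def sample(T):
    # Draw a state path and observation sequence of length T from the HMM.
    states, obs = [], []
    state = rng.choice(2, p=pi)
    for _ in range(T):
        obs.append(rng.choice(3, p=B[state]))
        states.append(state)
        state = rng.choice(2, p=A[state])
    return states, obs

# Free parameters here: N(N-1) = 2 for transitions, N(M-1) = 4 for emissions
print(sample(5))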
Hidden Markov model : Several inference problems are associated with hidden Markov models, as outlined below.
Hidden Markov model : The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and emission probabilities. The task is usually to derive the maximum likelihood estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum–Welch algorithm or the Baldi–Chauvin algorithm. The Baum–Welch algorithm is a special case of the expectation-maximization algorithm. If the HMMs are used for time series prediction, more sophisticated Bayesian inference methods, like Markov chain Monte Carlo (MCMC) sampling, have proven favorable over finding a single maximum likelihood model, both in terms of accuracy and stability. Since MCMC imposes a significant computational burden, in cases where computational scalability is also of interest, one may alternatively resort to variational approximations to Bayesian inference. Indeed, approximate variational inference offers computational efficiency comparable to expectation-maximization, while yielding an accuracy profile only slightly inferior to exact MCMC-type Bayesian inference.
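A compact sketch of one Baum–Welch (EM) re-estimation step for a categorical HMM is shown below in Python with NumPy; it is an unscaled textbook-style illustration for a single observation sequence (variable names are assumptions), so for long sequences the scaled forward–backward recursions discussed earlier would be needed to avoid numerical underflow.

import numpy as np

def baum_welch_step(obs, pi, A, B):
    # One EM re-estimation step for a categorical HMM (unscaled, single sequence).
    T, N = len(obs), len(pi)

    # E-step: forward and backward variables
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    likelihood = alpha[-1].sum()
    gamma = alpha * beta / likelihood                     # P(state at t | observations)
    xi = np.zeros((T - 1, N, N))                          # P(state i at t, state j at t+1 | observations)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / likelihood

    # M-step: re-estimate the parameters
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_pi, new_A, new_B, likelihood

Iterating this step until the likelihood stops improving yields a local maximum likelihood estimate, as described above.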
Hidden Markov model : HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depend on the sequence are). Applications include: Computational finance Single-molecule kinetic analysis Neuroscience Cryptanalysis Speech recognition, including Siri Speech synthesis Part-of-speech tagging Document separation in scanning solutions Machine translation Partial discharge Gene prediction Handwriting recognition Alignment of bio-sequences Time series analysis Activity recognition Protein folding Sequence classification Metamorphic virus detection Sequence motif discovery (DNA and proteins) DNA hybridization kinetics Chromatin state discovery Transportation forecasting Solar irradiance variability
Hidden Markov model : Hidden Markov models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. From the linguistics point of view, hidden Markov models are equivalent to stochastic regular grammar. In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.
Hidden Markov model : Given a Markov transition matrix and an invariant distribution on the states, a probability measure can be imposed on the set of subshifts. For example, consider the Markov chain given on the left on the states $A, B_1, B_2$, with invariant distribution $\pi = (2/7, 4/7, 1/7)$. By ignoring the distinction between $B_1, B_2$, this space of subshifts on $\{A, B_1, B_2\}$ is projected into another space of subshifts on $\{A, B\}$, and this projection also projects the probability measure down to a probability measure on the subshifts on $\{A, B\}$. The curious thing is that the probability measure on the subshifts on $\{A, B\}$ is not created by a Markov chain on $\{A, B\}$, not even of multiple orders. Intuitively, this is because if one observes a long sequence $B^n$, then one becomes increasingly sure that $\Pr(A \mid B^n) \to \frac{2}{3}$, meaning that the observable part of the system can be affected by something infinitely in the past. Conversely, there exists a space of subshifts on 6 symbols, projected to subshifts on 2 symbols, such that any Markov measure on the smaller subshift has a preimage measure that is not Markov of any order (example 2.6).
Interacting particle system : In probability theory, an interacting particle system (IPS) is a stochastic process $(X(t))_{t \in \mathbb{R}^+}$ on some configuration space $\Omega = S^G$ given by a site space, a countably-infinite-order graph $G$, and a local state space, a compact metric space $S$. More precisely IPS are continuous-time Markov jump processes describing the collective behavior of stochastically interacting components. IPS are the continuous-time analogue of stochastic cellular automata. Among the main examples are the voter model, the contact process, the asymmetric simple exclusion process (ASEP), the Glauber dynamics and in particular the stochastic Ising model. IPS are usually defined via their Markov generator, giving rise to a unique Markov process using Markov semigroups and the Hille–Yosida theorem. The generator again is given via so-called transition rates $c_\Lambda(\eta, \xi) > 0$, where $\Lambda \subset G$ is a finite set of sites and $\eta, \xi \in \Omega$ with $\eta_i = \xi_i$ for all $i \notin \Lambda$. The rates describe exponential waiting times of the process to jump from configuration $\eta$ into configuration $\xi$. More generally the transition rates are given in the form of a finite measure $c_\Lambda(\eta, d\xi)$ on $S^\Lambda$. The generator $L$ of an IPS has the following form. First, the domain of $L$ is a subset of the space of "observables", that is, the set of real valued continuous functions on the configuration space $\Omega$. Then for any observable $f$ in the domain of $L$, one has

$L f(\eta) = \sum_\Lambda \int_{\xi : \xi_{\Lambda^c} = \eta_{\Lambda^c}} c_\Lambda(\eta, d\xi)\, [f(\xi) - f(\eta)]$

For example, for the stochastic Ising model we have $G = \mathbb{Z}^d$, $S = \{-1, +1\}$, $c_\Lambda = 0$ if $\Lambda \neq \{i\}$ for some $i \in G$, and

$c_i(\eta, \eta^i) = \exp\left[-\beta \sum_{j : |j - i| = 1} \eta_i \eta_j\right]$

where $\eta^i$ is the configuration equal to $\eta$ except it is flipped at site $i$. $\beta$ is a new parameter modeling the inverse temperature.
Interacting particle system : The voter model (usually in continuous time, but there are discrete versions as well) is a process similar to the contact process. In this process η ( x ) is taken to represent a voter's attitude on a particular topic. Voters reconsider their opinions at times distributed according to independent exponential random variables (this gives a Poisson process locally – note that there are in general infinitely many voters so no global Poisson process can be used). At times of reconsideration, a voter chooses one neighbor uniformly from amongst all neighbors and takes that neighbor's opinion. One can generalize the process by allowing the picking of neighbors to be something other than uniform.
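A minimal simulation sketch of the voter model on a one-dimensional ring is given below in Python; the finite lattice, the two-opinion state space and the one-update-per-step approximation of the exponential reconsideration times are simplifying assumptions for illustration.

import random

def voter_model_ring(n_sites=100, n_steps=10_000, seed=1):
    # Voter model on a one-dimensional ring with two opinions (0 and 1).
    random.seed(seed)
    eta = [random.randint(0, 1) for _ in range(n_sites)]    # initial configuration
    for _ in range(n_steps):
        x = random.randrange(n_sites)                       # voter who reconsiders
        neighbor = (x + random.choice([-1, 1])) % n_sites   # uniformly chosen neighbor on the ring
        eta[x] = eta[neighbor]                              # adopt the neighbor's opinion
    return eta

final = voter_model_ring()
print(sum(final), "sites hold opinion 1 after the run")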
Interacting particle system : Clifford, Peter; Aidan Sudbury (1973). "A Model for Spatial Conflict". Biometrika. 60 (3): 581–588. doi:10.1093/biomet/60.3.581. Durrett, Richard; Jeffrey E. Steif (1993). "Fixation Results for Threshold Voter Systems". The Annals of Probability. 21 (1): 232–247. doi:10.1214/aop/1176989403. Holley, Richard A.; Thomas M. Liggett (1975). "Ergodic Theorems for Weakly Interacting Infinite Systems and The Voter Model". The Annals of Probability. 3 (4): 643–663. doi:10.1214/aop/1176996306. Steif, Jeffrey E. (1994). "The Threshold Voter Automaton at a Critical Point". The Annals of Probability. 22 (3): 1121–1139. doi:10.1214/aop/1176988597. Liggett, Thomas M. (1997). "Stochastic Models of Interacting Systems". The Annals of Probability. 25 (1). Institute of Mathematical Statistics: 1–29. doi:10.1214/aop/1024404276. ISSN 0091-1798. Liggett, Thomas M. (1985). Interacting Particle Systems. New York: Springer Verlag. ISBN 0-387-96069-4.
Iterative Viterbi decoding : Iterative Viterbi decoding is an algorithm that spots the subsequence S of an observation $O = \{o_1, \ldots, o_n\}$ having the highest average probability (i.e., probability scaled by the length of S) of being generated by a given hidden Markov model M with m states. The algorithm uses a modified Viterbi algorithm as an internal step. The scaled probability measure was first proposed by John S. Bridle. An early algorithm to solve this problem, sliding window, was proposed by Jay G. Wilpon et al., 1989, with constant cost $T = mn^2/2$. A faster algorithm consists of an iteration of calls to the Viterbi algorithm, reestimating a filler score until convergence.
Iterative Viterbi decoding : A basic (non-optimized) version, finding the sequence s with the smallest normalized distance from some subsequence of t, is:

// input is placed in observation s[1..n], template t[1..m],
// and distance matrix d[1..n,1..m]
// remaining elements in matrices are solely for internal computations
(int, int, int) AverageSubmatchDistance(char s[0..(n+1)], char t[0..(m+1)], int d[1..n,0..(m+1)])

The ViterbiDistance() procedure returns the tuple (e, B, E), i.e., the Viterbi score "e" for the match of t and the selected entry (B) and exit (E) points from it. "B" and "E" have to be recorded using a simple modification to Viterbi. A modification that can be applied to CYK tables, proposed by Antoine Rozenknop, consists in subtracting e from all elements of the initial matrix d.
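The body of AverageSubmatchDistance is not reproduced above; a Python-style sketch of the re-estimation loop it describes could look as follows, assuming a helper viterbi_distance(s, t, d) that returns the best (score, B, E) for the current, shifted distance matrix. Both this interface and the convergence test are assumptions made for illustration, not the original listing.

def average_submatch_distance(s, t, d, viterbi_distance, tol=1e-9, max_iter=100):
    # Iteratively re-estimate the filler (average) score e until it stabilizes.
    e = 0.0                                    # initial guess for the average score
    B = E = 0
    for _ in range(max_iter):
        # Following Rozenknop's remark, e is subtracted from every element of d
        # before the best-matching subsequence is found with (modified) Viterbi.
        d_shifted = [[cell - e for cell in row] for row in d]
        score, B, E = viterbi_distance(s, t, d_shifted)
        new_e = e + score / (E - B + 1)        # update the average over the matched span
        if abs(new_e - e) < tol:
            return new_e, B, E
        e = new_e
    return e, B, E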
Iterative Viterbi decoding : Silaghi, M., "Spotting Subsequences matching a HMM using the Average Observation Probability Criteria with application to Keyword Spotting", AAAI, 2005. Rozenknop, Antoine, and Silaghi, Marius; "Algorithme de décodage de treillis selon le critère de coût moyen pour la reconnaissance de la parole", TALN 2001.
Iterative Viterbi decoding : Li, Huan-Bang; Kohno, Ryuji (2006). An Efficient Code Structure of Block Coded Modulations with Iterative Viterbi Decoding Algorithm. 3rd International Symposium on Wireless Communication Systems. Valencia, Spain: IEEE. doi:10.1109/ISWCS.2006.4362391. ISBN 978-1-4244-0397-4. Wang, Qi; Wei, Lei; Kennedy, R.A. (January 2002). "Iterative Viterbi decoding, trellis shaping, and multilevel structure for high-rate parity-concatenated TCM". IEEE Transactions on Communications. 50 (1): 48–55. doi:10.1109/26.975743. ISSN 0090-6778.