Stone functor

In mathematics, the Stone functor is a functor $S\colon \mathbf{Top}^{\mathrm{op}}\to \mathbf{Bool}$, where $\mathbf{Top}$ is the category of topological spaces and $\mathbf{Bool}$ is the category of Boolean algebras and Boolean homomorphisms. It assigns to each topological space $X$ the Boolean algebra $S(X)$ of its clopen subsets, and to each morphism $f^{\mathrm{op}}\colon X\to Y$ in $\mathbf{Top}^{\mathrm{op}}$ (i.e., a continuous map $f\colon Y\to X$) the homomorphism $S(f)\colon S(X)\to S(Y)$ given by $S(f)(Z)=f^{-1}[Z]$.

See also
• Stone's representation theorem for Boolean algebras
• Pointless topology

References
• Adámek, Jiří; Herrlich, Horst; Strecker, George E., Abstract and Concrete Categories: The Joy of Cats.
• Johnstone, Peter T. (1982), Stone Spaces, Cambridge University Press, ISBN 0-521-23893-5.
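On finite spaces everything above can be computed directly. The following minimal Python sketch (our own illustration, not from the article; the helper names `clopens` and `preimage` are made up) checks that $S(f)(Z)=f^{-1}[Z]$ carries clopen sets to clopen sets and preserves the Boolean operations.

```python
def clopens(points, opens):
    # Clopen subsets of a finite space: open sets whose complement is open.
    return {U for U in opens if points - U in opens}

def preimage(f, Z):
    # S(f)(Z) = f^{-1}[Z], computed pointwise for a finite map f: Y -> X.
    return frozenset(y for y in f if f[y] in Z)

# X: the Sierpinski space on {0, 1}; its only clopen sets are {} and {0, 1}.
X = frozenset({0, 1})
opens_X = {frozenset(), frozenset({0}), X}

# Y: a two-point discrete space; every subset is clopen.
Y = frozenset({'a', 'b'})
opens_Y = {frozenset(), frozenset({'a'}), frozenset({'b'}), Y}

f = {'a': 0, 'b': 1}        # any map out of a discrete space is continuous
SX, SY = clopens(X, opens_X), clopens(Y, opens_Y)

for Z in SX:
    assert preimage(f, Z) in SY                  # lands in the clopens of Y
    for W in SX:                                 # Boolean homomorphism:
        assert preimage(f, Z | W) == preimage(f, Z) | preimage(f, W)
        assert preimage(f, Z & W) == preimage(f, Z) & preimage(f, W)
print("S(f) is a Boolean homomorphism on clopen sets")
```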
Stone algebra

In mathematics, a Stone algebra, or Stone lattice, is a pseudo-complemented distributive lattice such that a* ∨ a** = 1. They were introduced by Grätzer & Schmidt (1957) and named after Marshall Harvey Stone. Boolean algebras are Stone algebras, and Stone algebras are Ockham algebras.

Examples:
• The open-set lattice of an extremally disconnected space is a Stone algebra.
• The lattice of positive divisors of a given positive integer is a Stone lattice.

See also
• De Morgan algebra
• Heyting algebra

References
• Balbes, Raymond (1970), "A survey of Stone algebras", Proceedings of the Conference on Universal Algebra (Queen's Univ., Kingston, Ont., 1969), Kingston, Ont.: Queen's Univ., pp. 148–170, MR 0260638
• Fofanova, T. S. (2001) [1994], "Stone lattice", Encyclopedia of Mathematics, EMS Press
• Grätzer, George; Schmidt, E. T. (1957), "On a problem of M. H. Stone", Acta Mathematica Academiae Scientiarum Hungaricae, 8: 455–460, doi:10.1007/BF02020328, ISSN 0001-5954, MR 0092763
• Grätzer, George (1971), Lattice Theory: First Concepts and Distributive Lattices, W. H. Freeman and Co., ISBN 978-0-486-47173-0, MR 0321817
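The divisor example can be checked mechanically. In the lattice of divisors of n ordered by divisibility, meet is gcd, join is lcm, the top element (the "1" of the identity a* ∨ a** = 1) is the integer n itself, and the pseudocomplement a* is the largest divisor of n coprime to a. A small Python sketch (ours, with hypothetical helper names):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

def pseudocomplement(a, n):
    # a* = largest divisor of n coprime to a; this set is closed under lcm,
    # so its numerical maximum is also its maximum in the divisibility order.
    return max(d for d in divisors(n) if gcd(a, d) == 1)

n = 360                        # top element of the divisor lattice of 360
for a in divisors(n):
    s = pseudocomplement(a, n)
    ss = pseudocomplement(s, n)
    assert lcm(s, ss) == n     # Stone identity: a* v a** = top
print("Stone identity holds for all divisors of", n)
```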
Stone space

In topology and related areas of mathematics, a Stone space, also known as a profinite space[1] or profinite set, is a compact totally disconnected Hausdorff space.[2] Stone spaces are named after Marshall Harvey Stone, who introduced and studied them in the 1930s in the course of his investigation of Boolean algebras, which culminated in his representation theorem for Boolean algebras.

Equivalent conditions

The following conditions on the topological space $X$ are equivalent:[2][1]
• $X$ is a Stone space;
• $X$ is homeomorphic to the projective limit (in the category of topological spaces) of an inverse system of finite discrete spaces;
• $X$ is compact and totally separated;
• $X$ is compact, T0, and zero-dimensional (in the sense of the small inductive dimension);
• $X$ is coherent and Hausdorff.

Examples

Important examples of Stone spaces include finite discrete spaces, the Cantor set and the space $\mathbb{Z}_{p}$ of $p$-adic integers, where $p$ is any prime number. Generalizing these examples, any product of finite discrete spaces is a Stone space, and the topological space underlying any profinite group is a Stone space. The Stone–Čech compactification of the natural numbers with the discrete topology, or indeed of any discrete space, is a Stone space.

Stone's representation theorem for Boolean algebras

Main article: Stone's representation theorem for Boolean algebras

To every Boolean algebra $B$ we can associate a Stone space $S(B)$ as follows: the elements of $S(B)$ are the ultrafilters on $B,$ and the topology on $S(B),$ called the Stone topology, is generated by the sets of the form $\{F\in S(B):b\in F\},$ where $b\in B.$ Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to the Boolean algebra of clopen sets of the Stone space $S(B)$; furthermore, every Stone space $X$ is homeomorphic to the Stone space belonging to the Boolean algebra of clopen sets of $X.$ These assignments are functorial, and we obtain a category-theoretic duality between the category of Boolean algebras (with homomorphisms as morphisms) and the category of Stone spaces (with continuous maps as morphisms). Stone's theorem gave rise to a number of similar dualities, now collectively known as Stone dualities.

Condensed mathematics

The category of Stone spaces with continuous maps is equivalent to the pro-category of the category of finite sets, which explains the term "profinite sets". The profinite sets are at the heart of the project of condensed mathematics, which aims to replace topological spaces with "condensed sets", where a topological space X is replaced by the functor that takes a profinite set S to the set of continuous maps from S to X.[3]

See also
• Stone–Čech compactification § Construction using ultrafilters
• Filters in topology – use of filters to describe and characterize all basic topological notions and results
• Type (model theory)

References
1. Stone space at the nLab
2. "Stone space", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
3. Scholze, Peter (2020-12-05). "Liquid tensor experiment". Xena.

Further reading
• Johnstone, Peter (1982). Stone Spaces. Cambridge Studies in Advanced Mathematics. Vol. 3. Cambridge University Press. ISBN 0-521-33779-8.
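For a finite Boolean algebra the representation theorem can be seen concretely: every ultrafilter is principal, generated by an atom, so $S(B)$ is a finite discrete space, and its clopen algebra (all subsets) has exactly $|B|$ elements. The Python sketch below (our illustration; brute force, not a library API) enumerates the ultrafilters on $B=P(\{1,2,3\})$.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_ultrafilter(F, B, top, bottom):
    # Proper filter on the finite Boolean algebra B that decides every element.
    if bottom in F or top not in F:
        return False
    for a in F:
        for b in F:
            if a & b not in F:                 # closed under meets
                return False
        for b in B:
            if a <= b and b not in F:          # upward closed
                return False
    return all((b in F) != (top - b in F) for b in B)

S = frozenset({1, 2, 3})
B = powerset(S)                                # the Boolean algebra P(S)
ultrafilters = [F for F in powerset(B)
                if is_ultrafilter(F, B, S, frozenset())]

for F in ultrafilters:
    assert len(min(F, key=len)) == 1           # principal, generated by an atom
assert 2 ** len(ultrafilters) == len(B)        # clopens of S(B) recover B
print(len(ultrafilters), "ultrafilters, one per atom of B")
```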
Stone's theorem on one-parameter unitary groups

In mathematics, Stone's theorem on one-parameter unitary groups is a basic theorem of functional analysis that establishes a one-to-one correspondence between self-adjoint operators on a Hilbert space ${\mathcal{H}}$ and one-parameter families $(U_{t})_{t\in\mathbb{R}}$ of unitary operators that are strongly continuous, i.e.,

$\forall t_{0}\in\mathbb{R},\ \psi\in{\mathcal{H}}:\qquad \lim_{t\to t_{0}}U_{t}(\psi)=U_{t_{0}}(\psi),$

and are homomorphisms, i.e.,

$\forall s,t\in\mathbb{R}:\qquad U_{t+s}=U_{t}U_{s}.$

Such one-parameter families are ordinarily referred to as strongly continuous one-parameter unitary groups.

The theorem was proved by Marshall Stone (1930, 1932), and John von Neumann (1932) showed that the requirement that $(U_{t})_{t\in\mathbb{R}}$ be strongly continuous can be relaxed to say that it is merely weakly measurable, at least when the Hilbert space is separable. This is an impressive result, as it allows one to define the derivative of the mapping $t\mapsto U_{t},$ which is only supposed to be continuous. It is also related to the theory of Lie groups and Lie algebras.

Formal statement

The statement of the theorem is as follows.[1]

Theorem. Let $(U_{t})_{t\in\mathbb{R}}$ be a strongly continuous one-parameter unitary group. Then there exists a unique (possibly unbounded) operator $A:{\mathcal{D}}_{A}\to{\mathcal{H}}$, self-adjoint on ${\mathcal{D}}_{A}$, such that

$\forall t\in\mathbb{R}:\qquad U_{t}=e^{itA}.$

The domain of $A$ is defined by

${\mathcal{D}}_{A}=\left\{\psi\in{\mathcal{H}}\,\left|\,\lim_{\varepsilon\to 0}{\frac{-i}{\varepsilon}}\left(U_{\varepsilon}(\psi)-\psi\right){\text{ exists}}\right.\right\}.$

Conversely, let $A:{\mathcal{D}}_{A}\to{\mathcal{H}}$ be a (possibly unbounded) self-adjoint operator on ${\mathcal{D}}_{A}\subseteq{\mathcal{H}}.$ Then the one-parameter family $(U_{t})_{t\in\mathbb{R}}$ of unitary operators defined by

$\forall t\in\mathbb{R}:\qquad U_{t}:=e^{itA}$

is a strongly continuous one-parameter group.

In both parts of the theorem, the expression $e^{itA}$ is defined by means of the spectral theorem for unbounded self-adjoint operators. The operator $A$ is called the infinitesimal generator of $(U_{t})_{t\in\mathbb{R}}.$ Furthermore, $A$ will be a bounded operator if and only if the operator-valued mapping $t\mapsto U_{t}$ is norm-continuous.

The infinitesimal generator $A$ of a strongly continuous unitary group $(U_{t})_{t\in\mathbb{R}}$ may be computed as

$A\psi=-i\lim_{\varepsilon\to 0}{\frac{U_{\varepsilon}\psi-\psi}{\varepsilon}},$

with the domain of $A$ consisting of those vectors $\psi$ for which the limit exists in the norm topology. That is to say, $A$ is equal to $-i$ times the derivative of $U_{t}$ with respect to $t$ at $t=0$. Part of the statement of the theorem is that this derivative exists—i.e., that $A$ is a densely defined self-adjoint operator. The result is not obvious even in the finite-dimensional case, since $U_{t}$ is only assumed (ahead of time) to be continuous, and not differentiable.
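In finite dimensions both directions of the theorem are easy to see numerically. The sketch below (ours; it assumes NumPy and SciPy are available) builds $U_{t}=e^{itA}$ from a random Hermitian matrix, checks the group law and unitarity, and recovers the generator via the difference quotient above.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                 # a self-adjoint (Hermitian) matrix

def U(t):
    return expm(1j * t * A)              # U_t = e^{itA}

s, t = 0.3, -1.1
assert np.allclose(U(s) @ U(t), U(s + t))            # homomorphism property
assert np.allclose(U(t).conj().T @ U(t), np.eye(4))  # unitarity

# Recover the generator: A = -i (d/dt) U_t at t = 0 (finite-difference check).
eps = 1e-6
A_approx = -1j * (U(eps) - np.eye(4)) / eps
print(np.max(np.abs(A_approx - A)))      # small, O(eps)
```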
Example

The family of translation operators $\left[T_{t}(\psi)\right](x)=\psi(x+t)$ is a strongly continuous one-parameter group of unitary operators; the infinitesimal generator of this family is an extension of the differential operator $-i{\frac{d}{dx}}$ defined on the space of continuously differentiable complex-valued functions with compact support on $\mathbb{R}.$ Thus $T_{t}=e^{t{\frac{d}{dx}}}.$ In other words, motion on the line is generated by the momentum operator.

Applications

Stone's theorem has numerous applications in quantum mechanics. For instance, given an isolated quantum mechanical system with Hilbert space of states ${\mathcal{H}}$, time evolution is a strongly continuous one-parameter unitary group on ${\mathcal{H}}$. The infinitesimal generator of this group is the system Hamiltonian.

Further information: Stone–von Neumann theorem and Heisenberg group

Using the Fourier transform

Stone's theorem can be recast using the language of the Fourier transform. The real line $\mathbb{R}$ is a locally compact abelian group. Non-degenerate *-representations of the group C*-algebra $C^{*}(\mathbb{R})$ are in one-to-one correspondence with strongly continuous unitary representations of $\mathbb{R},$ i.e., strongly continuous one-parameter unitary groups. On the other hand, the Fourier transform is a *-isomorphism from $C^{*}(\mathbb{R})$ to $C_{0}(\mathbb{R}),$ the $C^{*}$-algebra of continuous complex-valued functions on the real line that vanish at infinity. Hence, there is a one-to-one correspondence between strongly continuous one-parameter unitary groups and *-representations of $C_{0}(\mathbb{R}).$ As every *-representation of $C_{0}(\mathbb{R})$ corresponds uniquely to a self-adjoint operator, Stone's theorem holds.

Therefore, the procedure for obtaining the infinitesimal generator of a strongly continuous one-parameter unitary group is as follows:
• Let $(U_{t})_{t\in\mathbb{R}}$ be a strongly continuous unitary representation of $\mathbb{R}$ on a Hilbert space ${\mathcal{H}}$.
• Integrate this unitary representation to yield a non-degenerate *-representation $\rho$ of $C^{*}(\mathbb{R})$ on ${\mathcal{H}}$ by first defining $\rho(f):=\int_{\mathbb{R}}f(t)\,U_{t}\,dt$ for all $f\in C_{c}(\mathbb{R}),$ and then extending $\rho$ to all of $C^{*}(\mathbb{R})$ by continuity. (A finite-dimensional sanity check of this integration step appears below.)
• Use the Fourier transform to obtain a non-degenerate *-representation $\tau$ of $C_{0}(\mathbb{R})$ on ${\mathcal{H}}$.
• By the Riesz–Markov theorem, $\tau$ gives rise to a projection-valued measure on $\mathbb{R}$ that is the resolution of the identity of a unique self-adjoint operator $A$, which may be unbounded.
• Then $A$ is the infinitesimal generator of $(U_{t})_{t\in\mathbb{R}}.$

The precise definition of $C^{*}(\mathbb{R})$ is as follows. Consider the *-algebra $C_{c}(\mathbb{R}),$ the continuous complex-valued functions on $\mathbb{R}$ with compact support, where the multiplication is given by convolution. The completion of this *-algebra with respect to the $L^{1}$-norm is a Banach *-algebra, denoted by $(L^{1}(\mathbb{R}),\star).$ Then $C^{*}(\mathbb{R})$ is defined to be the enveloping $C^{*}$-algebra of $(L^{1}(\mathbb{R}),\star)$, i.e., its completion with respect to the largest possible $C^{*}$-norm.
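The integration step above can be sanity-checked in finite dimensions. The following sketch (ours; it assumes NumPy/SciPy, replaces the unitary group by a finite-dimensional caricature $U_{t}=e^{itA}$ for a diagonal matrix $A$, and truncates the integral over $\mathbb{R}$) verifies that $\rho(f)=\int f(t)\,U_{t}\,dt$ acts as a Fourier transform of $f$ evaluated on the generator. An even $f$ is chosen so the result does not depend on the sign convention for the transform.

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([0.5, -1.3, 2.0])        # a self-adjoint generator (diagonal here)
ts = np.linspace(-12.0, 12.0, 4001)  # truncation of the real line
dt = ts[1] - ts[0]

def f(t):
    return np.exp(-t ** 2)           # even, rapidly decaying test function

# rho(f) = \int f(t) U_t dt with U_t = e^{itA}, approximated by a Riemann sum.
rho_f = sum(f(t) * expm(1j * t * A) for t in ts) * dt

# Gaussian integral: \int e^{-t^2} e^{i t x} dt = sqrt(pi) e^{-x^2/4}, so
# rho(f) should equal that function applied (spectrally) to the generator A.
expected = np.sqrt(np.pi) * expm(-(A @ A) / 4)
print(np.max(np.abs(rho_f - expected)))   # small discretization error
```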
It is a non-trivial fact that, via the Fourier transform, $C^{*}(\mathbb{R})$ is isomorphic to $C_{0}(\mathbb{R}).$ A result in this direction is the Riemann–Lebesgue lemma, which says that the Fourier transform maps $L^{1}(\mathbb{R})$ to $C_{0}(\mathbb{R}).$

Generalizations

The Stone–von Neumann theorem generalizes Stone's theorem to a pair of self-adjoint operators, $(P,Q)$, satisfying the canonical commutation relation, and shows that these are all unitarily equivalent to the position operator and momentum operator on $L^{2}(\mathbb{R}).$

The Hille–Yosida theorem generalizes Stone's theorem to strongly continuous one-parameter semigroups of contractions on Banach spaces.

References
1. Hall 2013, Theorem 10.15

Bibliography
• Hall, B. C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, ISBN 978-1461471158
• von Neumann, John (1932), "Über einen Satz von Herrn M. H. Stone", Annals of Mathematics, Second Series (in German), 33 (3): 567–573, doi:10.2307/1968535, ISSN 0003-486X, JSTOR 1968535
• Stone, M. H. (1930), "Linear Transformations in Hilbert Space. III. Operational Methods and Group Theory", Proceedings of the National Academy of Sciences of the United States of America, 16 (2): 172–175, doi:10.1073/pnas.16.2.172, ISSN 0027-8424, JSTOR 85485, PMC 1075964, PMID 16587545
• Stone, M. H. (1932), "On one-parameter unitary groups in Hilbert space", Annals of Mathematics, 33 (3): 643–648, doi:10.2307/1968538, JSTOR 1968538
• Yosida, K. (1968), Functional Analysis, Springer-Verlag
Extremally disconnected space

In mathematics, an extremally disconnected space is a topological space in which the closure of every open set is open. (The term "extremally disconnected" is correct, even though the word "extremally" does not appear in most dictionaries,[1] and is sometimes mistaken by spellcheckers for the homophone extremely disconnected.)

An extremally disconnected space that is also compact and Hausdorff is sometimes called a Stonean space. This is not the same as a Stone space, which is a totally disconnected compact Hausdorff space. Every Stonean space is a Stone space, but not vice versa. In the duality between Stone spaces and Boolean algebras, the Stonean spaces correspond to the complete Boolean algebras.

An extremally disconnected first-countable collectionwise Hausdorff space must be discrete. In particular, for metric spaces, the property of being extremally disconnected (the closure of every open set is open) is equivalent to the property of being discrete (every set is open).

Examples
• Every discrete space is extremally disconnected. Every indiscrete space is both extremally disconnected and connected.
• The Stone–Čech compactification of a discrete space is extremally disconnected.
• The spectrum of an abelian von Neumann algebra is extremally disconnected.
• Any commutative AW*-algebra is isomorphic to $C(X)$, where $X$ is extremally disconnected, compact and Hausdorff.
• Any infinite space with the cofinite topology is both extremally disconnected and connected. More generally, every hyperconnected space is extremally disconnected.
• The space on three points with base $\{\{x,y\},\{x,y,z\}\}$ provides a finite example of a space that is both extremally disconnected and connected. Another example is given by the Sierpiński space, since it is finite, connected, and hyperconnected.

Equivalent characterizations

A theorem due to Gleason (1958) says that the projective objects of the category of compact Hausdorff spaces are exactly the extremally disconnected compact Hausdorff spaces. A simplified proof of this fact is given by Rainwater (1959). A compact Hausdorff space is extremally disconnected if and only if it is a retract of the Stone–Čech compactification of a discrete space.[2]

Applications

Hartig (1983) proves the Riesz–Markov–Kakutani representation theorem by reducing it to the case of extremally disconnected spaces, in which case the representation theorem can be proved by elementary means.

See also
• Totally disconnected space

References
1. "extremally". Oxford English Dictionary (Online ed.). Oxford University Press.
2. Semadeni (1971, Thm. 24.7.1)
• Arkhangelskii, A. V. (2001) [1994], "Extremally-disconnected space", Encyclopedia of Mathematics, EMS Press
• Gleason, Andrew M. (1958), "Projective topological spaces", Illinois Journal of Mathematics, 2 (4A): 482–489, doi:10.1215/ijm/1255454110, MR 0121775
• Hartig, Donald G. (1983), "The Riesz representation theorem revisited", American Mathematical Monthly, 90 (4): 277–280, doi:10.2307/2975760, JSTOR 2975760
• Johnstone, Peter T. (1982). Stone Spaces. Cambridge University Press. ISBN 0-521-23893-5.
• Rainwater, John (1959), "A note on projective resolutions", Proceedings of the American Mathematical Society, 10 (5): 734–735, doi:10.2307/2033466, JSTOR 2033466
• Semadeni, Zbigniew (1971), Banach Spaces of Continuous Functions. Vol. I, PWN—Polish Scientific Publishers, Warsaw, MR 0296671
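For finite spaces the defining condition is directly checkable. The sketch below (ours; plain Python with made-up helper names) verifies the three-point example from the list above and contrasts it with a non-example.

```python
def closure(S, points, opens):
    # Smallest closed superset of S: intersect all closed sets containing S.
    result = points
    for C in (points - U for U in opens):
        if S <= C:
            result = result & C
    return result

def extremally_disconnected(points, opens):
    # Closure of every open set must again be open.
    return all(closure(U, points, opens) in opens for U in opens)

# Three-point example from the text: topology generated by {{x,y}, {x,y,z}}.
pts = frozenset('xyz')
opens = {frozenset(), frozenset('xy'), pts}
print(extremally_disconnected(pts, opens))   # True: connected, extr. disc.

# A non-example: closure of {x} is {x,z}, which is not open.
opens2 = {frozenset(), frozenset('x'), frozenset('y'), frozenset('xy'), pts}
print(extremally_disconnected(pts, opens2))  # False
```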
Stoneham number

In mathematics, the Stoneham numbers are a certain class of real numbers, named after mathematician Richard G. Stoneham (1920–1996). For coprime numbers b, c > 1, the Stoneham number αb,c is defined as

$\alpha_{b,c}=\sum_{n=c^{k}>1}{\frac{1}{b^{n}n}}=\sum_{k=1}^{\infty}{\frac{1}{b^{c^{k}}c^{k}}}$

It was shown by Stoneham in 1973 that αb,c is b-normal whenever c is an odd prime and b is a primitive root of c2. In 2002, Bailey & Crandall showed that coprimality of b, c > 1 is sufficient for b-normality of αb,c.[1]

References
1. Bailey, David H.; Crandall, Richard E. (2002). "Random Generators and Normal Numbers". Experimental Mathematics. 11 (4): 527–546. doi:10.1080/10586458.2002.10504704. S2CID 8944421.
• Bugeaud, Yann (2012). Distribution Modulo One and Diophantine Approximation. Cambridge Tracts in Mathematics. Vol. 193. Cambridge: Cambridge University Press. ISBN 978-0-521-11169-0. Zbl 1260.11001.
• Stoneham, R. G. (1973). "On absolute $(j,\varepsilon)$-normality in the rational fractions with applications to normal numbers". Acta Arithmetica. 22 (3): 277–286. doi:10.4064/aa-22-3-277-286. Zbl 0276.10028.
• Stoneham, R. G. (1973). "On the uniform ε-distribution of residues within the periods of rational fractions with applications to normal numbers". Acta Arithmetica. 22 (4): 371–389. doi:10.4064/aa-22-4-371-389. Zbl 0276.10029.
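Because the series converges extremely fast (the k-th term is smaller than $b^{-c^{k}}$), αb,c is easy to compute exactly with rational arithmetic. A small Python sketch (ours) computes α2,3 and prints its initial binary digits, the base in which it is provably normal:

```python
from fractions import Fraction

def stoneham(b, c, terms=6):
    # alpha_{b,c} = sum_{k>=1} 1 / (b^{c^k} * c^k), summed exactly; the tail
    # after `terms` terms is far smaller than any digit printed below.
    return sum(Fraction(1, b ** (c ** k) * c ** k) for k in range(1, terms + 1))

def digits(x, base, n):
    # First n base-`base` digits of a rational x in (0, 1).
    out = []
    for _ in range(n):
        x *= base
        d = int(x)
        out.append(d)
        x -= d
    return out

alpha = stoneham(2, 3)            # alpha_{2,3}, proven 2-normal
print(digits(alpha, 2, 60))       # its first 60 binary digits
```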
Stone–von Neumann theorem

In mathematics and in theoretical physics, the Stone–von Neumann theorem refers to any one of a number of different formulations of the uniqueness of the canonical commutation relations between position and momentum operators. It is named after Marshall Stone and John von Neumann.[1][2][3][4]

Representation issues of the commutation relations

In quantum mechanics, physical observables are represented mathematically by linear operators on Hilbert spaces. For a single particle moving on the real line $\mathbb{R}$, there are two important observables: position and momentum. In the Schrödinger representation of such a particle, the position operator $x$ and momentum operator $p$ are respectively given by

${\begin{aligned}[][x\psi](x_{0})&=x_{0}\psi(x_{0})\\[][p\psi](x_{0})&=-i\hbar{\frac{\partial\psi}{\partial x}}(x_{0})\end{aligned}}$

on the domain $V$ of infinitely differentiable functions of compact support on $\mathbb{R}$. Assume $\hbar$ to be a fixed non-zero real number—in quantum theory $\hbar$ is the reduced Planck constant, which carries units of action (energy times time). The operators $x$, $p$ satisfy the canonical commutation relation

$[x,p]=xp-px=i\hbar.$

Already in his classic book,[5] Hermann Weyl observed that this commutation law was impossible to satisfy for linear operators p, x acting on finite-dimensional spaces unless ℏ vanishes. This is apparent from taking the trace over both sides of the latter equation and using the relation Trace(AB) = Trace(BA); the left-hand side is zero, the right-hand side is non-zero. Further analysis shows that any two self-adjoint operators satisfying the above commutation relation cannot both be bounded (in fact, a theorem of Wielandt shows the relation cannot be satisfied by elements of any normed algebra[note 1]). For notational convenience, the nonvanishing square root of ℏ may be absorbed into the normalization of p and x, so that, effectively, it is replaced by 1. We assume this normalization in what follows.

The idea of the Stone–von Neumann theorem is that any two irreducible representations of the canonical commutation relations are unitarily equivalent. Since, however, the operators involved are necessarily unbounded (as noted above), there are tricky domain issues that allow for counter-examples.[6]: Example 14.5  To obtain a rigorous result, one must require that the operators satisfy the exponentiated form of the canonical commutation relations, known as the Weyl relations. The exponentiated operators are bounded and unitary. Although, as noted below, these relations are formally equivalent to the standard canonical commutation relations, this equivalence is not rigorous, because (again) of the unbounded nature of the operators. (There is also a discrete analog of the Weyl relations, which can hold in a finite-dimensional space,[6]: Chapter 14, Exercise 5  namely Sylvester's clock and shift matrices in the finite Heisenberg group, discussed below.)

Uniqueness of representation

One would like to classify representations of the canonical commutation relation by two self-adjoint operators acting on separable Hilbert spaces, up to unitary equivalence. By Stone's theorem, there is a one-to-one correspondence between self-adjoint operators and (strongly continuous) one-parameter unitary groups. Let Q and P be two self-adjoint operators satisfying the canonical commutation relation, [Q, P] = i, and s and t two real parameters.
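Before exponentiating, here is a quick numerical aside (ours; it assumes NumPy) illustrating Weyl's trace argument from above: for any pair of n×n matrices the trace of the commutator vanishes, while the trace of iℏ·I is iℏn, so the CCR has no exact finite-dimensional solutions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, hbar = 5, 1.0
M1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = (M1 + M1.conj().T) / 2               # arbitrary Hermitian candidates
p = (M2 + M2.conj().T) / 2

comm = x @ p - p @ x
print(np.trace(comm))                    # ~0 up to rounding: Tr(AB) = Tr(BA)
print(np.trace(1j * hbar * np.eye(n)))   # i*hbar*n, which is nonzero
# Hence no finite matrices can satisfy [x, p] = i*hbar*I exactly.
```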
Introduce $e^{itQ}$ and $e^{isP}$, the corresponding unitary groups given by functional calculus. (For the explicit operators x and p defined above, these are multiplication by $e^{itx}$ and pullback by translation $x\to x+s$; explicitly, $[U(t)\psi](x)=e^{itx}\psi(x)$ and $[V(s)\psi](x)=\psi(x+s)$.) A formal computation[6]: Section 14.2  (using a special case of the Baker–Campbell–Hausdorff formula) readily yields

$e^{itQ}e^{isP}=e^{-ist}e^{isP}e^{itQ}.$

Conversely, given two one-parameter unitary groups U(t) and V(s) satisfying the braiding relation

$U(t)V(s)=e^{-ist}V(s)U(t)\qquad\forall s,t,$   (E1)

formally differentiating at 0 shows that the two infinitesimal generators satisfy the above canonical commutation relation. This braiding formulation of the canonical commutation relations (CCR) for one-parameter unitary groups is called the Weyl form of the CCR.

It is important to note that the preceding derivation is purely formal. Since the operators involved are unbounded, technical issues prevent application of the Baker–Campbell–Hausdorff formula without additional domain assumptions. Indeed, there exist operators satisfying the canonical commutation relation but not the Weyl relations (E1).[6]: Example 14.5  Nevertheless, in "good" cases, we expect that operators satisfying the canonical commutation relation will also satisfy the Weyl relations.

The problem thus becomes classifying two jointly irreducible one-parameter unitary groups U(t) and V(s) which satisfy the Weyl relation on separable Hilbert spaces. The answer is the content of the Stone–von Neumann theorem: all such pairs of one-parameter unitary groups are unitarily equivalent.[6]: Theorem 14.8  In other words, for any two such U(t) and V(s) acting jointly irreducibly on a Hilbert space H, there is a unitary operator W : L2(R) → H so that

$W^{*}U(t)W=e^{itx}\quad{\text{and}}\quad W^{*}V(s)W=e^{isp},$

where p and x are the explicit position and momentum operators from earlier. In particular, since $e^{-itQ}Pe^{itQ}=P+t$, the operator P is unitarily equivalent to P + t for every t, so the spectrum of P must range along the entire real line; the analogous argument holds for Q. There is also a straightforward extension of the Stone–von Neumann theorem to n degrees of freedom.[6]: Theorem 14.8 

Historically, this result was significant, because it was a key step in proving that Heisenberg's matrix mechanics, which presents quantum mechanical observables and dynamics in terms of infinite matrices, is unitarily equivalent to Schrödinger's wave mechanical formulation (see Schrödinger picture).

See also: Generalizations of Pauli matrices § Construction: The clock and shift matrices

Representation theory formulation

In terms of representation theory, the Stone–von Neumann theorem classifies certain unitary representations of the Heisenberg group. This is discussed in more detail in the Heisenberg group section, below. Informally stated, with certain technical assumptions, every representation of the Heisenberg group H2n+1 is equivalent to the position operators and momentum operators on Rn. Alternatively, they are all equivalent to the Weyl algebra (or CCR algebra) on a symplectic space of dimension 2n. More formally, there is a unique (up to scale) non-trivial central strongly continuous unitary representation. This was later generalized by Mackey theory, and was the motivation for the introduction of the Heisenberg group in quantum physics. (The discrete Weyl relation satisfied by the clock and shift matrices is verified in the sketch that follows.)
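For the discrete analog mentioned above, Sylvester's clock and shift matrices give an honest finite-dimensional realization of the braiding relation (E1), with the continuous phase $e^{-ist}$ replaced by a root of unity. A short sketch (ours; assumes NumPy):

```python
import numpy as np

n = 8
omega = np.exp(2j * np.pi / n)
Q = np.diag(omega ** np.arange(n))         # clock matrix: Q_{kk} = omega^k
P = np.zeros((n, n), dtype=complex)        # shift matrix: (P psi)_k = psi_{k+1}
P[np.arange(n), (np.arange(n) + 1) % n] = 1

# Discrete Weyl (braiding) relation P Q = omega Q P; both matrices are unitary.
assert np.allclose(P @ Q, omega * Q @ P)
assert np.allclose(P.conj().T @ P, np.eye(n))
assert np.allclose(Q.conj().T @ Q, np.eye(n))
print("clock and shift satisfy the braided Weyl relation")
```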
In detail, the Heisenberg-type groups mentioned above are the following:
• The continuous Heisenberg group is a central extension of the abelian Lie group R2n by a copy of R,
• the corresponding Heisenberg algebra is a central extension of the abelian Lie algebra R2n (with trivial bracket) by a copy of R,
• the discrete Heisenberg group is a central extension of the free abelian group Z2n by a copy of Z, and
• the discrete Heisenberg group modulo p is a central extension of the free abelian p-group (Z/pZ)2n by a copy of Z/pZ.

In all cases, if one has a representation H2n+1 → A, where A is an algebra and the center maps to zero, then one simply has a representation of the corresponding abelian group or algebra, which is Fourier theory. If the center does not map to zero, one has a more interesting theory, particularly if one restricts oneself to central representations.

Concretely, by a central representation one means a representation such that the center of the Heisenberg group maps into the center of the algebra: for example, if one is studying matrix representations or representations by operators on a Hilbert space, then the center of the matrix algebra or the operator algebra consists of the scalar matrices. Thus the representation of the center of the Heisenberg group is determined by a scale value, called the quantization value (in physics terms, Planck's constant), and if this goes to zero, one gets a representation of the abelian group (in physics terms, this is the classical limit).

More formally, the group algebra of the Heisenberg group over its field of scalars K, written K[H], has center K[R], so rather than simply thinking of the group algebra as an algebra over the field K, one may think of it as an algebra over the commutative algebra K[R]. As the center of a matrix algebra or operator algebra consists of the scalar matrices, a K[R]-structure on the matrix algebra is a choice of scalar matrix – a choice of scale. Given such a choice of scale, a central representation of the Heisenberg group is a map of K[R]-algebras K[H] → A, which is the formal way of saying that it sends the center to a chosen scale.

Then the Stone–von Neumann theorem is that, given the standard quantum mechanical scale (effectively, the value of ħ), every strongly continuous unitary representation is unitarily equivalent to the standard representation with position and momentum.

Reformulation via Fourier transform

Let G be a locally compact abelian group and G^ be the Pontryagin dual of G. The Fourier–Plancherel transform defined by

$f\mapsto{\hat{f}}(\gamma)=\int_{G}{\overline{\gamma(t)}}f(t)\,d\mu(t)$

extends to a C*-isomorphism from the group C*-algebra C*(G) of G to C0(G^), i.e. the spectrum of C*(G) is precisely G^. When G is the real line R, this is Stone's theorem characterizing one-parameter unitary groups.

The theorem of Stone–von Neumann can also be restated using similar language. The group G acts on the C*-algebra C0(G) by right translation ρ: for s in G and f in C0(G),

$(s\cdot f)(t)=f(t+s).$

Under the isomorphism given above, this action becomes the natural action of G on C*(G^):

${\widehat{(s\cdot f)}}(\gamma)=\gamma(s){\hat{f}}(\gamma).$

So a covariant representation corresponding to the C*-crossed product

$C^{*}\left({\hat{G}}\right)\rtimes_{\hat{\rho}}G$

is a unitary representation U(s) of G and V(γ) of G^ such that

$U(s)V(\gamma)U^{*}(s)=\gamma(s)V(\gamma).$

It is a general fact that covariant representations are in one-to-one correspondence with *-representations of the corresponding crossed product.
On the other hand, the crossed product $C_{0}(G)\rtimes_{\rho}G$ is isomorphic to ${\mathcal{K}}\left(L^{2}(G)\right)$, the algebra of compact operators on $L^{2}(G)$, so all of its irreducible representations are unitarily equivalent. Therefore, all pairs {U(s), V(γ)} are unitarily equivalent. Specializing to the case where G = R yields the Stone–von Neumann theorem.

The Heisenberg group

The above canonical commutation relations for P, Q are identical to the commutation relations that specify the Lie algebra of the general Heisenberg group H2n+1 for n a positive integer. This is the Lie group of (n + 2) × (n + 2) square matrices of the form

$\mathrm{M}(a,b,c)={\begin{bmatrix}1&a&c\\0&1_{n}&b\\0&0&1\end{bmatrix}}.$

In fact, using the Heisenberg group, one can reformulate the Stone–von Neumann theorem in the language of representation theory.

Note that the center of H2n+1 consists of matrices M(0, 0, c). However, this center is not the identity operator in Heisenberg's original CCRs. The Heisenberg group Lie algebra generators, e.g. for n = 1, are

${\begin{aligned}P&={\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}},&Q&={\begin{bmatrix}0&0&0\\0&0&1\\0&0&0\end{bmatrix}},&z&={\begin{bmatrix}0&0&1\\0&0&0\\0&0&0\end{bmatrix}},\end{aligned}}$

and the central generator z = log M(0, 0, 1) = M(0, 0, 1) − 1 is not the identity.

Theorem — For each non-zero real number h there is an irreducible representation Uh acting on the Hilbert space L2(Rn) by

$\left[U_{h}(\mathrm{M}(a,b,c))\right]\psi(x)=e^{i(b\cdot x+hc)}\psi(x+ha).$

All these representations are unitarily inequivalent; and any irreducible representation which is not trivial on the center of Hn is unitarily equivalent to exactly one of these.

Note that Uh is a unitary operator because it is the composition of two operators which are easily seen to be unitary: the translation to the left by ha and multiplication by a function of absolute value 1. To show Uh is multiplicative is a straightforward calculation. The hard part of the theorem is showing the uniqueness; this claim, nevertheless, follows easily from the Stone–von Neumann theorem as stated above. We will sketch below a proof of the corresponding Stone–von Neumann theorem for certain finite Heisenberg groups. In particular, irreducible representations π, π′ of the Heisenberg group Hn which are non-trivial on the center of Hn are unitarily equivalent if and only if π(z) = π′(z) for any z in the center of Hn.

One representation of the Heisenberg group which is important in number theory and the theory of modular forms is the theta representation, so named because the Jacobi theta function is invariant under the action of the discrete subgroup of the Heisenberg group.

Relation to the Fourier transform

For any non-zero h, the mapping

$\alpha_{h}:\mathrm{M}(a,b,c)\to\mathrm{M}\left(-h^{-1}b,ha,c-a\cdot b\right)$

is an automorphism of Hn which is the identity on the center of Hn. In particular, the representations Uh and Uh ∘ αh are unitarily equivalent. This means that there is a unitary operator W on L2(Rn) such that, for any g in Hn,

$WU_{h}(g)W^{*}=U_{h}(\alpha_{h}(g)).$

Moreover, by irreducibility of the representations Uh, it follows that up to a scalar, such an operator W is unique (cf. Schur's lemma). Since W is unitary, this scalar multiple is uniquely determined and hence such an operator W is unique.

Theorem — The operator W is the Fourier transform on L2(Rn).
This means that, ignoring the factor of (2π)n/2 in the definition of the Fourier transform,

$\int_{\mathbf{R}^{n}}e^{-ix\cdot p}e^{i(b\cdot x+hc)}\psi(x+ha)\,dx=e^{i(ha\cdot p+h(c-b\cdot a))}\int_{\mathbf{R}^{n}}e^{-iy\cdot(p-b)}\psi(y)\,dy.$

This theorem has the immediate implication that the Fourier transform is unitary, also known as the Plancherel theorem. Moreover,

$(\alpha_{h})^{2}\mathrm{M}(a,b,c)=\mathrm{M}(-a,-b,c).$

Theorem — The operator W1 such that $W_{1}U_{h}(g)W_{1}^{*}=U_{h}(\alpha_{h}^{2}(g))$ is the reflection operator $[W_{1}\psi](x)=\psi(-x).$

From this fact the Fourier inversion formula easily follows.

Example: the Segal–Bargmann space

The Segal–Bargmann space is the space of holomorphic functions on Cn that are square-integrable with respect to a Gaussian measure. Fock observed in the 1920s that the operators

$a_{j}={\frac{\partial}{\partial z_{j}}},\qquad a_{j}^{*}=z_{j},$

acting on holomorphic functions, satisfy the same commutation relations as the usual annihilation and creation operators, namely,

$\left[a_{j},a_{k}^{*}\right]=\delta_{j,k}.$

In 1961, Bargmann showed that $a_{j}^{*}$ is actually the adjoint of $a_{j}$ with respect to the inner product coming from the Gaussian measure. By taking appropriate linear combinations of $a_{j}$ and $a_{j}^{*}$, one can then obtain "position" and "momentum" operators satisfying the canonical commutation relations. It is not hard to show that the exponentials of these operators satisfy the Weyl relations and that the exponentiated operators act irreducibly.[6]: Section 14.4  The Stone–von Neumann theorem therefore applies and implies the existence of a unitary map from L2(Rn) to the Segal–Bargmann space that intertwines the usual annihilation and creation operators with the operators $a_{j}$ and $a_{j}^{*}$. This unitary map is the Segal–Bargmann transform.

Representations of finite Heisenberg groups

The Heisenberg group Hn(K) is defined for any commutative ring K. In this section let us specialize to the field K = Z/pZ for p a prime. This field has the property that there is an embedding ω of K as an additive group into the circle group T. Note that Hn(K) is finite with cardinality |K|2n+1. For the finite Heisenberg group Hn(K) one can give a simple proof of the Stone–von Neumann theorem using simple properties of character functions of representations. These properties follow from the orthogonality relations for characters of representations of finite groups.

For any non-zero h in K define the representation Uh on the finite-dimensional inner product space ℓ2(Kn) by

$\left[U_{h}\mathrm{M}(a,b,c)\psi\right](x)=\omega(b\cdot x+hc)\psi(x+ha).$

Theorem — For a fixed non-zero h, the character function χ of Uh is given by

$\chi(\mathrm{M}(a,b,c))={\begin{cases}|K|^{n}\,\omega(hc)&{\text{if }}a=b=0\\0&{\text{otherwise}}\end{cases}}$

It follows that

${\frac{1}{\left|H_{n}(K)\right|}}\sum_{g\in H_{n}(K)}|\chi(g)|^{2}={\frac{1}{|K|^{2n+1}}}|K|^{2n}|K|=1.$

By the orthogonality relations for characters of representations of finite groups this fact implies the corresponding Stone–von Neumann theorem for Heisenberg groups Hn(Z/pZ), particularly:
• irreducibility of Uh;
• pairwise inequivalence of all the representations Uh.

Actually, all irreducible representations of Hn(K) on which the center acts nontrivially arise in this way.[6]: Chapter 14, Exercise 5 

Generalizations

The Stone–von Neumann theorem admits numerous generalizations.
Much of the early work of George Mackey was directed at extending[7] the theory of induced representations, developed originally by Frobenius for finite groups, to the context of unitary representations of locally compact topological groups.

See also
• Oscillator representation
• Wigner–Weyl transform
• CCR and CAR algebras (for bosons and fermions respectively)
• Segal–Bargmann space
• Moyal product
• Weyl algebra
• Stone's theorem on one-parameter unitary groups
• Hille–Yosida theorem
• C0-semigroup

Notes
1. $[x^{n},p]=i\hbar nx^{n-1}$, hence $2\|p\|\,\|x\|^{n}\geq n\hbar\,\|x\|^{n-1}$, so that $2\|p\|\,\|x\|\geq n\hbar$ for all $n$.

References
1. von Neumann, J. (1931), "Die Eindeutigkeit der Schrödingerschen Operatoren", Mathematische Annalen, 104: 570–578, doi:10.1007/BF01457956, ISSN 0025-5831, S2CID 120528257
2. von Neumann, J. (1932), "Über einen Satz von Herrn M. H. Stone", Annals of Mathematics, Second Series (in German), 33 (3): 567–573, doi:10.2307/1968535, ISSN 0003-486X, JSTOR 1968535
3. Stone, M. H. (1930), "Linear Transformations in Hilbert Space. III. Operational Methods and Group Theory", Proceedings of the National Academy of Sciences of the United States of America, 16 (2): 172–175, Bibcode:1930PNAS...16..172S, doi:10.1073/pnas.16.2.172, ISSN 0027-8424, JSTOR 85485, PMC 1075964, PMID 16587545
4. Stone, M. H. (1932), "On one-parameter unitary groups in Hilbert space", Annals of Mathematics, 33 (3): 643–648, doi:10.2307/1968538, JSTOR 1968538
5. Weyl, H. (1927), "Quantenmechanik und Gruppentheorie", Zeitschrift für Physik, 46: 1–46, doi:10.1007/BF02055756; Weyl, H., The Theory of Groups and Quantum Mechanics, Dover Publications, 1950, ISBN 978-1-163-18343-4.
6. Hall, B. C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, ISBN 978-1461471158
7. Mackey, G. W. (1976). The Theory of Unitary Group Representations. The University of Chicago Press.
• Kirillov, A. A. (1976), Elements of the Theory of Representations, Grundlehren der Mathematischen Wissenschaften, vol. 220, Berlin, New York: Springer-Verlag, ISBN 978-0-387-07476-4, MR 0407202
• Rosenberg, Jonathan (2004), "A Selective History of the Stone–von Neumann Theorem", Contemporary Mathematics 365, American Mathematical Society.
• Summers, Stephen J. (2001), "On the Stone–von Neumann Uniqueness Theorem and Its Ramifications", in John von Neumann and the Foundations of Quantum Physics, pp. 135–152, Springer, Dordrecht; available online.
Stone–Čech compactification

In the mathematical discipline of general topology, Stone–Čech compactification (or Čech–Stone compactification[1]) is a technique for constructing a universal map from a topological space X to a compact Hausdorff space βX. The Stone–Čech compactification βX of a topological space X is the largest, most general compact Hausdorff space "generated" by X, in the sense that any continuous map from X to a compact Hausdorff space factors through βX (in a unique way). If X is a Tychonoff space then the map from X to its image in βX is a homeomorphism, so X can be thought of as a (dense) subspace of βX; every other compact Hausdorff space that densely contains X is a quotient of βX. For general topological spaces X, the map from X to βX need not be injective.

A form of the axiom of choice is required to prove that every topological space has a Stone–Čech compactification. Even for quite simple spaces X, an accessible concrete description of βX often remains elusive. In particular, proofs that βX \ X is nonempty do not give an explicit description of any particular point in βX \ X.

The Stone–Čech compactification occurs implicitly in a paper by Andrey Nikolayevich Tychonoff (1930) and was given explicitly by Marshall Stone (1937) and Eduard Čech (1937).

History

Andrey Nikolayevich Tikhonov introduced completely regular spaces in 1930 in order to avoid the pathological situation of Hausdorff spaces whose only continuous real-valued functions are constant maps.[2] In the same 1930 article where Tychonoff defined completely regular spaces, he also proved that every Tychonoff space (i.e. Hausdorff completely regular space) has a Hausdorff compactification (in this same article, he also proved Tychonoff's theorem). In 1937, Čech extended Tychonoff's technique and introduced the notation βX for this compactification. Stone also constructed βX in a 1937 article, although using a very different method. Despite Tychonoff's article being the first work on the subject of the Stone–Čech compactification and despite Tychonoff's article being referenced by both Stone and Čech, Tychonoff's name is rarely associated with βX.[3]

Universal property and functoriality

The Stone–Čech compactification of the topological space X is a compact Hausdorff space βX together with a continuous map iX : X → βX that has the following universal property: any continuous map f : X → K, where K is a compact Hausdorff space, extends uniquely to a continuous map βf : βX → K, i.e. (βf)iX = f.[4] As is usual for universal properties, this universal property characterizes βX up to homeomorphism.

As is outlined in § Constructions, below, one can prove (using the axiom of choice) that such a Stone–Čech compactification iX : X → βX exists for every topological space X. Furthermore, the image iX(X) is dense in βX.

Some authors add the assumption that the starting space X be Tychonoff (or even locally compact Hausdorff), for the following reasons:
• The map from X to its image in βX is a homeomorphism if and only if X is Tychonoff.
• The map from X to its image in βX is a homeomorphism to an open subspace if and only if X is locally compact Hausdorff.

The Stone–Čech construction can be performed for more general spaces X, but in that case the map X → βX need not be a homeomorphism to the image of X (and sometimes is not even injective).
As is usual for universal constructions like this, the extension property makes β a functor from Top (the category of topological spaces) to CHaus (the category of compact Hausdorff spaces). Further, if we let U be the inclusion functor from CHaus into Top, maps from βX to K (for K in CHaus) correspond bijectively to maps from X to UK (by considering their restriction to X and using the universal property of βX). That is, Hom(βX, K) ≅ Hom(X, UK), which means that β is left adjoint to U. This implies that CHaus is a reflective subcategory of Top with reflector β.

Examples

If X is a compact Hausdorff space, then it coincides with its Stone–Čech compactification.[5] Most other Stone–Čech compactifications lack concrete descriptions and are extremely unwieldy. Exceptions include:
• The Stone–Čech compactification of the first uncountable ordinal $\omega_{1}$, with the order topology, is the ordinal $\omega_{1}+1$.
• The Stone–Čech compactification of the deleted Tychonoff plank is the Tychonoff plank.[6]

Constructions

Construction using products

One attempt to construct the Stone–Čech compactification of X is to take the closure of the image of X in

$\prod\nolimits_{f:X\to K}K$

where the product is over all maps from X to compact Hausdorff spaces K (or, equivalently, the image of X by the right Kan extension of the identity functor of the category CHaus of compact Hausdorff spaces along the inclusion functor of CHaus into the category Top of general topological spaces). By Tychonoff's theorem this product of compact spaces is compact, and the closure of X in this space is therefore also compact. This works intuitively but fails for the technical reason that the collection of all such maps is a proper class rather than a set. There are several ways to modify this idea to make it work; for example, one can restrict the compact Hausdorff spaces K to have underlying set P(P(X)) (the power set of the power set of X), which is sufficiently large that it has cardinality at least equal to that of every compact Hausdorff space to which X can be mapped with dense image.

Construction using the unit interval

One way of constructing βX is to let C be the set of all continuous functions from X into [0, 1] and consider the map $e:X\to[0,1]^{C}$ where

$e(x):f\mapsto f(x).$

This may be seen to be a continuous map onto its image, if [0, 1]C is given the product topology. By Tychonoff's theorem we have that [0, 1]C is compact since [0, 1] is. Consequently, the closure of X in [0, 1]C is a compactification of X.

In fact, this closure is the Stone–Čech compactification. To verify this, we just need to verify that the closure satisfies the appropriate universal property. We do this first for K = [0, 1], where the desired extension of f : X → [0, 1] is just the projection onto the f coordinate in [0, 1]C. In order to then get this for general compact Hausdorff K we use the above to note that K can be embedded in some cube, extend each of the coordinate functions and then take the product of these extensions.

The special property of the unit interval needed for this construction to work is that it is a cogenerator of the category of compact Hausdorff spaces: this means that if A and B are compact Hausdorff spaces, and f and g are distinct maps from A to B, then there is a map h : B → [0, 1] such that hf and hg are distinct. Any other cogenerator (or cogenerating set) can be used in this construction.
Construction using ultrafilters

See also: Stone topology and Filters in topology § Stone topology

Alternatively, if $X$ is discrete, then it is possible to construct $\beta X$ as the set of all ultrafilters on $X,$ with the elements of $X$ corresponding to the principal ultrafilters. The topology on the set of ultrafilters, known as the Stone topology, is generated by sets of the form $\{F:U\in F\}$ for $U$ a subset of $X.$

Again we verify the universal property: For $f:X\to K$ with $K$ compact Hausdorff and $F$ an ultrafilter on $X$ we have an ultrafilter base $f(F)$ on $K,$ the pushforward of $F.$ This has a unique limit because $K$ is compact Hausdorff, say $x,$ and we define $\beta f(F)=x.$ This may be verified to be a continuous extension of $f.$

Equivalently, one can take the Stone space of the complete Boolean algebra of all subsets of $X$ as the Stone–Čech compactification. This is really the same construction, as the Stone space of this Boolean algebra is the set of ultrafilters (or equivalently prime ideals, or homomorphisms to the two-element Boolean algebra) of the Boolean algebra, which is the same as the set of ultrafilters on $X.$

The construction can be generalized to arbitrary Tychonoff spaces by using maximal filters of zero sets instead of ultrafilters.[7] (Filters of closed sets suffice if the space is normal.)

Construction using C*-algebras

The Stone–Čech compactification is naturally homeomorphic to the spectrum of Cb(X).[8] Here Cb(X) denotes the C*-algebra of all continuous bounded complex-valued functions on X with sup-norm. Notice that Cb(X) is canonically isomorphic to the multiplier algebra of C0(X).

The Stone–Čech compactification of the natural numbers

In the case where X is locally compact, e.g. N or R, the image of X forms an open subset of βX, or indeed of any compactification (this is also a necessary condition, as an open subset of a compact Hausdorff space is locally compact). In this case one often studies the remainder of the space, βX \ X. This is a closed subset of βX, and so is compact. We consider N with its discrete topology and write βN \ N = N* (but this does not appear to be standard notation for general X).

As explained above, one can view βN as the set of ultrafilters on N, with the topology generated by sets of the form $\{F:U\in F\}$ for U a subset of N. The set N corresponds to the set of principal ultrafilters, and the set N* to the set of free ultrafilters.

The study of βN, and in particular N*, is a major area of modern set-theoretic topology. The major results motivating this are Parovicenko's theorems, essentially characterising its behaviour under the assumption of the continuum hypothesis. These state:
• Every compact Hausdorff space of weight at most $\aleph_{1}$ (see Aleph number) is the continuous image of N* (this does not need the continuum hypothesis, but is less interesting in its absence).
• If the continuum hypothesis holds then N* is the unique Parovicenko space, up to isomorphism.

These were originally proved by considering Boolean algebras and applying Stone duality.
Jan van Mill has described βN as a "three headed monster"—the three heads being a smiling and friendly head (the behaviour under the assumption of the continuum hypothesis), the ugly head of independence which constantly tries to confuse you (determining what behaviour is possible in different models of set theory), and the third head is the smallest of all (what you can prove about it in ZFC).[9] It has relatively recently been observed that this characterisation isn't quite right—there is in fact a fourth head of βN, in which forcing axioms and Ramsey-type axioms give properties of βN almost diametrically opposed to those under the continuum hypothesis, giving very few maps from N* indeed. Examples of these axioms include the combination of Martin's axiom and the open colouring axiom which, for example, prove that (N*)² ≠ N*, while the continuum hypothesis implies the opposite.

An application: the dual space of the space of bounded sequences of reals

The Stone–Čech compactification βN can be used to characterize $\ell^{\infty}(\mathbf{N})$ (the Banach space of all bounded sequences in the scalar field R or C, with supremum norm) and its dual space.

Given a bounded sequence $a\in\ell^{\infty}(\mathbf{N}),$ there exists a closed ball B in the scalar field that contains the image of a. Then a is a function from N to B. Since N is discrete and B is compact and Hausdorff, a is continuous. According to the universal property, there exists a unique extension βa : βN → B. This extension does not depend on the ball B we consider.

We have defined an extension map from the space of bounded scalar-valued sequences to the space of continuous functions over βN,

$\ell^{\infty}(\mathbf{N})\to C(\beta\mathbf{N}).$

This map is bijective since every function in C(βN) must be bounded and can then be restricted to a bounded scalar sequence. If we further consider both spaces with the sup norm, the extension map becomes an isometry. Indeed, if in the construction above we take the smallest possible ball B, we see that the sup norm of the extended sequence does not grow (although the image of the extended function can be bigger).

Thus, $\ell^{\infty}(\mathbf{N})$ can be identified with C(βN). This allows us to use the Riesz representation theorem and find that the dual space of $\ell^{\infty}(\mathbf{N})$ can be identified with the space of finite Borel measures on βN.

Finally, it should be noted that this technique generalizes to the L∞ space of an arbitrary measure space X. However, instead of simply considering the space βX of ultrafilters on X, the right way to generalize this construction is to consider the Stone space Y of the measure algebra of X: the spaces C(Y) and L∞(X) are isomorphic as C*-algebras as long as X satisfies a reasonable finiteness condition (that any set of positive measure contains a subset of finite positive measure).

A monoid operation on the Stone–Čech compactification of the naturals

The natural numbers form a monoid under addition. It turns out that this operation can be extended (generally in more than one way, but uniquely under a further condition) to βN, turning this space also into a monoid, though rather surprisingly a non-commutative one.
For any subset A of N and a positive integer n in N, we define

$A-n=\{k\in\mathbf{N}\mid k+n\in A\}.$

Given two ultrafilters F and G on N, we define their sum by

$F+G={\Big\{}A\subseteq\mathbf{N}\mid\{n\in\mathbf{N}\mid A-n\in F\}\in G{\Big\}};$

it can be checked that this is again an ultrafilter, and that the operation + is associative (but not commutative) on βN and extends the addition on N; 0 serves as a neutral element for the operation + on βN. (For principal ultrafilters this sum reduces to ordinary addition; see the sketch at the end of this article.) The operation is also right-continuous, in the sense that for every ultrafilter F, the map

${\begin{cases}\beta\mathbf{N}\to\beta\mathbf{N}\\G\mapsto F+G\end{cases}}$

is continuous.

More generally, if S is a semigroup with the discrete topology, the operation of S can be extended to βS, getting a right-continuous associative operation.[10]

See also
• Compactification (mathematics) – embedding a topological space into a compact space as a dense subset
• Corona set of a space, the complement of its image in the Stone–Čech compactification
• Filters in topology – use of filters to describe and characterize all basic topological notions and results
• One-point compactification
• Wallman compactification – a compactification of T1 topological spaces

Notes
1. M. Henriksen, "Rings of continuous functions in the 1950s", in Handbook of the History of General Topology, edited by C. E. Aull, R. Lowen, Springer Science & Business Media, 2013, p. 246
2. Narici & Beckenstein 2011, p. 240.
3. Narici & Beckenstein 2011, pp. 225–273.
4. Munkres 2000, p. 239, Theorem 38.4.
5. Munkres 2000, p. 241.
6. Walker, R. C. (1974). The Stone-Čech Compactification. Springer. pp. 95–97. ISBN 978-3-642-61935-9.
7. W. W. Comfort, S. Negrepontis, The Theory of Ultrafilters, Springer, 1974.
8. This is Stone's original construction.
9. van Mill, Jan (1984), "An introduction to βω", in Kunen, Kenneth; Vaughan, Jerry E. (eds.), Handbook of Set-Theoretic Topology, North-Holland, pp. 503–560, ISBN 978-0-444-86580-9
10. Hindman, Neil; Strauss, Dona (2011-01-21). Algebra in the Stone-Čech Compactification. Berlin, Boston: De Gruyter. doi:10.1515/9783110258356. ISBN 978-3-11-025835-6.

References
• Čech, Eduard (1937), "On bicompact spaces", Annals of Mathematics, 38 (4): 823–844, doi:10.2307/1968839, hdl:10338.dmlcz/100420, JSTOR 1968839
• Dunford, Nelson; Schwartz, Jacob T. (1988). Linear Operators, Part I: General Theory (Wiley Classics ed.). John Wiley & Sons. p. 276.
• Hindman, Neil; Strauss, Dona (1998), Algebra in the Stone–Čech Compactification: Theory and Applications, de Gruyter Expositions in Mathematics, vol. 27 (2nd revised and extended 2012 ed.), Berlin: Walter de Gruyter & Co., pp. xiv+485, doi:10.1515/9783110809220, ISBN 978-3-11-015420-7, MR 1642231
• Munkres, James R. (2000). Topology (Second ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260.
• Koshevnikova, I. G. (2001) [1994], "Stone-Čech compactification", Encyclopedia of Mathematics, EMS Press
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and Applied Mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Shields, Allen (1987), "Years ago", Mathematical Intelligencer, 9 (2): 61–63, doi:10.1007/BF03025901, S2CID 189886579
• Stone, Marshall H. (1937), "Applications of the theory of Boolean rings to general topology", Transactions of the American Mathematical Society, 41 (3): 375–481, doi:10.2307/1989788, JSTOR 1989788
• Tychonoff, Andrey (1930), "Über die topologische Erweiterung von Räumen", Mathematische Annalen, 102: 544–561, doi:10.1007/BF01782364, ISSN 0025-5831, S2CID 124737286

External links
• Stone-Čech Compactification at PlanetMath
• Dror Bar-Natan, Ultrafilters, Compactness, and the Stone–Čech compactification
Wikipedia
Octagrammic prism In geometry, the octagrammic prism is one of an infinite set of nonconvex prisms formed by square sides and two regular star polygon caps, in this case two octagrams. Octagrammic prism data: Type: Uniform polyhedron; Faces: 2 octagrams, 8 squares; Edges: 24; Vertices: 16; Vertex configuration: 8/3.4.4; Wythoff symbol: 2 8/3 | 2; Schläfli symbols: {2,16/3}, sr{2,8/3}; Symmetry group: D8h; Dual polyhedron: octagrammic bipyramid; Properties: nonconvex
Wikipedia
Stopped process In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed (possibly random) time. Definition Let • $(\Omega ,{\mathcal {F}},\mathbb {P} )$ be a probability space; • $(\mathbb {X} ,{\mathcal {A}})$ be a measurable space; • $X:[0,+\infty )\times \Omega \to \mathbb {X} $ be a stochastic process; • $\tau :\Omega \to [0,+\infty ]$ be a stopping time with respect to some filtration $\{{\mathcal {F}}_{t}\mid t\geq 0\}$ of ${\mathcal {F}}$. Then the stopped process $X^{\tau }$ is defined for $t\geq 0$ and $\omega \in \Omega $ by $X_{t}^{\tau }(\omega ):=X_{\min\{t,\tau (\omega )\}}(\omega ).$ Examples Gambling Consider a gambler playing roulette. Xt denotes the gambler's total holdings in the casino at time t ≥ 0, which may or may not be allowed to be negative, depending on whether or not the casino offers credit. Let Yt denote what the gambler's holdings would be if he/she could obtain unlimited credit (so Y can attain negative values). • Stopping at a deterministic time: suppose that the casino is prepared to lend the gambler unlimited credit, and that the gambler resolves to leave the game at a predetermined time T, regardless of the state of play. Then X is really the stopped process Y^T, since the gambler's account remains in the same state after leaving the game as it was in at the moment that the gambler left the game. • Stopping at a random time: suppose that the gambler has no other sources of revenue, and that the casino will not extend its customers credit. The gambler resolves to play until and unless he/she goes broke. Then the random time $\tau (\omega ):=\inf\{t\geq 0|Y_{t}(\omega )=0\}$ is a stopping time for Y, and, since the gambler cannot continue to play after he/she has exhausted his/her resources, X is the stopped process Y^τ. Brownian motion Let $B:[0,+\infty )\times \Omega \to \mathbb {R} $ be a one-dimensional standard Brownian motion starting at zero. • Stopping at a deterministic time $T>0$: if $\tau (\omega )\equiv T$, then the stopped Brownian motion $B^{\tau }$ will evolve as per usual up until time $T$, and thereafter will stay constant: i.e., $B_{t}^{\tau }(\omega )\equiv B_{T}(\omega )$ for all $t\geq T$. • Stopping at a random time: define a random stopping time $\tau $ by the first hitting time for the region $\{x\in \mathbb {R} |x\geq a\}$: $\tau (\omega ):=\inf\{t>0|B_{t}(\omega )\geq a\}.$ Then the stopped Brownian motion $B^{\tau }$ will evolve as per usual up until the random time $\tau $, and will thereafter be constant with value $a$: i.e., $B_{t}^{\tau }(\omega )\equiv a$ for all $t\geq \tau (\omega )$. See also • Killed process References • Robert G. Gallager. Stochastic Processes: Theory for Applications. Cambridge University Press, Dec 12, 2013. p. 450
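A minimal numerical sketch of the definition above, using a discrete-time symmetric random walk in place of a general process (all names are ad hoc): the stopped path agrees with the original path up to τ and is frozen afterwards.

```python
import random

def stopped_path(path, tau):
    """X^tau_t = X_{min(t, tau)} along a discrete-time path."""
    return [path[min(t, tau)] for t in range(len(path))]

random.seed(1)
X = [0]
for _ in range(200):
    X.append(X[-1] + random.choice((-1, 1)))   # symmetric random walk

a = 5
# First hitting time of level a (a stopping time); fall back to the last
# index if the level is never reached on this finite path.
tau = next((t for t, x in enumerate(X) if x >= a), len(X) - 1)

X_stopped = stopped_path(X, tau)
assert X_stopped[:tau + 1] == X[:tau + 1]          # agrees up to tau
assert all(x == X[tau] for x in X_stopped[tau:])   # constant after tau
print("tau =", tau, "frozen value =", X[tau])
```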
Wikipedia
σ-Algebra of τ-past The σ-algebra of τ-past (also named stopped σ-algebra, stopped σ-field, or σ-field of τ-past) is a σ-algebra associated with a stopping time in the theory of stochastic processes, a branch of probability theory.[1][2] Definition Let $\tau $ be a stopping time on the filtered probability space $(\Omega ,{\mathcal {A}},({\mathcal {F}}_{t})_{t\in T},P)$. Then the σ-algebra ${\mathcal {F}}_{\tau }:=\{A\in {\mathcal {A}}\mid \forall t\in T\colon \{\tau \leq t\}\cap A\in {\mathcal {F}}_{t}\}$ is called the σ-algebra of τ-past.[1][2] Properties Monotonicity If $\sigma ,\tau $ are two stopping times and $\sigma \leq \tau $ almost surely, then ${\mathcal {F}}_{\sigma }\subset {\mathcal {F}}_{\tau }.$ Measurability A stopping time $\tau $ is always ${\mathcal {F}}_{\tau }$-measurable. Intuition The same way ${\mathcal {F}}_{t}$ is all the information up to time $t$, ${\mathcal {F}}_{\tau }$ is all the information up to time $\tau $. The only difference is that $\tau $ is random. For example, if you had a random walk, and you wanted to ask, "How many times did the random walk hit −5 before it first hit 10?", then letting $\tau $ be the first time the random walk hit 10, ${\mathcal {F}}_{\tau }$ would give you the information to answer that question.[3] References 1. Karandikar, Rajeeva (2018). Introduction to Stochastic Calculus. Indian Statistical Institute Series. Singapore: Springer Nature. p. 47. doi:10.1007/978-981-10-8318-1. ISBN 978-981-10-8317-4. 2. Klenke, Achim (2008). Probability Theory. Berlin: Springer. p. 193. doi:10.1007/978-1-84800-048-3. ISBN 978-1-84800-047-6. 3. "Earnest, Mike (2017). Comment on StackExchange: Intuition regarding the σ algebra of the past (stopping times)".
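A small simulation of the intuition above (a sketch with ad hoc names, using a simple random walk): the number of visits to −5 before the walk first hits 10 is determined by the path up to τ alone, which is what it means for that count to be ${\mathcal {F}}_{\tau }$-measurable.

```python
import random

def visits_before_hitting(target=10, level=-5, seed=0, max_steps=10**6):
    """Count visits to `level` before the first hitting time tau of `target`.
    The count depends only on the path up to tau, so it is F_tau-measurable.
    The step cap is only a safety net for the simulation."""
    rng = random.Random(seed)
    x, t, visits = 0, 0, 0
    while x != target and t < max_steps:
        if x == level:
            visits += 1
        x += rng.choice((-1, 1))
        t += 1
    return visits, t          # (F_tau-measurable count, realized tau)

for seed in range(3):
    visits, tau = visits_before_hitting(seed=seed)
    print(f"seed {seed}: tau = {tau}, visits to -5 before tau = {visits}")
```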
Wikipedia
Word problem (mathematics education) In science education, a word problem is a mathematical exercise (such as in a textbook, worksheet, or exam) where significant background information on the problem is presented in ordinary language rather than in mathematical notation. As most word problems involve a narrative of some sort, they are sometimes referred to as story problems and may vary in the amount of technical language used. Not to be confused with word problem (mathematics), the problem of deciding whether two given expressions are equivalent in rewriting. Example A typical word problem: Tess paints two boards of a fence every four minutes, but Allie can paint three boards every two minutes. If there are 240 boards total, how many hours will it take them to paint the fence, working together? Solution process Word problems such as the above can be examined through five stages: • 1. Problem Comprehension • 2. Situational Solution Visualization • 3. Mathematical Solution Planning • 4. Solving for Solution • 5. Situational Solution Visualization The linguistic properties of a word problem need to be addressed first. To begin the solution process, one must first understand what the problem is asking and what form the answer will take. In the problem above, the words "minutes", "total", "hours", and "together" need to be examined. The next step is to visualize what the solution to this problem might mean. For our stated problem, the solution might be visualized by examining if the total number of hours will be greater or smaller than if it were stated in minutes. Also, it must be determined whether or not the two girls will finish at a faster or slower rate if they are working together. After this, one must plan a solution method using mathematical terms. One scheme to analyze the mathematical properties is to classify the numerical quantities in the problem into known quantities (values given in the text), wanted quantities (values to be found), and auxiliary quantities (values found as intermediate stages of the problem). This classification yields the variables and equations of the formulated solution. Next, the mathematical processes must be applied to the formulated solution process. This is done solely in the mathematical context for now. Finally, one must again visualize the proposed solution and determine if the solution seems to make sense for the realistic context of the problem. After visualizing if it is reasonable, one can then work to further analyze and draw connections between mathematical concepts and realistic problems.[1] The importance of these five steps in teacher education is discussed at the end of the following section. Purpose and skill development Word problems commonly include mathematical modelling questions, where data and information about a certain system is given and a student is required to develop a model. For example: 1. Jane had $5.00, then spent $2.00. How much does she have now? 2. In a cylindrical barrel with radius 2 m, the water is rising at a rate of 3 cm/s. What is the rate of increase of the volume of water? As the developmental skills of students across grade levels vary, the relevance to students and application of word problems also varies. The first example is accessible to primary school students, and may be used to teach the concept of subtraction.
The second example can only be solved using geometric knowledge, specifically that of the formula for the volume of a cylinder with a given radius and height, and requires an understanding of the concept of "rate". There are numerous skills that can be developed to increase a student's understanding and fluency in solving word problems. The two major stems of these skills are cognitive skills and related academic skills. The cognitive domain consists of skills such as nonverbal reasoning and processing speed. Both of these skills work to strengthen numerous other fields of thought. Other cognitive skills include language comprehension, working memory, and attention. While these are not solely for the purpose of solving word problems, each one of them affects one's ability to solve such mathematical problems. For instance, if the one solving the math word problem has a limited understanding of the language (English, Spanish, etc.), they are more likely not to understand what the problem is even asking. In Example 1 (above), if one does not comprehend the definition of the word "spent," they will misunderstand the entire purpose of the word problem. This alludes to how the cognitive skills lead to the development of the mathematical concepts. Some of the related mathematical skills necessary for solving word problems are mathematical vocabulary and reading comprehension. This can again be connected to the example above. With an understanding of the word "spent" and the concept of subtraction, it can be deduced that this word problem is relating the two.[2] This leads to the conclusion that word problems are beneficial at each level of development, despite the fact that these domains will vary across developmental and academic stages. The discussion in this section and the previous one urges the examination of how these research findings can affect teacher education. One of the first ways is that when a teacher understands the solution structure of word problems, they are likely to have an increased understanding of their students' comprehension levels. Each of these research studies supported the finding that, in many cases, students do not often struggle with executing the mathematical procedures. Rather, the comprehension gap comes from not having a firm understanding of the connections between the math concepts and the semantics of the realistic problems. As a teacher examines a student's solution process, understanding each of the steps will help them understand how to best accommodate their specific learning needs. Another thing to address is the importance of teaching and promoting multiple solution processes. Procedural fluency is oftentimes taught without an emphasis on conceptual and applicable comprehension. This leaves students with a gap between their mathematical understanding and their realistic problem solving skills. The ways in which teachers can best prepare for and promote this type of learning will not be discussed here.[1][3] History and culture The modern notation that enables mathematical ideas to be expressed symbolically was developed in Europe from the sixteenth century onwards. Prior to this, all mathematical problems and solutions were written out in words; the more complicated the problem, the more laborious and convoluted the verbal explanation. Examples of word problems can be found dating back to Babylonian times.
Apart from a few procedure texts for finding things like square roots, most Old Babylonian problems are couched in a language of measurement of everyday objects and activities. Students had to find lengths of canals dug, weights of stones, lengths of broken reeds, areas of fields, numbers of bricks used in a construction, and so on. Ancient Egyptian mathematics also has examples of word problems. The Rhind Mathematical Papyrus includes a problem that can be translated as: There are seven houses; in each house there are seven cats; each cat kills seven mice; each mouse has eaten seven grains of barley; each grain would have produced seven hekat. What is the sum of all the enumerated things? In more modern times the sometimes confusing and arbitrary nature of word problems has been the subject of satire. Gustave Flaubert wrote this nonsensical problem, now known as the Age of the captain: Since you are now studying geometry and trigonometry, I will give you a problem. A ship sails the ocean. It left Boston with a cargo of wool. It grosses 200 tons. It is bound for Le Havre. The mainmast is broken, the cabin boy is on deck, there are 12 passengers aboard, the wind is blowing East-North-East, the clock points to a quarter past three in the afternoon. It is the month of May. How old is the captain? Word problems have also been satirised in The Simpsons, when a lengthy word problem ("An express train traveling 60 miles per hour leaves Santa Fe bound for Phoenix, 520 miles away. At the same time, a local train traveling 30 miles an hour carrying 40 passengers leaves Phoenix bound for Santa Fe...") trails off with a schoolboy character instead imagining that he is on the train. Both the original British and American versions of the game show Winning Lines involve word problems. However, the problems are worded so as to not give away obvious numerical information and thus, allow the contestants to figure out the numerical parts of the questions to come up with the answers. See also • Cut-the-knot References 1. Rich, Kathryn M.; Yadav, Aman (2020-05-01). "Applying Levels of Abstraction to Mathematics Word Problems". TechTrends. 64 (3): 395–403. doi:10.1007/s11528-020-00479-3. ISSN 1559-7075. S2CID 255311095. 2. Lin, Xin (2021-09-01). "Investigating the Unique Predictors of Word-Problem Solving Using Meta-Analytic Structural Equation Modeling". Educational Psychology Review. 33 (3): 1097–1124. doi:10.1007/s10648-020-09554-w. ISSN 1573-336X. S2CID 225195843. 3. Scheibling-Sève, Calliste; Pasquinelli, Elena; Sander, Emmanuel (March 2020). "Assessing conceptual knowledge through solving arithmetic word problems". Educational Studies in Mathematics. 103 (3): 293–311. doi:10.1007/s10649-020-09938-3. ISSN 0013-1954. S2CID 216314124. Further reading • L Verschaffel, B Greer, E De Corte (2000) Making Sense of Word Problems, Taylor & Francis • John C. Moyer; Margaret B. Moyer; Larry Sowder; Judith Threadgill-Sowder (1984) Story Problem Formats: Verbal versus Telegraphic Journal for Research in Mathematics Education, Vol. 15, No. 1. (Jan., 1984), pp. 64–68. JSTOR 748989 • Perla Nesher Eva Teubal (1975)Verbal Cues as an Interfering Factor in Verbal Problem Solving Educational Studies in Mathematics, Vol. 6, No. 1. (Mar., 1975), pp. 41–51. JSTOR 3482158 • Madis Lepik (1990) Algebraic Word Problems: Role of Linguistic and Structural Variables, Educational Studies in Mathematics, Vol. 21, No. 1. (Feb., 1990), pp. 
83–90. JSTOR 3482220 • Duncan J Melville (1999) Old Babylonian Mathematics http://it.stlawu.edu/%7Edmelvill/mesomath/obsummary.html • Egyptian Algebra - Mathematicians of the African Diaspora • Mathematical Quotations - F • Andrew Nestler's Guide to Mathematics and Mathematicians on The Simpsons
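Returning to the fence-painting problem stated at the start of this article: the solution-process discussion above never actually states the answer. A short script carrying out the standard rate-based arithmetic (this worked solution is an addition, not part of the source text):

```python
# Rates in boards per minute.
tess = 2 / 4                     # 0.5 boards per minute
allie = 3 / 2                    # 1.5 boards per minute
combined = tess + allie          # 2.0 boards per minute, working together

boards = 240
minutes = boards / combined      # 120 minutes
hours = minutes / 60
print(hours)                     # 2.0 hours
```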
Wikipedia
Strachey method for magic squares The Strachey method for magic squares is an algorithm for generating magic squares of singly even order 4k + 2. An example of a magic square of order 6 constructed with the Strachey method:
35  1  6 26 19 24
 3 32  7 21 23 25
31  9  2 22 27 20
 8 28 33 17 10 15
30  5 34 12 14 16
 4 36 29 13 18 11
Strachey's method of construction of a singly even magic square of order n = 4k + 2. 1. Divide the grid into 4 quarters, each having n²/4 cells, and name them crosswise thus
A C
D B
2. Using the Siamese method (De la Loubère method) complete the individual magic squares of odd order 2k + 1 in subsquares A, B, C, D, first filling up the sub-square A with the numbers 1 to n²/4, then the sub-square B with the numbers n²/4 + 1 to 2n²/4, then the sub-square C with the numbers 2n²/4 + 1 to 3n²/4, then the sub-square D with the numbers 3n²/4 + 1 to n². As a running example, we consider a 10×10 magic square, where we have divided the square into four quarters. The quarter A contains a magic square of numbers from 1 to 25, B a magic square of numbers from 26 to 50, C a magic square of numbers from 51 to 75, and D a magic square of numbers from 76 to 100.
17 24  1  8 15 67 74 51 58 65
23  5  7 14 16 73 55 57 64 66
 4  6 13 20 22 54 56 63 70 72
10 12 19 21  3 60 62 69 71 53
11 18 25  2  9 61 68 75 52 59
92 99 76 83 90 42 49 26 33 40
98 80 82 89 91 48 30 32 39 41
79 81 88 95 97 29 31 38 45 47
85 87 94 96 78 35 37 44 46 28
86 93 100 77 84 36 43 50 27 34
3. Exchange the leftmost k columns in sub-square A with the corresponding columns of sub-square D.
92 99  1  8 15 67 74 51 58 65
98 80  7 14 16 73 55 57 64 66
79 81 13 20 22 54 56 63 70 72
85 87 19 21  3 60 62 69 71 53
86 93 25  2  9 61 68 75 52 59
17 24 76 83 90 42 49 26 33 40
23  5 82 89 91 48 30 32 39 41
 4  6 88 95 97 29 31 38 45 47
10 12 94 96 78 35 37 44 46 28
11 18 100 77 84 36 43 50 27 34
4. Exchange the rightmost k − 1 columns in sub-square C with the corresponding columns of sub-square B.
92 99  1  8 15 67 74 51 58 40
98 80  7 14 16 73 55 57 64 41
79 81 13 20 22 54 56 63 70 47
85 87 19 21  3 60 62 69 71 28
86 93 25  2  9 61 68 75 52 34
17 24 76 83 90 42 49 26 33 65
23  5 82 89 91 48 30 32 39 66
 4  6 88 95 97 29 31 38 45 72
10 12 94 96 78 35 37 44 46 53
11 18 100 77 84 36 43 50 27 59
5. Exchange the middle cell of the leftmost column of sub-square A with the corresponding cell of sub-square D. Exchange the central cell in sub-square A with the corresponding cell of sub-square D.
92 99  1  8 15 67 74 51 58 40
98 80  7 14 16 73 55 57 64 41
 4 81 88 20 22 54 56 63 70 47
85 87 19 21  3 60 62 69 71 28
86 93 25  2  9 61 68 75 52 34
17 24 76 83 90 42 49 26 33 65
23  5 82 89 91 48 30 32 39 66
79  6 13 95 97 29 31 38 45 72
10 12 94 96 78 35 37 44 46 53
11 18 100 77 84 36 43 50 27 59
The result is a magic square of order n = 4k + 2.[1] References 1. W W Rouse Ball Mathematical Recreations and Essays, (1911) See also • Conway's LUX method for magic squares • Siamese method
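The construction above is mechanical enough to script. The following sketch (function names are ad hoc, not from the source) builds the odd-order subsquares with the Siamese method and applies steps 3-5 as described, then checks the order-6 result against the example square. Note that the first exchange in step 5 undoes part of step 3 in the middle row, so the net effect is that the middle row's swapped block is shifted one column to the right.

```python
def siamese(m):
    """De la Loubere (Siamese) method for an odd-order magic square."""
    sq = [[0] * m for _ in range(m)]
    i, j = 0, m // 2                       # start in the middle of the top row
    for v in range(1, m * m + 1):
        sq[i][j] = v
        ni, nj = (i - 1) % m, (j + 1) % m  # move up and to the right
        if sq[ni][nj]:                     # occupied: drop down one row instead
            ni, nj = (i + 1) % m, j
        i, j = ni, nj
    return sq

def strachey(n):
    """Magic square of singly even order n = 4k + 2 by the steps above."""
    k, m = (n - 2) // 4, n // 2
    s = siamese(m)
    g = [[0] * n for _ in range(n)]
    for i in range(m):
        for j in range(m):
            g[i][j]         = s[i][j]               # A (top left):  1 .. m^2
            g[i + m][j + m] = s[i][j] + m * m       # B (bottom right)
            g[i][j + m]     = s[i][j] + 2 * m * m   # C (top right)
            g[i + m][j]     = s[i][j] + 3 * m * m   # D (bottom left)
    for j in range(k):                     # step 3: leftmost k columns, A <-> D
        for i in range(m):
            g[i][j], g[i + m][j] = g[i + m][j], g[i][j]
    for j in range(n - k + 1, n):          # step 4: rightmost k-1 columns, C <-> B
        for i in range(m):
            g[i][j], g[i + m][j] = g[i + m][j], g[i][j]
    mid = m // 2                           # step 5, in the middle row of A / D:
    g[mid][0], g[mid + m][0] = g[mid + m][0], g[mid][0]          # undoes step 3 here
    g[mid][mid], g[mid + m][mid] = g[mid + m][mid], g[mid][mid]  # central cell
    return g

sq = strachey(6)
target = [[35, 1, 6, 26, 19, 24], [3, 32, 7, 21, 23, 25], [31, 9, 2, 22, 27, 20],
          [8, 28, 33, 17, 10, 15], [30, 5, 34, 12, 14, 16], [4, 36, 29, 13, 18, 11]]
assert sq == target                        # matches the order-6 example above
magic = 6 * (36 + 1) // 2                  # magic constant 111
assert all(sum(row) == magic for row in sq)
assert all(sum(col) == magic for col in zip(*sq))
assert sum(sq[i][i] for i in range(6)) == sum(sq[i][5 - i] for i in range(6)) == magic
print("order-6 Strachey square verified")
```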
Wikipedia
Strahler number In mathematics, the Strahler number or Horton–Strahler number of a mathematical tree is a numerical measure of its branching complexity. These numbers were first developed in hydrology, as a way of measuring the complexity of rivers and streams, by Robert E. Horton (1945) and Arthur Newell Strahler (1952, 1957). In this application, they are referred to as the Strahler stream order and are used to define stream size based on a hierarchy of tributaries. The same numbers also arise in the analysis of L-systems and of hierarchical biological structures such as (biological) trees and animal respiratory and circulatory systems, in register allocation for compilation of high-level programming languages and in the analysis of social networks. Definition All trees in this context are directed graphs, oriented from the root towards the leaves; in other words, they are arborescences. The degree of a node in a tree is just its number of children. One may assign a Strahler number to all nodes of a tree, in bottom-up order, as follows: • If the node is a leaf (has no children), its Strahler number is one. • If the node has one child with Strahler number i, and all other children have Strahler numbers less than i, then the Strahler number of the node is i again. • If the node has two or more children with Strahler number i, and no children with greater number, then the Strahler number of the node is i + 1. The Strahler number of a tree is the Strahler number of its root node. Algorithmically, these numbers may be assigned by performing a depth-first search and assigning each node's number in postorder. The same numbers may also be generated via a pruning process in which the tree is simplified in a sequence of stages, where in each stage one removes all leaf nodes and all of the paths of degree-one nodes leading to leaves: the Strahler number of a node is the stage at which it would be removed by this process, and the Strahler number of a tree is the number of stages required to remove all of its nodes. Another equivalent definition of the Strahler number of a tree is that it is the height of the largest complete binary tree that can be homeomorphically embedded into the given tree; the Strahler number of a node in a tree is similarly the height of the largest complete binary tree that can be embedded below that node. Any node with Strahler number i must have at least two descendants with Strahler number i − 1, at least four descendants with Strahler number i − 2, etc., and at least 2^(i−1) leaf descendants. Therefore, in a tree with n nodes, the largest possible Strahler number is log2 n + 1.[1] However, unless the tree forms a complete binary tree its Strahler number will be less than this bound. In an n-node binary tree, chosen uniformly at random among all possible binary trees, the expected index of the root is with high probability very close to log4 n.[2] Applications River networks In the application of the Strahler stream order to hydrology, each segment of a stream or river within a river network is treated as a node in a tree, with the next segment downstream as its parent. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream. Streams of lower order joining a higher order stream do not change the order of the higher stream. Thus, if a first-order stream joins a second-order stream, it remains a second-order stream.
It is not until a second-order stream combines with another second-order stream that it becomes a third-order stream. As with mathematical trees, a segment with index i must be fed by at least 2i − 1 different tributaries of index 1. Shreve noted that Horton’s and Strahler’s Laws should be expected from any topologically random distribution. A later review of the relationships confirmed this argument, establishing that, from the properties the laws describe, no conclusion can be drawn to explain the structure or origin of the stream network.[3][4] To qualify as a stream a hydrological feature must be either recurring or perennial. Recurring (or "intermittent") streams have water in the channel for at least part of the year. The index of a stream or river may range from 1 (a stream with no tributaries) to 12 (globally the most powerful river, the Amazon, at its mouth). The Ohio River is of order eight and the Mississippi River is of order 10. Estimates are that 80% of the streams on the planet are first to third order headwater streams.[5] If the bifurcation ratio of a river network is high, then there is a higher chance of flooding. There would also be a lower time of concentration.[6] The bifurcation ratio can also show which parts of a drainage basin are more likely to flood, comparatively, by looking at the separate ratios. Most British rivers have a bifurcation ratio of between 3 and 5.[7] Gleyzer et al. (2004) describe how to compute Strahler stream order values in a GIS application. This algorithm is implemented by RivEX, an ESRI ArcGIS 10.7 tool. The input to their algorithm is a network of the centre lines of the bodies of water, represented as arcs (or edges) joined at nodes. Lake boundaries and river banks should not be used as arcs, as these will generally form a non-tree network with an incorrect topology. Alternative stream ordering systems have been developed by Shreve[8][9] and Hodgkinson et al.[3] A statistical comparison of Strahler and Shreve systems, together with an analysis of stream/link lengths, is given by Smart.[10] Other hierarchical systems The Strahler numbering may be applied in the statistical analysis of any hierarchical system, not just to rivers. • Arenas et al. (2004) describe an application of the Horton–Strahler index in the analysis of social networks. • Ehrenfeucht, Rozenberg & Vermeir (1981) applied a variant of Strahler numbering (starting with zero at the leaves instead of one), which they called tree-rank, to the analysis of L-systems. • Strahler numbering has also been applied to biological hierarchies such as the branching structures of trees[11] and of animal respiratory and circulatory systems.[12] Register allocation When translating a high-level programming language to assembly language the minimum number of registers required to evaluate an expression tree is exactly its Strahler number. In this context, the Strahler number may also be called the register number.[13] For expression trees that require more registers than are available, the Sethi–Ullman algorithm may be used to translate an expression tree into a sequence of machine instructions that uses the registers as efficiently as possible, minimizing the number of times intermediate values are spilled from registers to main memory and the total number of instructions in the resulting compiled code. Related parameters Bifurcation ratio Associated with the Strahler numbers of a tree are bifurcation ratios, numbers describing how close to balanced a tree is. 
For each order i in a hierarchy, the ith bifurcation ratio is ${\frac {n_{i}}{n_{i+1}}}$ where ni denotes the number of nodes with order i. The bifurcation ratio of an overall hierarchy may be taken by averaging the bifurcation ratios at different orders. In a complete binary tree, the bifurcation ratio will be 2, while other trees will have larger bifurcation ratios. It is a dimensionless number. Pathwidth The pathwidth of an arbitrary undirected graph G may be defined as the smallest number w such that there exists an interval graph H containing G as a subgraph, with the largest clique in H having w + 1 vertices. For trees (viewed as undirected graphs by forgetting their orientation and root) the pathwidth differs from the Strahler number, but is closely related to it: in a tree with pathwidth w and Strahler number s, these two numbers are related by the inequalities[14] w ≤ s ≤ 2w + 2. The ability to handle graphs with cycles and not just trees gives pathwidth extra versatility compared to the Strahler number. However, unlike the Strahler number, the pathwidth is defined only for the whole graph, and not separately for each node in the graph. See also • Main stem of a river, typically found by following the branch with the highest Strahler number • Pfafstetter Coding System Notes 1. Devroye & Kruszewski (1996). 2. Devroye and Kruszewski (1995, 1996). 3. Hodgkinson, J.H., McLoughlin, S. & Cox, M.E. 2006. The influence of structural grain on drainage in a metamorphic sub-catchment: Laceys Creek, southeast Queensland, Australia. Geomorphology, 81: 394–407. 4. Kirchner, J.W., 1993. Statistical inevitability of Horton Laws and the apparent randomness of stream channel networks. Geology 21, 591–594. 5. "Stream Order – The Classification of Streams and Rivers". Retrieved 2011-12-11. 6. Bogale, Alemsha (2021). "Morphometric analysis of a drainage basin using geographical information system in Gilgel Abay watershed, Lake Tana Basin, upper Blue Nile Basin, Ethiopia". Applied Water Science. 11 (7): 122. Bibcode:2021ApWS...11..122B. doi:10.1007/s13201-021-01447-9. S2CID 235630850. 7. Waugh (2002). 8. Shreve, R.L., 1966. Statistical law of stream numbers. Journal of Geology 74, 17–37. 9. Shreve, R.L., 1967. Infinite topologically random channel networks. Journal of Geology 75, 178–186. 10. Smart, J.S. 1968, Statistical properties of stream lengths, Water Resources Research, 4, No 5. 1001–1014 11. Borchert & Slade (1981) 12. Horsfield (1976). 13. Ershov (1958); Flajolet, Raoult & Vuillemin (1979). 14. Luttenberger & Schlund (2011), using a definition of the "dimension" of a tree that is one less than the Strahler number. References • Arenas, A.; Danon, L.; Díaz-Guilera, A.; Gleiser, P. M.; Guimerá, R. (2004), "Community analysis in social networks", European Physical Journal B, 38 (2): 373–380, arXiv:cond-mat/0312040, Bibcode:2004EPJB...38..373A, doi:10.1140/epjb/e2004-00130-1, S2CID 9764926. • Borchert, Rolf; Slade, Norman A. (1981), "Bifurcation ratios and the adaptive geometry of trees", Botanical Gazette, 142 (3): 394–401, doi:10.1086/337238, hdl:1808/9253, JSTOR 2474363, S2CID 84145477. • Devroye, Luc; Kruszewski, Paul (1995), "A note on the Horton–Strahler number for random trees", Information Processing Letters, 56 (2): 95–99, doi:10.1016/0020-0190(95)00114-R. • Devroye, L.; Kruszewski, P. 
(1996), "On the Horton–Strahler number for random tries", RAIRO Informatique Théorique et Applications, 30 (5): 443–456, doi:10.1051/ita/1996300504431, MR 1435732 • Ehrenfeucht, A.; Rozenberg, G.; Vermeir, D. (1981), "On ETOL systems with finite tree-rank", SIAM Journal on Computing, 10 (1): 40–58, doi:10.1137/0210004, MR 0605602. • Ershov, A. P. (1958), "On programming of arithmetic operations", Communications of the ACM, 1 (8): 3–6, doi:10.1145/368892.368907, S2CID 15986378. • Flajolet, P.; Raoult, J. C.; Vuillemin, J. (1979), "The number of registers required for evaluating arithmetic expressions", Theoretical Computer Science, 9 (1): 99–125, doi:10.1016/0304-3975(79)90009-4. • Gleyzer, A.; Denisyuk, M.; Rimmer, A.; Salingar, Y. (2004), "A fast recursive GIS algorithm for computing Strahler stream order in braided and nonbraided networks", Journal of the American Water Resources Association, 40 (4): 937–946, Bibcode:2004JAWRA..40..937G, doi:10.1111/j.1752-1688.2004.tb01057.x, S2CID 128399321. • Horsfield, Keith (1976), "Some mathematical properties of branching trees with application to the respiratory system", Bulletin of Mathematical Biology, 38 (3): 305–315, doi:10.1007/BF02459562, PMID 1268383, S2CID 189888885. • Horton, R. E. (1945), "Erosional development of streams and their drainage basins: hydro-physical approach to quantitative morphology", Geological Society of America Bulletin, 56 (3): 275–370, doi:10.1130/0016-7606(1945)56[275:EDOSAT]2.0.CO;2, S2CID 129509551. • Lanfear, K. J. (1990), "A fast algorithm for automatically computing Strahler stream order", Journal of the American Water Resources Association, 26 (6): 977–981, Bibcode:1990JAWRA..26..977L, doi:10.1111/j.1752-1688.1990.tb01432.x. • Luttenberger, Michael; Schlund, Maxmilian (2011), An extension of Parikh's theorem beyond idempotence, arXiv:1112.2864, Bibcode:2011arXiv1112.2864L • Strahler, A. N. (1952), "Hypsometric (area-altitude) analysis of erosional topology", Geological Society of America Bulletin, 63 (11): 1117–1142, doi:10.1130/0016-7606(1952)63[1117:HAAOET]2.0.CO;2. • Strahler, A. N. (1957), "Quantitative analysis of watershed geomorphology", Transactions of the American Geophysical Union, 38 (6): 913–920, Bibcode:1957TrAGU..38..913S, doi:10.1029/tr038i006p00913. • Waugh, David (2002), Geography, An Integrated Approach (3rd ed.), Nelson Thornes. River morphology Large-scale features • Alluvial plain • Drainage basin • Drainage system (geomorphology) • Estuary • Strahler number (stream order) • River valley • River delta • River sinuosity Alluvial rivers • Anabranch • Avulsion (river) • Bar (river morphology) • Braided river • Channel pattern • Cut bank • Floodplain • Meander • Meander cutoff • Mouth bar • Oxbow lake • Point bar • Riffle • Rapids • Riparian zone • River bifurcation • River channel migration • River mouth • Slip-off slope • Stream pool • Thalweg Bedrock river • Canyon • Knickpoint • Plunge pool Bedforms • Ait • Antidune • Dune • Current ripple Regional processes • Aggradation • Base level • Degradation (geology) • Erosion and tectonics • River rejuvenation Mechanics • Deposition (geology) • Water erosion • Exner equation • Hack's law • Helicoidal flow • Playfair's law • Sediment transport • List of rivers that have reversed direction • Category • Portal
Wikipedia
Straight-line program In mathematics, more specifically in computational algebra, a straight-line program (SLP) for a finite group G = ⟨S⟩ is a finite sequence L of elements of G such that every element of L either belongs to S, is the inverse of a preceding element, or the product of two preceding elements. An SLP L is said to compute a group element g ∈ G if g ∈ L, where g is encoded by a word in S and its inverses. Intuitively, an SLP computing some g ∈ G is an efficient way of storing g as a group word over S; observe that if g is constructed in i steps, the word length of g may be exponential in i, but the length of the corresponding SLP is linear in i. This has important applications in computational group theory, by using SLPs to efficiently encode group elements as words over a given generating set. Straight-line programs were introduced by Babai and Szemerédi in 1984[1] as a tool for studying the computational complexity of certain matrix group properties. Babai and Szemerédi prove that every element of a finite group G has an SLP of length O(log²|G|) in every generating set. An efficient solution to the constructive membership problem is crucial to many group-theoretic algorithms. It can be stated in terms of SLPs as follows. Given a finite group G = ⟨S⟩ and g ∈ G, find a straight-line program computing g over S. The constructive membership problem is often studied in the setting of black box groups. The elements are encoded by bit strings of a fixed length. Three oracles are provided for the group-theoretic functions of multiplication, inversion, and checking for equality with the identity. A black box algorithm is one which uses only these oracles. Hence, straight-line programs for black box groups are black box algorithms. Explicit straight-line programs are given for a wealth of finite simple groups in the online ATLAS of Finite Groups. Definition Informal definition Let G be a finite group and let S be a subset of G. A sequence L = (g1,…,gm) of elements of G is a straight-line program over S if each gi can be obtained by one of the following three rules: 1. gi ∈ S 2. gi = gj·gk for some j,k < i 3. gi = gj⁻¹ for some j < i. The straight-line cost c(g|S) of an element g ∈ G is the length of a shortest straight-line program over S computing g. The cost is infinite if g is not in the subgroup generated by S. A straight-line program is similar to a derivation in predicate logic. The elements of S correspond to axioms and the group operations correspond to the rules of inference. Formal definition Let G be a finite group and let S be a subset of G. A straight-line program of length m over S computing some g ∈ G is a sequence of expressions (w1,…,wm) such that for each i, wi is a symbol for some element of S, or wi = (wj,-1) for some j < i, or wi = (wj,wk) for some j,k < i, such that wm takes the value g when evaluated in G in the obvious manner. The original definition appearing in [2] requires that G = ⟨S⟩. The definition presented above is a common generalisation of this. From a computational perspective, the formal definition of a straight-line program has some advantages. Firstly, a sequence of abstract expressions requires less memory than terms over the generating set. Secondly, it allows straight-line programs to be constructed in one representation of G and evaluated in another. This is an important feature of some algorithms.[2] Examples The dihedral group D12 is the group of symmetries of a hexagon.
It can be generated by a 60 degree rotation ρ and one reflection λ. The leftmost column of the following is a straight-line program for λρ³: 1. λ 2. ρ 3. ρ² 4. ρ³ 5. λρ³ 1. λ is a generator. 2. ρ is a generator. 3. Second rule: (2).(2) 4. Second rule: (3).(2) 5. Second rule: (1).(4) In S6, the group of permutations on six letters, we can take α = (1 2 3 4 5 6) and β = (1 2) as generators. The leftmost column here is an example of a straight-line program to compute (1 2 3)(4 5 6): 1. α 2. β 3. α² 4. α²β 5. α²βα 6. α²βαβ 7. α²βαβα²βαβ 1. (1 2 3 4 5 6) 2. (1 2) 3. (1 3 5)(2 4 6) 4. (1 3 5 2 4 6) 5. (1 4)(2 5 3 6) 6. (1 4 2 5 3 6) 7. (1 2 3)(4 5 6) 1. α is a generator 2. β is a generator 3. Second rule: (1).(1) 4. Second rule: (3).(2) 5. Second rule: (4).(1) 6. Second rule: (5).(2) 7. Second rule: (6).(6) Applications Short descriptions of finite groups. Straight-line programs can be used to study compression of finite groups via first-order logic. They provide a tool to construct "short" sentences describing G (i.e. much shorter than |G|). In more detail, SLPs are used to prove that every finite simple group has a first-order description of length O(log|G|), and every finite group G has a first-order description of length O(log³|G|).[3] Straight-line programs computing generating sets for maximal subgroups of finite simple groups. The online ATLAS of Finite Group Representations[4] provides abstract straight-line programs for computing generating sets of maximal subgroups for many finite simple groups. Example: The group Sz(32), belonging to the infinite family of Suzuki groups, has rank 2 via generators a and b, where a has order 2, b has order 4, ab has order 5, ab² has order 25 and abab²ab³ has order 25. The following is a straight-line program that computes a generating set for a maximal subgroup E32·E32⋊C31. This straight-line program can be found in the online ATLAS of Finite Group Representations. 1. a 2. b 3. ab 4. abb 5. ababb 6. ababbb 7. (abb)¹⁸ 8. (abb)⁻¹⁸ 9. (abb)⁻¹⁸b 10. (abb)⁻¹⁸b(abb)¹⁸ 11. (ababb)¹⁴ 12. (ababb)⁻¹⁴ 13. (ababb)⁻¹⁴ababbb 14. (ababb)⁻¹⁴ababbb(ababb)¹⁴ 1. a is a generator. 2. b is a generator. 3. Second rule: (1).(2) 4. Second rule: (3).(2) 5. Second rule: (3).(4) 6. Second rule: (5).(2) 7. Second rule iterated: (4) multiplied 18 times 8. Third rule: (7) inverse 9. Second rule: (8).(2) 10. Second rule: (9).(7) 11. Second rule iterated: (5) multiplied 14 times 12. Third rule: (11) inverse 13. Second rule: (12).(6) 14. Second rule: (13).(11) Reachability theorem The reachability theorem states that, given a finite group G generated by S, each g ∈ G has a maximum cost of (1 + lg|G|)². This can be understood as a bound on how hard it is to generate a group element from the generators. Here the function lg(x) is an integer-valued version of the logarithm function: for k≥1 let lg(k) = max{r : 2^r ≤ k}. The idea of the proof is to construct a set Z = {z1,…,zs} that will work as a new generating set (s will be defined during the process). It is usually larger than S, but any element of G can be expressed as a word of length at most 2|Z| over Z. The set Z is constructed by inductively defining an increasing sequence of sets K(i). Let K(i) = {z1^α1·z2^α2·…·zi^αi : αj ∈ {0,1}}, where zi is the group element added to Z at the i-th step. Let c(i) denote the length of a shortest straight-line program that contains Z(i) = {z1,…,zi}. Let K(0) = {1G} and c(0) = 0. We define the set Z recursively: • If K(i)⁻¹K(i) = G, declare s to take the value i and stop.
• Else, choose some zi+1 ∈ G\K(i)⁻¹K(i) (which is non-empty) that minimises the "cost increase" c(i+1) − c(i). By this process, Z is defined in a way so that any g ∈ G can be written as an element of K(i)⁻¹K(i), effectively making it easier to generate from Z. We now need to verify the following claim to ensure that the process terminates within lg(|G|) many steps: Claim 1 — If i < s then |K(i+1)| = 2|K(i)|. Proof It is immediate that |K(i+1)| ≤ 2|K(i)|. Now suppose for a contradiction that |K(i+1)| < 2|K(i)|. By the pigeonhole principle there are k1,k2 ∈ K(i+1) with k1 = z1^α1·z2^α2·…·zi+1^αi+1 = z1^β1·z2^β2·…·zi+1^βi+1 = k2 for some αj,βj ∈ {0,1}. Let r be the largest integer such that αr ≠ βr. Assume WLOG that αr = 1. It follows that zr = zp^(−αp)·zp−1^(−αp−1)·…·z1^(−α1)·z1^β1·z2^β2·…·zq^βq, with p,q < r. Hence zr ∈ K(r−1)⁻¹K(r−1), a contradiction. The next claim is used to show that the cost of every group element is within the required bound. Claim 2 — c(i) ≤ i² − i. Proof Since c(0) = 0 it suffices to show that c(i+1) − c(i) ≤ 2i. The Cayley graph of G is connected and, if i < s, K(i)⁻¹K(i) ≠ G; then there is an element of the form g1·g2 ∈ G \ K(i)⁻¹K(i) with g1 ∈ K(i)⁻¹K(i) and g2 ∈ S. It takes at most 2i steps to generate g1 ∈ K(i)⁻¹K(i). There is no point in generating the element of maximum length, since it is the identity. Hence 2i − 1 steps suffice. To generate g1·g2 ∈ G\K(i)⁻¹K(i), 2i steps are sufficient. We now finish the theorem. Since K(s)⁻¹K(s) = G, any g ∈ G can be written in the form k1⁻¹·k2 with k1⁻¹,k2 ∈ K(s). By Claim 2, we need at most s² − s steps to generate Z(s) = Z, and no more than 2s − 1 steps to generate g from Z(s). Therefore c(g|S) ≤ s² + s − 1 ≤ lg²|G| + lg|G| − 1 ≤ (1 + lg|G|)². References 1. Babai, László, and Endre Szemerédi. "On the complexity of matrix group problems I." Foundations of Computer Science, 1984. 25th Annual Symposium on Foundations of Computer Science. IEEE, 1984 2. Ákos Seress. (2003). Permutation Group Algorithms. [Online]. Cambridge Tracts in Mathematics. (No. 152). Cambridge: Cambridge University Press. 3. Nies, André; Tent, Katrin (2017). "Describing finite groups by short first-order sentences". Israel Journal of Mathematics. 221: 85–115. arXiv:1409.8390. doi:10.1007/s11856-017-1563-2. 4. "ATLAS of Finite Group Representations - V3".
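The formal definition above suggests a direct evaluator: store only the instructions, then evaluate them in a chosen representation of the group. A sketch (the instruction encoding is ad hoc, not from the source) that evaluates the S6 example above, with permutations represented as tuples on the points 0..5:

```python
def compose(p, q):
    """Left-to-right product: apply p first, then q, matching the word order
    used in the examples above. A tuple p sends point i to p[i]."""
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def evaluate_slp(generators, program):
    """Instructions: ('gen', name), ('inv', j), or ('mul', j, k), where j and k
    are 0-based indices of earlier lines of the program."""
    results = []
    for ins in program:
        if ins[0] == 'gen':
            results.append(generators[ins[1]])
        elif ins[0] == 'inv':
            results.append(inverse(results[ins[1]]))
        else:
            results.append(compose(results[ins[1]], results[ins[2]]))
    return results[-1]

# alpha = (1 2 3 4 5 6) and beta = (1 2), written 0-based.
gens = {'alpha': (1, 2, 3, 4, 5, 0), 'beta': (1, 0, 2, 3, 4, 5)}
slp = [('gen', 'alpha'), ('gen', 'beta'),
       ('mul', 0, 0),             # alpha^2
       ('mul', 2, 1),             # alpha^2 beta
       ('mul', 3, 0),             # alpha^2 beta alpha
       ('mul', 4, 1),             # alpha^2 beta alpha beta
       ('mul', 5, 5)]             # square of the previous line
assert evaluate_slp(gens, slp) == (1, 2, 0, 4, 5, 3)   # (1 2 3)(4 5 6)
```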
Wikipedia
Straight skeleton In geometry, a straight skeleton is a method of representing a polygon by a topological skeleton. It is similar in some ways to the medial axis but differs in that the skeleton is composed of straight line segments, while the medial axis of a polygon may involve parabolic curves. However, both are homotopy-equivalent to the underlying polygon.[1] Straight skeletons were first defined for simple polygons by Aichholzer et al. (1995),[2] and generalized to planar straight-line graphs (PSLG) by Aichholzer & Aurenhammer (1996).[3] In their interpretation as projection of roof surfaces, they are already extensively discussed by G. A. Peschka (1877).[4] Definition The straight skeleton of a polygon is defined by a continuous shrinking process in which the edges of the polygon are moved inwards parallel to themselves at a constant speed. As the edges move in this way, the vertices where pairs of edges meet also move, at speeds that depend on the angle of the vertex. If one of these moving vertices collides with a nonadjacent edge, the polygon is split in two by the collision, and the process continues in each part. The straight skeleton is the set of curves traced out by the moving vertices in this process. In the illustration the top figure shows the shrinking process and the middle figure depicts the straight skeleton in blue. Algorithms The straight skeleton may be computed by simulating the shrinking process by which it is defined; a number of variant algorithms for computing it have been proposed, differing in the assumptions they make on the input and in the data structures they use for detecting combinatorial changes in the input polygon as it shrinks. The following algorithms consider an input that forms a polygon, a polygon with holes, or a PSLG. For a polygonal input we denote the number of vertices by n and the number of reflex (concave, i.e., angle greater than π) vertices by r. If the input is a PSLG then we consider the initial wavefront structure, which forms a set of polygons, and again denote by n the number of vertices and by r the number of reflex vertices with respect to the propagation direction. Most of the algorithms listed here are designed and analyzed in the real RAM model of computation. • Aichholzer et al.[2][3] showed how to compute straight skeletons of PSLGs in time O(n^3 log n), or more precisely time O((n^2 + f) log n), where n is the number of vertices of the input polygon and f is the number of flip events during the construction. The best known bound for f is O(n^3). • An algorithm with a worst case running time in O(nr log n), or simply O(n^2 log n), is given by Huber and Held (2010, 2011), who argue that their approach is likely to run in near-linear time for many inputs.[5][6] • Petr Felkel and Štěpán Obdržálek designed an algorithm for simple polygons that is said to have an efficiency of O(nr + n log r).[7][8] However, it has been shown that their algorithm is incorrect.[9][10] • By using data structures for the bichromatic closest pair problem, Eppstein and Erickson showed how to construct straight skeletons using a linear number of closest pair data structure updates.
A closest pair data structure based on quadtrees provides an O(nr + n log n) time algorithm, while a significantly more complicated data structure leads to the better asymptotic time bound O(n^(1+ε) + n^(8/11+ε) r^(9/11+ε)), or more simply O(n^(17/11+ε)), where ε is any constant greater than zero.[11] This remains the best worst-case time bound known for straight skeleton construction with unrestricted inputs, but is complicated and has not been implemented. • For simple polygons in general position, the problem of straight skeleton construction is easier. Cheng, Mencel, and Vigneron showed how to compute the straight skeleton of simple polygons in time O(n log n log r + r^(4/3+ε)).[12] In the worst case, r may be on the order of n, in which case this time bound may be simplified to O(n^(4/3+ε)). If the vertices of the input polygon have O(log n)-bit rational coordinates, their algorithm can be improved to run in O(n log n) time, even if the input polygon is not in general position. • A monotone polygon with respect to a line L is a polygon with the property that every line orthogonal to L intersects the polygon in a single interval. When the input is a monotone polygon, its straight skeleton can be constructed in time O(n log^2 n).[13] Applications Each point within the input polygon can be lifted into three-dimensional space by using the time at which the shrinking process reaches that point as the z-coordinate of the point. The resulting three-dimensional surface has constant height on the edges of the polygon, and rises at constant slope from them except for the points of the straight skeleton itself, where surface patches at different angles meet. In this way, the straight skeleton can be used as the set of ridge lines of a building roof, based on walls in the form of the initial polygon.[2][14] The bottom figure in the illustration depicts a surface formed from the straight skeleton in this way. Demaine, Demaine and Lubiw used the straight skeleton as part of a technique for folding a sheet of paper so that a given polygon can be cut from it with a single straight cut (the fold-and-cut theorem), and related origami design problems.[15] Barequet et al. use straight skeletons in an algorithm for finding a three-dimensional surface that interpolates between two given polygonal chains.[16] Tănase and Veltkamp propose to decompose concave polygons into unions of convex regions using straight skeletons, as a preprocessing step for shape matching in image processing.[17] Bagheri and Razzazi use straight skeletons to guide vertex placement in a graph drawing algorithm in which the graph drawing is constrained to lie inside a polygonal boundary.[18] The straight skeleton can also be used to construct an offset curve of a polygon, with mitered corners, analogously to the construction of an offset curve with rounded corners formed from the medial axis. Tomoeda and Sugihara apply this idea in the design of signage, visible from wide angles, with an illusory appearance of depth.[19] Similarly, Asente and Carr use straight skeletons to design color gradients that match letter outlines or other shapes.[20] As with other types of skeleton such as the medial axis, the straight skeleton can be used to collapse a two-dimensional area to a simplified one-dimensional representation of the area.
For instance, Haunert and Sester describe an application of this type for straight skeletons in geographic information systems, in finding the centerlines of roads.[21][22] Every tree with no degree-two vertices can be realized as the straight skeleton of a convex polygon.[23] The convex hull of the roof shape corresponding to this straight skeleton forms a Steinitz realization of the Halin graph formed from the tree by connecting its leaves in a cycle. Higher dimensions Barequet et al. defined a version of straight skeletons for three-dimensional polyhedra, described algorithms for computing it, and analyzed its complexity on several different types of polyhedron.[24] Huber et al. investigated metric spaces under which the corresponding Voronoi diagrams and straight skeletons coincide. For two dimensions, the characterization of such metric spaces is complete. For higher dimensions, this method can be interpreted as a generalization of straight skeletons of certain input shapes to arbitrary dimensions by means of Voronoi diagrams.[25] References 1. Huber, Stefan (2018), "The Topology of Skeletons and Offsets" (PDF), Proceedings of the 34th European Workshop on Computational Geometry (EuroCG'18). 2. Aichholzer, Oswin; Aurenhammer, Franz; Alberts, David; Gärtner, Bernd (1995), "A novel type of skeleton for polygons", Journal of Universal Computer Science, 1 (12): 752–761, doi:10.1007/978-3-642-80350-5_65, MR 1392429. 3. Aichholzer, Oswin; Aurenhammer, Franz (1996), "Straight skeletons for general polygonal figures in the plane", Proc. 2nd Ann. Int. Conf. Computing and Combinatorics (COCOON '96), Lecture Notes in Computer Science, vol. 1090, Springer-Verlag, pp. 117–126 4. Peschka, Gustav A. (1877), Kotirte Ebenen: Kotirte Projektionen und deren Anwendung; Vorträge, Brünn: Buschak & Irrgang, doi:10.14463/GBV:865177619. 5. Huber, Stefan; Held, Martin (2010), "Computing straight skeletons of planar straight-line graphs based on motorcycle graphs" (PDF), Proceedings of the 22nd Canadian Conference on Computational Geometry. 6. Huber, Stefan; Held, Martin (2011), "Theoretical and practical results on straight skeletons of planar straight-line graphs" (PDF), Proceedings of the Twenty-Seventh Annual Symposium on Computational Geometry (SCG'11), June 13–15, 2011, Paris, France, pp. 171–178. 7. "CenterLineReplacer", FME Transformers, Safe Software, retrieved 2013-08-05. 8. Felkel, Petr; Obdržálek, Štěpán (1998), "Straight skeleton implementation", SCCG 98: Proceedings of the 14th Spring Conference on Computer Graphics, pp. 210–218. 9. Huber, Stefan (2012), Computing Straight Skeletons and Motorcycle Graphs: Theory and Practice, Shaker Verlag, ISBN 978-3-8440-0938-5. 10. Yakersberg, Evgeny (2004), Morphing Between Geometric Shapes Using Straight-Skeleton-Based Interpolation., Israel Institute of Technology. 11. Eppstein, David; Erickson, Jeff (1999), "Raising roofs, crashing cycles, and playing pool: applications of a data structure for finding pairwise interactions", Discrete and Computational Geometry, 22 (4): 569–592, doi:10.1007/PL00009479, MR 1721026, S2CID 12460625. 12. Cheng, Siu-Wing; Mencel, Liam; Vigneron, Antoine (2016), "A faster algorithm for computing straight skeletons", ACM Transactions on Algorithms, 12 (3): 44:1–44:21, arXiv:1405.4691. 13. Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter (February 2015). "A Simple Algorithm for Computing Positively Weighted Straight Skeletons of Monotone Polygons" (PDF). Information Processing Letters. 115 (2): 243–247. 
doi:10.1016/j.ipl.2014.09.021. PMC 4308025. PMID 25648376. As Biedl et al. point out, an earlier algorithm for monotone polygons by Das et al. is incorrect as described, and at best works only for inputs in general position that do not have vertex-vertex events: Das, Gautam K.; Mukhopadhyay, Asish; Nandy, Subhas C.; Patil, Sangameswar; Rao, S. V. (2010), "Computing the straight skeletons of a monotone polygon in O(n log n) time" (PDF), Proceedings of the 22nd Canadian Conference on Computational Geometry. 14. Bélanger, David (2000), Designing Roofs of Buildings. 15. Demaine, Erik D.; Demaine, Martin L.; Lubiw, Anna (1998), "Folding and cutting paper", Revised Papers from the Japan Conference on Discrete and Computational Geometry (JCDCG'98), Lecture Notes in Computer Science, vol. 1763, Springer-Verlag, pp. 104–117, doi:10.1007/b75044, ISBN 978-3-540-67181-7, S2CID 32962663. 16. Barequet, Gill; Goodrich, Michael T.; Levi-Steiner, Aya; Steiner, Dvir (2003), "Straight-skeleton based contour interpolation", Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 119–127. 17. Tănase, Mirela; Veltkamp, Remco C. (2003), "Polygon decomposition based on the straight line skeleton", Proceedings of the 19th Annual ACM Symposium on Computational Geometry, pp. 58–67, doi:10.1145/777792.777802, S2CID 18173658. 18. Bagheri, Alireza; Razzazi, Mohammadreza (2004), "Drawing free trees inside simple polygons using polygon skeleton", Computing and Informatics, 23 (3): 239–254, MR 2165282. 19. Tomoeda, Akiyasu; Sugihara, Kokichi (2012), "Computational creation of a new illusionary solid sign", Ninth International Symposium on Voronoi Diagrams in Science and Engineering (ISVD 2012), pp. 144–147, doi:10.1109/ISVD.2012.26, S2CID 27610348. 20. Asente, Paul; Carr, Nathan (2013), "Creating contour gradients using 3D bevels", Proceedings of the Symposium on Computational Aesthetics (CAE '13, Anaheim, California), New York, NY, USA: ACM, pp. 63–66, doi:10.1145/2487276.2487283, ISBN 978-1-4503-2203-4, S2CID 17302186. 21. Haunert, Jan-Henrik; Sester, Monika (2008), "Area collapse and road centerlines based on straight skeletons", GeoInformatica, 12 (2): 169–191, doi:10.1007/s10707-007-0028-x, S2CID 2169666. 22. Raleigh, David Baring (2008), Straight Skeleton Survey Adjustment Of Road Centerlines From Gps Coarse Acquisition Data: A Case Study In Bolivia, Ohio State University, Geodetic Science and Surveying. 23. Aichholzer, Oswin; Cheng, Howard; Devadoss, Satyan L.; Hackl, Thomas; Huber, Stefan; Li, Brian; Risteski, Andrej (2012), "What makes a tree a straight skeleton?" (PDF), Proceedings of the 24th Canadian Conference on Computational Geometry (CCCG'12). 24. Barequet, Gill; Eppstein, David; Goodrich, Michael T.; Vaxman, Amir (2008), "Straight skeletons of three-dimensional polyhedra", Proc. 16th European Symposium on Algorithms, Lecture Notes in Computer Science, vol. 5193, Springer-Verlag, pp. 148–160, arXiv:0805.0022, doi:10.1007/978-3-540-87744-8_13. 25. Huber, Stefan; Aichholzer, Oswin; Hackl, Thomas; Vogtenhuber, Birgit (2014), "Straight skeletons by means of Voronoi diagrams under polyhedral distance functions" (PDF), Proc. 26th Canadian Conference on Computational Geometry (CCCG'14). External links • Erickson, Jeff. "Straight Skeleton of a Simple Polygon". • 2D Straight Skeleton in CGAL, the Computational Geometry Algorithms Library • Straight Skeleton for polygon with holes Straight Skeleton builder implemented in java. • Amit Parnerkar, Sarnath Ramnath. 
"Engineering an efficient algorithm for finding the straight skeleton of a simple polygon in O(n log n)". • STALGO: "STALGO is an industrial-strength C++ software package for computing straight skeletons and mitered offset-curves." by Stefan Huber.
Wikipedia
Straightening theorem for vector fields In differential calculus, the domain-straightening theorem states that, given a vector field $X$ on a manifold, there exist local coordinates $y_{1},\dots ,y_{n}$ such that $X=\partial /\partial y_{1}$ in a neighborhood of a point where $X$ is nonzero. The theorem is also known as straightening out of a vector field. The Frobenius theorem in differential geometry can be considered as a higher-dimensional generalization of this theorem. Proof It is clear that we only have to find such coordinates at 0 in $\mathbb {R} ^{n}$. First we write $X=\sum _{j}f_{j}(x){\partial \over \partial x_{j}}$ where $x$ is some coordinate system at $0$. Let $f=(f_{1},\dots ,f_{n})$. By linear change of coordinates, we can assume $f(0)=(1,0,\dots ,0).$ Let $\Phi (t,p)$ be the solution of the initial value problem ${\dot {x}}=f(x),x(0)=p$ and let $\psi (x_{1},\dots ,x_{n})=\Phi (x_{1},(0,x_{2},\dots ,x_{n})).$ $\Phi $ (and thus $\psi $) is smooth by smooth dependence on initial conditions in ordinary differential equations. It follows that ${\partial \over \partial x_{1}}\psi (x)=f(\psi (x))$, and, since $\psi (0,x_{2},\dots ,x_{n})=\Phi (0,(0,x_{2},\dots ,x_{n}))=(0,x_{2},\dots ,x_{n})$, the differential $d\psi $ is the identity at $0$. Thus, $y=\psi ^{-1}(x)$ is a coordinate system at $0$. Finally, since $x=\psi (y)$, we have: ${\partial x_{j} \over \partial y_{1}}=f_{j}(\psi (y))=f_{j}(x)$ and so ${\partial \over \partial y_{1}}=X$ as required. References • Theorem B.7 in Camille Laurent-Gengoux, Anne Pichereau, Pol Vanhaecke. Poisson Structures, Springer, 2013.
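A standard illustration (not part of the source text): consider the rotation field $X=-y{\partial \over \partial x}+x{\partial \over \partial y}$ on $\mathbb {R} ^{2}\setminus \{0\}$. In polar coordinates $x=r\cos \theta $, $y=r\sin \theta $ one computes ${\partial \over \partial \theta }=-r\sin \theta \,{\partial \over \partial x}+r\cos \theta \,{\partial \over \partial y}=-y{\partial \over \partial x}+x{\partial \over \partial y}=X$, so the coordinates $y_{1}=\theta $, $y_{2}=r$ straighten $X$ near any point where it is nonzero. This matches the proof above: the flow $\Phi (t,p)$ of $X$ is rotation by the angle $t$, and transporting a transverse slice along the flow produces exactly the polar coordinate system.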
Wikipedia
Strang splitting Strang splitting is a numerical method for solving differential equations that are decomposable into a sum of differential operators. It is named after Gilbert Strang. It is used to speed up calculation for problems involving operators on very different time scales, for example, chemical reactions in fluid dynamics, and to solve multidimensional partial differential equations by reducing them to a sum of one-dimensional problems. Fractional step methods As a precursor to Strang splitting, consider a differential equation of the form ${\frac {d{y}}{dt}}=L_{1}({y})+L_{2}({y})$ where $L_{1}$, $L_{2}$ are differential operators. If $L_{1}$ and $L_{2}$ were constant coefficient matrices, then the exact solution to the associated initial value problem would be $y(t)=e^{(L_{1}+L_{2})t}y_{0}$. If $L_{1}$ and $L_{2}$ commute, then by the exponential laws this is equivalent to $y(t)=e^{L_{1}t}e^{L_{2}t}y_{0}$. If they do not, then by the Baker–Campbell–Hausdorff formula it is still possible to replace the exponential of the sum by a product of exponentials at the cost of a first-order error: $e^{(L_{1}+L_{2})t}y_{0}=e^{L_{1}t}e^{L_{2}t}y_{0}+{\mathcal {O}}(t^{2})$ (the local error over a step of length $t$ is ${\mathcal {O}}(t^{2})$, which accumulates to a first-order global error over a fixed time interval). This gives rise to a numerical scheme where one, instead of solving the original initial problem, solves both subproblems alternating: ${\tilde {y}}_{1}=e^{L_{1}\Delta t}y_{0}$ $y_{1}=e^{L_{2}\Delta t}{\tilde {y}}_{1}$ ${\tilde {y}}_{2}=e^{L_{1}\Delta t}y_{1}$ $y_{2}=e^{L_{2}\Delta t}{\tilde {y}}_{2}$ etc. In this context, $e^{L_{1}\Delta t}$ is a numerical scheme solving the subproblem ${\frac {d{y}}{dt}}=L_{1}({y})$ to first order. The approach is not restricted to linear problems, that is, $L_{1}$ can be any differential operator. Strang splitting Strang splitting extends this approach to second order by choosing another order of operations. Instead of taking full time steps with each operator, one performs time steps as follows: ${\tilde {y}}_{1}=e^{L_{1}{\frac {\Delta t}{2}}}y_{0}$ ${\bar {y}}_{1}=e^{L_{2}\Delta t}{\tilde {y}}_{1}$ $y_{1}=e^{L_{1}{\frac {\Delta t}{2}}}{\bar {y}}_{1}$ ${\tilde {y}}_{2}=e^{L_{1}{\frac {\Delta t}{2}}}y_{1}$ ${\bar {y}}_{2}=e^{L_{2}\Delta t}{\tilde {y}}_{2}$ $y_{2}=e^{L_{1}{\frac {\Delta t}{2}}}{\bar {y}}_{2}$ etc. One can prove that Strang splitting is second order by using the Baker–Campbell–Hausdorff formula, rooted tree analysis, or a direct comparison of the error terms using Taylor expansion. For the scheme to be second order accurate, $e^{\cdots }$ must be a second order approximation to the solution operator as well. See also • List of operator splitting topics • Matrix splitting References • Strang, Gilbert. On the construction and comparison of difference schemes. SIAM Journal on Numerical Analysis 5.3 (1968): 506–517. doi:10.1137/0705041 • McLachlan, Robert I., and G. Reinout W. Quispel. Splitting methods. Acta Numerica 11 (2002): 341–434. doi:10.1017/S0962492902000053 • LeVeque, Randall J., Finite volume methods for hyperbolic problems. Vol. 31. Cambridge University Press, 2002. (pbk ISBN 0-521-00924-3)
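The first- and second-order behavior of the two schemes above is easy to observe numerically. Below is a minimal sketch for a linear system; the matrices `L1` and `L2` are illustrative example data, not from the article, and `scipy`'s matrix exponential stands in for the exact subproblem solvers.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative example data (assumed, not from the article): rotation plus damping.
L1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
L2 = np.array([[-0.5, 0.0], [0.0, -0.1]])
y0 = np.array([1.0, 0.0])
T, steps = 1.0, 100
dt = T / steps

exact = expm((L1 + L2) * T) @ y0        # reference solution

E1, E2 = expm(L1 * dt), expm(L2 * dt)   # full sub-steps
H1 = expm(L1 * dt / 2)                  # half step, used by Strang splitting
y_lie, y_strang = y0.copy(), y0.copy()
for _ in range(steps):
    y_lie = E2 @ (E1 @ y_lie)               # e^{L2 dt} e^{L1 dt}: first order
    y_strang = H1 @ (E2 @ (H1 @ y_strang))  # e^{L1 dt/2} e^{L2 dt} e^{L1 dt/2}

print("Lie splitting error:   ", np.linalg.norm(y_lie - exact))     # O(dt)
print("Strang splitting error:", np.linalg.norm(y_strang - exact))  # O(dt^2)
```

Halving `dt` should roughly halve the first error and quarter the second, matching the stated orders.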
Wikipedia
Strange nonchaotic attractor In mathematics, a strange nonchaotic attractor (SNA) is a form of attractor which, while converging to a limit, is strange, because it is not piecewise differentiable, and also non-chaotic, in that its Lyapunov exponents are non-positive.[1] SNAs were introduced as a topic of study by Grebogi et al. in 1984.[1][2] SNAs can be distinguished from periodic, quasiperiodic and chaotic attractors using the 0-1 test for chaos.[3] Periodically driven damped nonlinear systems can exhibit complex dynamics characterized by strange chaotic attractors, where strange refers to the fractal geometry of the attractor and chaotic refers to the exponential sensitivity of orbits on the attractor. Quasiperiodically driven systems forced by incommensurate frequencies are natural extensions of periodically driven ones and are phenomenologically richer. In addition to periodic or quasiperiodic motion, they can exhibit chaotic or nonchaotic motion on strange attractors. Although quasiperiodic forcing is not necessary for strange nonchaotic dynamics (e.g., the period doubling accumulation point of a period doubling cascade), if quasiperiodic driving is not present, strange nonchaotic attractors are typically not robust and not expected to occur naturally because they exist only when the system is carefully tuned to a precise critical parameter value. On the other hand, it was shown in the paper of Grebogi et al. that SNAs can be robust when the system is quasiperiodically driven. The first experiment to demonstrate a robust strange nonchaotic attractor involved the buckling of a magnetoelastic ribbon driven quasiperiodically by two incommensurate frequencies in the golden ratio.[4] Strange nonchaotic attractors have been robustly observed in laboratory experiments involving magnetoelastic ribbons, electrochemical cells, electronic circuits, and a neon glow discharge. In 2015, strange nonchaotic dynamics were identified for the pulsating RR Lyrae variable KIC 5520878, as well as three similar stars observed by the Kepler space telescope, which oscillate in two frequency modes that are nearly in the golden ratio.[5][6][7][8] References 1. Lluís Alsedà (March 8, 2007). "On the definition of Strange Nonchaotic Attractor" (PDF). Retrieved 2014-05-07. 2. Grebogi, Celso; Ott, Edward; Pelikan, Steven; Yorke, James A. (1984). "Strange attractors that are not chaotic". Physica D: Nonlinear Phenomena. Elsevier BV. 13 (1–2): 261–268. doi:10.1016/0167-2789(84)90282-3. ISSN 0167-2789. 3. Gopal, R.; Venkatesan, A.; Lakshmanan, M. (2013). "Applicability of 0-1 Test for Strange Nonchaotic Attractors". Chaos: An Interdisciplinary Journal of Nonlinear Science. 23 (2): 023123. arXiv:1303.0169. Bibcode:2013Chaos..23b3123G. doi:10.1063/1.4808254. PMID 23822488. 4. Ditto, W. L.; Spano, M. L.; Savage, H. T.; Rauseo, S. N.; Heagy, J.; Ott, E. (1990-07-30). "Experimental observation of a strange nonchaotic attractor". Physical Review Letters. American Physical Society (APS). 65 (5): 533–536. doi:10.1103/physrevlett.65.533. ISSN 0031-9007. 5. Lindner, John F.; Kohar, Vivek; Kia, Behnam; Hippke, Michael; Learned, John G.; Ditto, William L. (2015-02-03). "Strange Nonchaotic Stars". Physical Review Letters. American Physical Society (APS). 114 (5): 054101. arXiv:1501.01747. doi:10.1103/physrevlett.114.054101. ISSN 0031-9007. 6. "Applied Chaos Laboratory". appliedchaoslab.phys.hawaii.edu. 7. Clara Moskowitz (2015-02-09). "Strange Stars Pulsate According to the Golden Ratio". Scientific American. Retrieved 2020-01-11. 
8. Lindner, John F.; Kohar, Vivek; Kia, Behnam; Hippke, Michael; Learned, John G.; Ditto, William L. (2015). "Stars That Act Irrational". Physical Review Letters. 114 (5): 054101. arXiv:1501.01747. Bibcode:2015PhRvL.114e4101L. doi:10.1103/PhysRevLett.114.054101. PMID 25699444.
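Since the article points to the 0-1 test for distinguishing these attractors, a compact numerical sketch may help. This is a simplified single-frequency version of the Gottwald–Melbourne test (robust practice averages over many values of c); the logistic-map driver is hypothetical demo data, and K is only a rough indicator, near 1 for chaotic and near 0 for regular dynamics.

```python
import numpy as np

def zero_one_test(phi, c=1.7, ncut=None):
    """Simplified 0-1 test: K near 1 suggests chaos, near 0 regular motion."""
    n = len(phi)
    ncut = ncut or n // 10
    j = np.arange(n)
    # Translation variables driven by the observable phi.
    p = np.cumsum(phi * np.cos(c * j))
    q = np.cumsum(phi * np.sin(c * j))
    # Mean square displacement as a function of the lag.
    lags = np.arange(1, ncut)
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in lags])
    # Growth-rate indicator: correlation of the MSD with the lag.
    return np.corrcoef(lags, M)[0, 1]

def logistic(r, n=5000, x=0.3):
    """Hypothetical demo data: a logistic-map time series."""
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

print(zero_one_test(logistic(4.0)))  # close to 1: chaotic
print(zero_one_test(logistic(3.2)))  # close to 0: periodic
```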
Wikipedia
Strangulated graph In graph theoretic mathematics, a strangulated graph is a graph in which deleting the edges of any induced cycle of length greater than three would disconnect the remaining graph. That is, they are the graphs in which every peripheral cycle is a triangle. Examples In a maximal planar graph, or more generally in every polyhedral graph, the peripheral cycles are exactly the faces of a planar embedding of the graph, so a polyhedral graph is strangulated if and only if all the faces are triangles, or equivalently it is maximal planar. Every chordal graph is strangulated, because the only induced cycles in chordal graphs are triangles, so there are no longer cycles to delete. Characterization A clique-sum of two graphs is formed by identifying together two equal-sized cliques in each graph, and then possibly deleting some of the clique edges. For the version of clique-sums relevant to strangulated graphs, the edge deletion step is omitted. A clique-sum of this type between two strangulated graphs results in another strangulated graph, for every long induced cycle in the sum must be confined to one side or the other (otherwise it would have a chord between the vertices at which it crossed from one side of the sum to the other), and the disconnected parts of that side formed by deleting the cycle must remain disconnected in the clique-sum. Every chordal graph can be decomposed in this way into a clique-sum of complete graphs, and every maximal planar graph can be decomposed into a clique-sum of 4-vertex-connected maximal planar graphs. As Seymour & Weaver (1984) show, these are the only possible building blocks of strangulated graphs: the strangulated graphs are exactly the graphs that can be formed as clique-sums of complete graphs and maximal planar graphs. See also • Line perfect graph, a graph in which every odd cycle is a triangle References • Seymour, P. D.; Weaver, R. W. (1984), "A generalization of chordal graphs", Journal of Graph Theory, 8 (2): 241–251, doi:10.1002/jgt.3190080206, MR 0742878.
Wikipedia
Strassen algorithm In linear algebra, the Strassen algorithm, named after Volker Strassen, is an algorithm for matrix multiplication. It is faster than the standard matrix multiplication algorithm for large matrices, with a better asymptotic complexity, although the naive algorithm is often better for smaller matrices. The Strassen algorithm is slower than the fastest known algorithms for extremely large matrices, but such galactic algorithms are not useful in practice, as they are much slower for matrices of practical size. For small matrices even faster algorithms exist. Strassen's algorithm works for any ring, such as plus/multiply, but not for all semirings, such as min-plus or Boolean algebra, where the naive algorithm (so-called combinatorial matrix multiplication) still works. History Volker Strassen first published this algorithm in 1969 and thereby proved that the $n^{3}$ general matrix multiplication algorithm was not optimal.[1] The Strassen algorithm's publication resulted in more research about matrix multiplication that led to both asymptotically lower bounds and improved computational upper bounds. Algorithm Let $A$, $B$ be two square matrices over a ring ${\mathcal {R}}$, for example matrices whose entries are integers or the real numbers. The goal of matrix multiplication is to calculate the matrix product $C=AB$. The following exposition of the algorithm assumes that all of these matrices have sizes that are powers of two (i.e., $A,\,B,\,C\in \operatorname {Mat} _{2^{n}\times 2^{n}}({\mathcal {R}})$), but this is only conceptually necessary: if the matrices $A$, $B$ are not of type $2^{n}\times 2^{n}$, the "missing" rows and columns can be filled with zeros to obtain matrices with sizes of powers of two, though real implementations of the algorithm do not do this in practice. The Strassen algorithm partitions $A$, $B$ and $C$ into equally sized block matrices $A={\begin{bmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{bmatrix}},\quad B={\begin{bmatrix}B_{11}&B_{12}\\B_{21}&B_{22}\end{bmatrix}},\quad C={\begin{bmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\end{bmatrix}},\quad $ with $A_{ij},B_{ij},C_{ij}\in \operatorname {Mat} _{2^{n-1}\times 2^{n-1}}({\mathcal {R}})$. The naive algorithm would be: ${\begin{bmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\end{bmatrix}}={\begin{bmatrix}A_{11}B_{11}+A_{12}B_{21}&A_{11}B_{12}+A_{12}B_{22}\\A_{21}B_{11}+A_{22}B_{21}&A_{21}B_{12}+A_{22}B_{22}\end{bmatrix}}.$ This construction does not reduce the number of multiplications: 8 multiplications of matrix blocks are still needed to calculate the $C_{ij}$ matrices, the same number of multiplications needed when using standard matrix multiplication. The Strassen algorithm defines instead new matrices: ${\begin{aligned}M_{1}&=(A_{11}+A_{22})(B_{11}+B_{22});\\M_{2}&=(A_{21}+A_{22})B_{11};\\M_{3}&=A_{11}(B_{12}-B_{22});\\M_{4}&=A_{22}(B_{21}-B_{11});\\M_{5}&=(A_{11}+A_{12})B_{22};\\M_{6}&=(A_{21}-A_{11})(B_{11}+B_{12});\\M_{7}&=(A_{12}-A_{22})(B_{21}+B_{22}),\\\end{aligned}}$ using only 7 multiplications (one for each $M_{k}$) instead of 8. We may now express the $C_{ij}$ in terms of $M_{k}$: ${\begin{bmatrix}C_{11}&C_{12}\\C_{21}&C_{22}\end{bmatrix}}={\begin{bmatrix}M_{1}+M_{4}-M_{5}+M_{7}&M_{3}+M_{5}\\M_{2}+M_{4}&M_{1}-M_{2}+M_{3}+M_{6}\end{bmatrix}}.$ We recursively iterate this division process until the submatrices degenerate into numbers (elements of the ring ${\mathcal {R}}$). 
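A direct transcription of the recursion may clarify the bookkeeping. This is a hedged sketch, not an optimized implementation: it assumes power-of-two sizes, and the crossover constant is an illustrative choice (the crossover question is discussed below).

```python
import numpy as np

CROSSOVER = 64  # illustrative; real crossover points are hardware-dependent

def strassen(A, B):
    """Multiply square power-of-two-sized matrices via Strassen's recursion."""
    n = A.shape[0]
    if n <= CROSSOVER:
        return A @ B  # fall back to the standard product for small blocks
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # The seven products M1..M7 from the formulas above.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C

# Quick check against the standard product.
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
B = rng.standard_normal((256, 256))
assert np.allclose(strassen(A, B), A @ B)
```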
If, as mentioned above, the original matrix had a size that was not a power of 2, then the resulting product will have zero rows and columns just like $A$ and $B$, and these will then be stripped at this point to obtain the (smaller) matrix $C$ we really wanted. Practical implementations of Strassen's algorithm switch to standard methods of matrix multiplication for small enough submatrices, for which those algorithms are more efficient. The particular crossover point for which Strassen's algorithm is more efficient depends on the specific implementation and hardware. Earlier authors had estimated that Strassen's algorithm is faster for matrices with widths from 32 to 128 for optimized implementations.[2] However, it has been observed that this crossover point has been increasing in recent years, and a 2010 study found that even a single step of Strassen's algorithm is often not beneficial on current architectures, compared to a highly optimized traditional multiplication, until matrix sizes exceed 1000 or more, and even for matrix sizes of several thousand the benefit is typically marginal at best (around 10% or less).[3] A more recent study (2016) observed benefits for matrices as small as 512 and a benefit around 20%.[4] Winograd form It is possible to reduce the number of matrix additions by instead using the following form discovered by Winograd: ${\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}A&C\\B&D\end{bmatrix}}={\begin{bmatrix}aA+bB&w+v+(a+b-c-d)D\\w+u+d(B+C-A-D)&w+u+v\end{bmatrix}}$ where $u=(c-a)(C-D)$, $v=(c+d)(C-A)$, and $w=aA+(c+d-a)(A+D-C)$. This reduces the number of matrix additions and subtractions from 18 to 15. The number of matrix multiplications is still 7, and the asymptotic complexity is the same.[5] Asymptotic complexity The outline of the algorithm above showed that one can get away with just 7, instead of the traditional 8, matrix-matrix multiplications for the sub-blocks of the matrix. On the other hand, one has to do additions and subtractions of blocks, though this is of no concern for the overall complexity: Adding matrices of size $N/2$ requires only $(N/2)^{2}$ operations whereas multiplication is substantially more expensive (traditionally $2(N/2)^{3}$ addition or multiplication operations). The question then is how many operations exactly one needs for Strassen's algorithm, and how this compares with the standard matrix multiplication that takes approximately $2N^{3}$ (where $N=2^{n}$) arithmetic operations, i.e. an asymptotic complexity $\Theta (N^{3})$. The number of additions and multiplications required in the Strassen algorithm can be calculated as follows: let $f(n)$ be the number of operations for a $2^{n}\times 2^{n}$ matrix. Then by recursive application of the Strassen algorithm, we see that $f(n)=7f(n-1)+l4^{n}$, for some constant $l$ that depends on the number of additions performed at each application of the algorithm. Hence $f(n)=(7+o(1))^{n}$, i.e., the asymptotic complexity for multiplying matrices of size $N=2^{n}$ using the Strassen algorithm is $O([7+o(1)]^{n})=O(N^{\log _{2}7+o(1)})\approx O(N^{2.8074})$. The reduction in the number of arithmetic operations however comes at the price of a somewhat reduced numerical stability,[6] and the algorithm also requires significantly more memory compared to the naive algorithm. 
Both initial matrices must have their dimensions expanded to the next power of 2, which results in storing up to four times as many elements, and the seven auxiliary matrices each contain a quarter of the elements in the expanded ones. Strassen's algorithm needs to be compared to the "naive" way of doing the matrix multiplication that would require 8 instead of 7 multiplications of sub-blocks. This would then give rise to the complexity one expects from the standard approach: $O(8^{\log _{2}N})=O(N^{\log _{2}8})=O(N^{3})$. The comparison of these two algorithms shows that asymptotically, Strassen's algorithm is faster: There exists a size $N_{\text{threshold}}$ so that matrices that are larger are more efficiently multiplied with Strassen's algorithm than the "traditional" way. However, the asymptotic statement does not imply that Strassen's algorithm is always faster even for small matrices, and in practice this is in fact not the case: For small matrices, the cost of the additional additions of matrix blocks outweighs the savings in the number of multiplications. There are also other factors not captured by the analysis above, such as the difference in cost on today's hardware between loading data from memory onto processors vs. the cost of actually doing operations on this data. As a consequence of these sorts of considerations, Strassen's algorithm is typically only used on "large" matrices. This kind of effect is even more pronounced with alternative algorithms such as the one by Coppersmith and Winograd: While asymptotically even faster, the cross-over point $N_{\text{threshold}}$ is so large that the algorithm is not generally used on matrices one encounters in practice. Rank or bilinear complexity The bilinear complexity or rank of a bilinear map is an important concept in the asymptotic complexity of matrix multiplication. The rank of a bilinear map $\phi :\mathbf {A} \times \mathbf {B} \rightarrow \mathbf {C} $ over a field F is defined as (somewhat of an abuse of notation) $R(\phi /\mathbf {F} )=\min \left\{r\left|\exists f_{i}\in \mathbf {A} ^{*},g_{i}\in \mathbf {B} ^{*},w_{i}\in \mathbf {C} ,\forall \mathbf {a} \in \mathbf {A} ,\mathbf {b} \in \mathbf {B} ,\phi (\mathbf {a} ,\mathbf {b} )=\sum _{i=1}^{r}f_{i}(\mathbf {a} )g_{i}(\mathbf {b} )w_{i}\right.\right\}$ In other words, the rank of a bilinear map is the length of its shortest bilinear computation.[7] The existence of Strassen's algorithm shows that the rank of $2\times 2$ matrix multiplication is no more than seven. To see this, let us express this algorithm (alongside the standard algorithm) as such a bilinear computation. In the case of matrices, the dual spaces A* and B* consist of maps into the field F induced by a scalar double-dot product (i.e., in this case the sum of all the entries of a Hadamard product). 
The two bilinear computations are tabulated below; each row $i$ lists $f_{i}(\mathbf {a} )$, $g_{i}(\mathbf {b} )$, $w_{i}$ first for the standard algorithm and then for the Strassen algorithm.
$i=1$: standard ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}$; Strassen ${\begin{bmatrix}1&0\\0&1\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}1&0\\0&1\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}1&0\\0&1\end{bmatrix}}$
$i=2$: standard ${\begin{bmatrix}0&1\\0&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&0\\1&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}$; Strassen ${\begin{bmatrix}0&0\\1&1\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&0\\1&-1\end{bmatrix}}$
$i=3$: standard ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&1\\0&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&1\\0&0\end{bmatrix}}$; Strassen ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&1\\0&-1\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&1\\0&1\end{bmatrix}}$
$i=4$: standard ${\begin{bmatrix}0&1\\0&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&1\\0&0\end{bmatrix}}$; Strassen ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}-1&0\\1&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}1&0\\1&0\end{bmatrix}}$
$i=5$: standard ${\begin{bmatrix}0&0\\1&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&0\\1&0\end{bmatrix}}$; Strassen ${\begin{bmatrix}1&1\\0&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}-1&1\\0&0\end{bmatrix}}$
$i=6$: standard ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&0\\1&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&0\\1&0\end{bmatrix}}$; Strassen ${\begin{bmatrix}-1&0\\1&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}1&1\\0&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}$
$i=7$: standard ${\begin{bmatrix}0&0\\1&0\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&1\\0&0\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}$; Strassen ${\begin{bmatrix}0&1\\0&-1\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&0\\1&1\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}1&0\\0&0\end{bmatrix}}$
$i=8$: standard ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}:\mathbf {a} $, ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}:\mathbf {b} $, ${\begin{bmatrix}0&0\\0&1\end{bmatrix}}$; Strassen: no eighth multiplication is needed.
In total, $\mathbf {a} \mathbf {b} =\sum _{i=1}^{8}f_{i}(\mathbf {a} )g_{i}(\mathbf {b} )w_{i}$ for the standard algorithm and $\mathbf {a} \mathbf {b} =\sum _{i=1}^{7}f_{i}(\mathbf {a} )g_{i}(\mathbf {b} )w_{i}$ for the Strassen algorithm.
It can be shown that the total number of elementary multiplications $L$ required for matrix multiplication is tightly asymptotically bound to the rank $R$, i.e. $L=\Theta (R)$, or more specifically, since the constants are known, $R/2\leq L\leq R$. One useful property of the rank is that it is submultiplicative for tensor products, and this enables one to show that $2^{n}\times 2^{n}\times 2^{n}$ matrix multiplication can be accomplished with no more than $7^{n}$ elementary multiplications for any $n$. (This $n$-fold tensor product of the $2\times 2\times 2$ matrix multiplication map with itself, an $n$-th tensor power, is realized by the recursive step in the algorithm shown.) Cache behavior Strassen's algorithm is cache oblivious. Analysis of its cache behavior has shown it to incur $\Theta \left(1+{\frac {n^{2}}{b}}+{\frac {n^{\log _{2}7}}{b{\sqrt {M}}}}\right)$ cache misses during its execution, assuming an idealized cache of size $M$ (i.e. 
with $M/b$ lines of length $b$).[8]: 13  Implementation considerations The description above states that the matrices are square, and the size is a power of two, and that padding should be used if needed. This restriction allows the matrices to be split in half, recursively, until the limit of scalar multiplication is reached. The restriction simplifies the explanation and the analysis of complexity, but is not actually necessary;[9] and in fact, padding the matrix as described will increase the computation time and can easily eliminate the fairly narrow time savings obtained by using the method in the first place. A good implementation will observe the following: • It is not necessary or desirable to use the Strassen algorithm down to the limit of scalars. Compared to conventional matrix multiplication, the algorithm adds a considerable $O(n^{2})$ workload in addition/subtractions; so below a certain size, it will be better to use conventional multiplication. Thus, for instance, a $1600\times 1600$ matrix does not need to be padded to $2048\times 2048$, since it could be subdivided down to $25\times 25$ matrices and conventional multiplication can then be used at that level. • The method can indeed be applied to square matrices of any dimension.[3] If the dimension is even, they are split in half as described. If the dimension is odd, zero padding by one row and one column is applied first. Such padding can be applied on-the-fly and lazily, and the extra rows and columns discarded as the result is formed. For instance, suppose the matrices are $199\times 199$. They can be split so that the upper-left portion is $100\times 100$ and the lower-right is $99\times 99$. Wherever the operations require it, dimensions of $99$ are zero padded to $100$ first. Note, for instance, that the product $M_{2}$ is only used in the lower row of the output, so is only required to be $99$ rows high; and thus the left factor $A_{21}+A_{22}$ used to generate it need only be $99$ rows high; accordingly, there is no need to pad that sum to $100$ rows; it is only necessary to pad $A_{22}$ to $100$ columns to match $A_{21}$. Furthermore, there is no need for the matrices to be square. Non-square matrices can be split in half using the same methods, yielding smaller non-square matrices. If the matrices are sufficiently non-square it will be worthwhile reducing the initial operation to more square products, using simple methods which are essentially $O(n^{2})$, for instance: • A product of size $[2N\times N]\ast [N\times 10N]$ can be done as 20 separate $[N\times N]\ast [N\times N]$ operations, arranged to form the result; • A product of size $[N\times 10N]\ast [10N\times N]$ can be done as 10 separate $[N\times N]\ast [N\times N]$ operations, summed to form the result. These techniques will make the implementation more complicated, compared to simply padding to a power-of-two square; however, it is a reasonable assumption that anyone undertaking an implementation of Strassen, rather than conventional multiplication, will place a higher priority on computational efficiency than on simplicity of the implementation. 
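Relatedly, when implementing the Winograd variant given earlier, its u, v, w formulas are easy to mistranscribe, so a symbolic sanity check is cheap insurance. The sketch below verifies the 2×2 identity with scalar symbols standing in for blocks (the form is arranged so that the same computation remains valid for non-commuting blocks):

```python
import sympy as sp

# Scalar symbols stand in for the blocks a, b, c, d and A, B, C, D.
a, b, c, d, A, B, C, D = sp.symbols('a b c d A B C D')
u = (c - a) * (C - D)
v = (c + d) * (C - A)
w = a * A + (c + d - a) * (A + D - C)

lhs = sp.Matrix([[a, b], [c, d]]) * sp.Matrix([[A, C], [B, D]])
rhs = sp.Matrix([[a * A + b * B, w + v + (a + b - c - d) * D],
                 [w + u + d * (B + C - A - D), w + u + v]])

# The 7 multiplications are aA, bB, u, v, the product inside w,
# (a+b-c-d)D and d(B+C-A-D); everything else is addition/subtraction.
assert (lhs - rhs).expand() == sp.zeros(2, 2)
print("Winograd form verified")
```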
In practice, Strassen's algorithm can be implemented to attain better performance than conventional multiplication even for matrices as small as $500\times 500$, for matrices that are not at all square, and without requiring workspace beyond buffers that are already needed for a high-performance conventional multiplication.[4] See also • Computational complexity of mathematical operations • Gauss–Jordan elimination • Coppersmith–Winograd algorithm • Z-order matrix representation • Karatsuba algorithm, for multiplying n-digit integers in $O(n^{\log _{2}3})$ instead of in $O(n^{2})$ time • A similar complex multiplication algorithm multiplies two complex numbers using 3 real multiplications instead of 4 • Toom-Cook algorithm, a faster generalization of the Karatsuba algorithm that permits recursive divide-and-conquer decomposition into more than 2 blocks at a time References 1. Strassen, Volker (1969). "Gaussian Elimination is not Optimal". Numer. Math. 13 (4): 354–356. doi:10.1007/BF02165411. S2CID 121656251. 2. Skiena, Steven S. (1998), "§8.2.3 Matrix multiplication", The Algorithm Design Manual, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94860-7. 3. D'Alberto, Paolo; Nicolau, Alexandru (2005). Using Recursion to Boost ATLAS's Performance (PDF). Sixth Int'l Symp. on High Performance Computing. 4. Huang, Jianyu; Smith, Tyler M.; Henry, Greg M.; van de Geijn, Robert A. (13 Nov 2016). Strassen's Algorithm Reloaded. SC16: The International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE Press. pp. 690–701. doi:10.1109/SC.2016.58. ISBN 9781467388153. Retrieved 1 Nov 2022. 5. Knuth (1997), p. 500. 6. Webb, Miller (1975). "Computational complexity and numerical stability". SIAM J. Comput. 4 (2): 97–107. doi:10.1137/0204009. 7. Burgisser; Clausen; Shokrollahi (1997). Algebraic Complexity Theory. Springer-Verlag. ISBN 3-540-60582-7. 8. Frigo, M.; Leiserson, C. E.; Prokop, H.; Ramachandran, S. (1999). Cache-oblivious algorithms (PDF). Proc. IEEE Symp. on Foundations of Computer Science (FOCS). pp. 285–297. 9. Higham, Nicholas J. (1990). "Exploiting fast matrix multiplication within the level 3 BLAS" (PDF). ACM Transactions on Mathematical Software. 16 (4): 352–368. doi:10.1145/98267.98290. hdl:1813/6900. S2CID 5715053. • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 28: Section 28.2: Strassen's algorithm for matrix multiplication, pp. 735–741. • Knuth, Donald (1997). The Art of Computer Programming, Seminumerical Algorithms. Vol. II (3rd ed.). Addison-Wesley. ISBN 0-201-89684-2. External links • Weisstein, Eric W. "Strassen's Formulas". MathWorld. (also includes formulas for fast matrix inversion) • Tyler J. Earnest, Strassen's Algorithm on the Cell Broadband Engine Numerical linear algebra Key concepts • Floating point • Numerical stability Problems • System of linear equations • Matrix decompositions • Matrix multiplication (algorithms) • Matrix splitting • Sparse problems Hardware • CPU cache • TLB • Cache-oblivious algorithm • SIMD • Multiprocessing Software • MATLAB • Basic Linear Algebra Subprograms (BLAS) • LAPACK • Specialized libraries • General purpose software
Wikipedia
Strassmann's theorem In mathematics, Strassmann's theorem is a result in field theory. It states that, for suitable fields, suitable formal power series with coefficients in the valuation ring of the field have only finitely many zeroes. History It was introduced by Reinhold Straßmann (1928). Statement of the theorem Let K be a field with a non-Archimedean absolute value | · | and let R be the valuation ring of K. Let $f(x)$ be a formal power series with coefficients in R, other than the zero series, whose coefficients $a_{n}$ converge to zero with respect to | · |. Then $f(x)$ has only finitely many zeroes in R. More precisely, the number of zeros is at most N, where N is the largest index with $|a_{N}|=\max _{n}|a_{n}|$. As a corollary, there is no analogue of Euler's identity, $e^{2\pi i}=1$, in $\mathbb {C} _{p}$, the field of p-adic complex numbers. See also • p-adic exponential function References • Murty, M. Ram (2002). Introduction to P-Adic Analytic Number Theory. American Mathematical Society. p. 35. ISBN 978-0-8218-3262-2. • Straßmann, Reinhold (1928), "Über den Wertevorrat von Potenzreihen im Gebiet der p-adischen Zahlen.", Journal für die reine und angewandte Mathematik (in German), 1928 (159): 13–28, doi:10.1515/crll.1928.159.13, ISSN 0075-4102, JFM 54.0162.06, S2CID 117410014 External links • Weisstein, Eric W. "Strassman's Theorem". MathWorld.
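A small illustration of the bound N (my own example over $\mathbb {Z} _{p}$, not from the article):

```latex
% f(x) = sum_{n>=0} p^n x^n has |a_n| = p^{-n} -> 0; the maximum |a_n| = 1
% is attained only at n = 0, so N = 0 and f has no zeros in Z_p. Indeed
% f(x) = 1/(1 - p x) on Z_p, which never vanishes. By contrast, the
% polynomial f(x) = x^2 - x has |a_1| = |a_2| = 1, so N = 2, matching its
% two zeros x = 0 and x = 1.
\[
f(x)=\sum_{n\ge 0}p^{n}x^{n},\qquad |a_{n}|=p^{-n}\to 0,\qquad N=0
\;\Longrightarrow\; f\ \text{has no zeros in }\mathbb{Z}_{p}.
\]
```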
Wikipedia
Strategy (game theory) In game theory, a player's strategy is any of the options which they choose in a setting where the outcome depends not only on their own actions but on the actions of others.[1] The discipline mainly concerns the action of a player in a game affecting the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy or Battleship.[2] A player's strategy will determine the action which the player will take at any stage of the game. In studying game theory, economists enlist a more rational lens in analyzing decisions, rather than the psychological or sociological perspectives taken when analyzing relationships between decisions of two or more parties in other disciplines. The strategy concept is sometimes (wrongly) confused with that of a move. A move is an action taken by a player at some point during the play of a game (e.g., in chess, moving White's bishop from a2 to b3). A strategy, on the other hand, is a complete algorithm for playing the game, telling a player what to do for every possible situation throughout the game. It is helpful to think about a "strategy" as a list of directions, and a "move" as a single turn on the list of directions itself. This strategy is based on the payoff or outcome of each action. The goal of each agent is to consider their payoff based on a competitor's action. For example, competitor A can assume competitor B enters the market. From there, Competitor A compares the payoffs they receive by entering and not entering. The next step is to assume Competitor B doesn't enter and then consider which payoff is better based on whether Competitor A chooses to enter or not enter. This technique can identify dominant strategies, where a player can identify an action that maximizes their payoff no matter what the competitor does. This also helps players to identify Nash equilibria, which are discussed in more detail below. A strategy profile (sometimes called a strategy combination) is a set of strategies for all players which fully specifies all actions in a game. A strategy profile must include one and only one strategy for every player. Strategy set A player's strategy set defines what strategies are available for them to play. A strategy profile, by contrast, selects one strategy from each player's strategy set. A player has a finite strategy set if they have a finite number of discrete strategies available to them. For instance, a game of rock paper scissors comprises a single move by each player (and each player's move is made without knowledge of the other's, not as a response), so each player has the finite strategy set {rock, paper, scissors}. A strategy set is infinite otherwise. For instance, the cake cutting game has a bounded continuum of strategies in the strategy set {Cut anywhere between zero percent and 100 percent of the cake}. In a dynamic game, a game that is played over a series of stages, the strategy set consists of the possible rules a player could give to a robot or agent on how to play the game. For instance, in the ultimatum game, the strategy set for the second player would consist of every possible rule for which offers to accept and which to reject. In a Bayesian game, a game in which players have incomplete information about one another, the strategy set is similar to that in a dynamic game. It consists of rules for what action to take for any possible private information. 
Choosing a strategy set In applied game theory, the definition of the strategy sets is an important part of the art of making a game simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem, that is, the friction between two or more players, to limit the strategy spaces and ease the solution. For instance, strictly speaking in the Ultimatum game a player can have strategies such as: Reject offers of ($1, $3, $5, ..., $19), accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very large strategy space and a somewhat difficult problem. A game theorist might instead believe they can limit the strategy set to: {Reject any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}. Pure and mixed strategies A pure strategy provides a complete definition of how a player will play a game. A pure strategy can be thought of as a single concrete plan, contingent on the observations the player makes during the course of play. In particular, it determines the move a player will make for any situation they could face. A player's strategy set is the set of pure strategies available to that player. A mixed strategy is an assignment of a probability to each pure strategy. A mixed strategy is often enlisted when the game does not allow for a rational choice of one particular pure strategy; it allows a player to randomly select a pure strategy. (See the following section for an illustration.) Since probabilities are continuous, there are infinitely many mixed strategies available to a player. Since probabilities are being assigned to strategies, the payoff of a given scenario must be referred to as the "expected payoff". Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure strategy is selected with probability 1 and every other strategy with probability 0. A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy. (Totally mixed strategies are important for equilibrium refinement such as trembling hand perfect equilibrium.) Mixed strategy Illustration In a soccer penalty kick, the kicker must choose whether to kick to the right or left side of the goal, and simultaneously the goalie must decide which way to block it. Also, the kicker has a direction they are best at shooting, which is left if they are right-footed. The matrix for the soccer game illustrates this situation, a simplified form of the game studied by Chiappori, Levitt, and Groseclose (2002).[3] It assumes that if the goalie guesses correctly, the kick is blocked, which is set to the base payoff of 0 for both players. If the goalie guesses wrong, the kick is more likely to go in if it is to the left (payoffs of +2 for the kicker and -2 for the goalie) than if it is to the right (the lower payoff of +1 to kicker and -1 to goalie). Payoff for the Soccer Game (Kicker, Goalie):
Kick Left: 0, 0 against Lean Left; +2, -2 against Lean Right
Kick Right: +1, -1 against Lean Left; 0, 0 against Lean Right
This game has no pure-strategy equilibrium, because one player or the other would deviate from any profile of strategies; for example, (Left, Left) is not an equilibrium because the Kicker would deviate to Right and increase his payoff from 0 to 1. 
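The next paragraph derives the mixed-strategy equilibrium analytically; as a cross-check, the two indifference equations can also be solved mechanically. A sketch (payoff matrices transcribed from the table above; the closed-form expressions are the generic 2×2 indifference solutions):

```python
import numpy as np

# Payoffs from the table above: rows = (Kick Left, Kick Right),
# columns = (Lean Left, Lean Right).
K = np.array([[0.0, 2.0],   # kicker
              [1.0, 0.0]])
G = np.array([[0.0, -2.0],  # goalie
              [-1.0, 0.0]])

# Goalie mixes Lean Left with probability g so that the kicker is
# indifferent: g*K[0,0] + (1-g)*K[0,1] == g*K[1,0] + (1-g)*K[1,1].
g = (K[0, 1] - K[1, 1]) / (K[0, 1] - K[1, 1] + K[1, 0] - K[0, 0])
# Kicker mixes Kick Left with probability k so that the goalie is
# indifferent: k*G[0,0] + (1-k)*G[1,0] == k*G[0,1] + (1-k)*G[1,1].
k = (G[1, 1] - G[1, 0]) / (G[1, 1] - G[1, 0] + G[0, 0] - G[0, 1])

print(f"goalie leans left with probability {g:.4f}")  # 2/3
print(f"kicker kicks left with probability {k:.4f}")  # 1/3
```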
The kicker's mixed-strategy equilibrium is found from the fact that they will deviate from randomizing unless their payoffs from Left Kick and Right Kick are exactly equal. If the goalie leans left with probability g, the kicker's expected payoff from Kick Left is g(0) + (1-g)(2), and from Kick Right is g(1) + (1-g)(0). Equating these yields g= 2/3. Similarly, the goalie is willing to randomize only if the kicker chooses mixed strategy probability k such that Lean Left's payoff of k(0) + (1-k)(-1) equals Lean Right's payoff of k(-2) + (1-k)(0), so k = 1/3. Thus, the mixed-strategy equilibrium is (Prob(Kick Left) = 1/3, Prob(Lean Left) = 2/3). Note that in equilibrium, the kicker kicks to their best side only 1/3 of the time. That is because the goalie is guarding that side more. Also note that in equilibrium, the kicker is indifferent which way they kick, but for it to be an equilibrium they must choose exactly 1/3 probability. Chiappori, Levitt, and Groseclose try to measure how important it is for the kicker to kick to their favored side, add center kicks, etc., and look at how professional players actually behave. They find that they do randomize, and that kickers kick to their favored side 45% of the time and goalies lean to that side 57% of the time. Their article is well-known as an example of how people in real life use mixed strategies. Significance In his famous paper, John Forbes Nash proved that there is an equilibrium for every finite game. One can divide Nash equilibria into two types. Pure strategy Nash equilibria are Nash equilibria where all players are playing pure strategies. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure strategies, see Matching pennies. However, many games do have pure strategy Nash equilibria (e.g. the Coordination game, the Prisoner's dilemma, the Stag hunt). Further, games can have both pure strategy and mixed strategy equilibria. An easy example is the pure coordination game, where in addition to the pure strategies (A,A) and (B,B) a mixed equilibrium exists in which both players play either strategy with probability 1/2. Interpretations of mixed strategies During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively problematic", since they are weak Nash equilibria, and a player is indifferent about whether to follow their equilibrium strategy probability or deviate to some other probability.[4] [5] Game theorist Ariel Rubinstein describes alternative ways of understanding the concept. The first, due to Harsanyi (1973),[6] is called purification, and supposes that the mixed strategies interpretation merely reflects our lack of knowledge of the players' information and decision-making process. Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous factors.[5] A second interpretation imagines the game players standing for a large population of agents. Each of the agents chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy. The mixed strategy hence represents the distribution of pure strategies chosen by each population. However, this does not provide any justification for the case when players are individual agents. 
Later, Aumann and Brandenburger (1995)[7] re-interpreted Nash equilibrium as an equilibrium in beliefs, rather than actions. For instance, in rock paper scissors an equilibrium in beliefs would have each player believing the other was equally likely to play each strategy. This interpretation weakens the descriptive power of Nash equilibrium, however, since it is possible in such an equilibrium for each player to actually play a pure strategy of Rock in each play of the game, even though over time the probabilities are those of the mixed strategy. Behavior strategy While a mixed strategy assigns a probability distribution over pure strategies, a behavior strategy assigns at each information set a probability distribution over the set of possible actions. While the two concepts are very closely related in the context of normal form games, they have very different implications for extensive form games. Roughly, a mixed strategy randomly chooses a deterministic path through the game tree, while a behavior strategy can be seen as a stochastic path. The relationship between mixed and behavior strategies is the subject of Kuhn's theorem, a behavioral outlook on traditional game-theoretic hypotheses. The result establishes that in any finite extensive-form game with perfect recall, for any player and any mixed strategy, there exists a behavior strategy that, against all profiles of strategies (of other players), induces the same distribution over terminal nodes as the mixed strategy does. The converse is also true. A famous example of why perfect recall is required for the equivalence is given by Piccione and Rubinstein (1997) with their Absent-Minded Driver game. Outcome equivalence Outcome equivalence combines the mixed and behavior strategy of Player i in relation to the pure strategy of Player i's opponent. Outcome equivalence is defined as the situation in which, for any mixed and behavior strategy that Player i takes, in response to any pure strategy that Player i's opponent plays, the outcome distribution of the mixed and behavior strategy must be equal. This equivalence can be described by the following formula: $Q^{(U_{i},S_{-i})}(z)=Q^{(\beta _{i},S_{-i})}(z)$ for every terminal node $z$, where $U_{i}$ denotes Player i's mixed strategy, $\beta _{i}$ denotes Player i's behavior strategy, and $S_{-i}$ is the opponent's strategy.[8] Strategy with perfect recall Perfect recall is defined as the ability of every player in the game to remember and recall all past actions within the game. Perfect recall is required for equivalence because, in finite games with imperfect recall, there will exist mixed strategies of Player i for which there is no equivalent behavior strategy. This is fully described in the Absent-Minded Driver game formulated by Piccione and Rubinstein. In short, this game is based on the decision-making of a driver with imperfect recall, who needs to take the second exit off the highway to reach home but does not remember which intersection they are at when they reach it. Without perfect information (i.e. with imperfect information), players make a choice at each decision node without knowledge of the decisions that have preceded it. Therefore, a player's mixed strategy can produce outcomes that their behavior strategy cannot, and vice versa. This is demonstrated in the Absent-minded Driver game. 
With perfect recall and information, the driver has a single pure strategy, which is [continue, exit], as the driver is aware of which intersection (or decision node) they are at when they arrive at it. On the other hand, looking at the planning stage only, the driver maximizes the expected payoff by continuing at each intersection with the same probability p, and the maximum is attained at p = 2/3. This simple one-player game demonstrates the importance of perfect recall for outcome equivalence, and its impact on normal and extensive form games.[9] See also • Nash equilibrium • Haven (graph theory) • Evolutionarily stable strategy References 1. Ben Polak Game Theory: Lecture 1 Transcript ECON 159, 5 September 2007, Open Yale Courses. 2. Aumann, R. (22 March 2017). Game Theory. In: Palgrave Macmillan. London: Palgrave Macmillan. ISBN 978-1-349-95121-5. 3. Chiappori, P. -A.; Levitt, S.; Groseclose, T. (2002). "Testing Mixed-Strategy Equilibria when Players Are Heterogeneous: The Case of Penalty Kicks in Soccer" (PDF). American Economic Review. 92 (4): 1138. CiteSeerX 10.1.1.178.1646. doi:10.1257/00028280260344678. 4. Aumann, R. (1985). "What is Game Theory Trying to accomplish?" (PDF). In Arrow, K.; Honkapohja, S. (eds.). Frontiers of Economics. Oxford: Basil Blackwell. pp. 909–924. 5. Rubinstein, A. (1991). "Comments on the interpretation of Game Theory". Econometrica. 59 (4): 909–924. doi:10.2307/2938166. JSTOR 2938166. 6. Harsanyi, John (1973). "Games with randomly disturbed payoffs: a new rationale for mixed-strategy equilibrium points". Int. J. Game Theory. 2: 1–23. doi:10.1007/BF01737554. S2CID 154484458. 7. Aumann, Robert; Brandenburger, Adam (1995). "Epistemic Conditions for Nash Equilibrium". Econometrica. 63 (5): 1161–1180. CiteSeerX 10.1.1.122.5816. doi:10.2307/2171725. JSTOR 2171725. 8. Shimoji, Makoto (2012-05-01). "Outcome-equivalence of self-confirming equilibrium and Nash equilibrium". Games and Economic Behavior. 75 (1): 441–447. doi:10.1016/j.geb.2011.09.010. ISSN 0899-8256. 9. Kak, Subhash (2017). "The Absent-Minded Driver Problem Redux". arXiv:1702.05778 [cs.AI]. 
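The planning-stage arithmetic behind the p = 2/3 claim above is short. The payoff numbers used here (0 for exiting at the first intersection, 4 for exiting at the second and reaching home, 1 for continuing past both) are from Piccione and Rubinstein's original example, not from this article:

```latex
% Expected payoff of continuing with probability p at each intersection,
% with assumed payoffs 0 (exit first), 4 (exit second = home), 1 (continue past both):
\[
E(p)=(1-p)\cdot 0+p(1-p)\cdot 4+p^{2}\cdot 1=4p-3p^{2},\qquad
E'(p)=4-6p=0\;\Longrightarrow\;p=\tfrac{2}{3},\quad
E\!\left(\tfrac{2}{3}\right)=\tfrac{4}{3}.
\]
```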
Topics in game theory Definitions • Congestion game • Cooperative game • Determinacy • Escalation of commitment • Extensive-form game • First-player and second-player win • Game complexity • Graphical game • Hierarchy of beliefs • Information set • Normal-form game • Preference • Sequential game • Simultaneous game • Simultaneous action selection • Solved game • Succinct game Equilibrium concepts • Bayesian Nash equilibrium • Berge equilibrium • Core • Correlated equilibrium • Epsilon-equilibrium • Evolutionarily stable strategy • Gibbs equilibrium • Mertens-stable equilibrium • Markov perfect equilibrium • Nash equilibrium • Pareto efficiency • Perfect Bayesian equilibrium • Proper equilibrium • Quantal response equilibrium • Quasi-perfect equilibrium • Risk dominance • Satisfaction equilibrium • Self-confirming equilibrium • Sequential equilibrium • Shapley value • Strong Nash equilibrium • Subgame perfection • Trembling hand Strategies • Backward induction • Bid shading • Collusion • Forward induction • Grim trigger • Markov strategy • Dominant strategies • Pure strategy • Mixed strategy • Strategy-stealing argument • Tit for tat Classes of games • Bargaining problem • Cheap talk • Global game • Intransitive game • Mean-field game • Mechanism design • n-player game • Perfect information • Large Poisson game • Potential game • Repeated game • Screening game • Signaling game • Stackelberg competition • Strictly determined game • Stochastic game • Symmetric game • Zero-sum game Games • Go • Chess • Infinite chess • Checkers • Tic-tac-toe • Prisoner's dilemma • Gift-exchange game • Optional prisoner's dilemma • Traveler's dilemma • Coordination game • Chicken • Centipede game • Lewis signaling game • Volunteer's dilemma • Dollar auction • Battle of the sexes • Stag hunt • Matching pennies • Ultimatum game • Rock paper scissors • Pirate game • Dictator game • Public goods game • Blotto game • War of attrition • El Farol Bar problem • Fair division • Fair cake-cutting • Cournot game • Deadlock • Diner's dilemma • Guess 2/3 of the average • Kuhn poker • Nash bargaining game • Induction puzzles • Trust game • Princess and monster game • Rendezvous problem Theorems • Arrow's impossibility theorem • Aumann's agreement theorem • Folk theorem • Minimax theorem • Nash's theorem • Negamax theorem • Purification theorem • Revelation principle • Sprague–Grundy theorem • Zermelo's theorem Key figures • Albert W. Tucker • Amos Tversky • Antoine Augustin Cournot • Ariel Rubinstein • Claude Shannon • Daniel Kahneman • David K. Levine • David M. Kreps • Donald B. Gillies • Drew Fudenberg • Eric Maskin • Harold W. Kuhn • Herbert Simon • Hervé Moulin • John Conway • Jean Tirole • Jean-François Mertens • Jennifer Tour Chayes • John Harsanyi • John Maynard Smith • John Nash • John von Neumann • Kenneth Arrow • Kenneth Binmore • Leonid Hurwicz • Lloyd Shapley • Melvin Dresher • Merrill M. Flood • Olga Bondareva • Oskar Morgenstern • Paul Milgrom • Peyton Young • Reinhard Selten • Robert Axelrod • Robert Aumann • Robert B. 
Wilson • Roger Myerson • Samuel Bowles • Suzanne Scotchmer • Thomas Schelling • William Vickrey Miscellaneous • All-pay auction • Alpha–beta pruning • Bertrand paradox • Bounded rationality • Combinatorial game theory • Confrontation analysis • Coopetition • Evolutionary game theory • First-move advantage in chess • Game Description Language • Game mechanics • Glossary of game theory • List of game theorists • List of games in game theory • No-win situation • Solving chess • Topological game • Tragedy of the commons • Tyranny of small decisions
Wikipedia
Stratified Morse theory In mathematics, stratified Morse theory is an analogue to Morse theory for general stratified spaces, originally developed by Mark Goresky and Robert MacPherson. The main point of the theory is to consider functions $f:M\to \mathbb {R} $ and consider how the stratified space $f^{-1}(-\infty ,c]$ changes as the real number $c\in \mathbb {R} $ changes. Morse theory of stratified spaces has uses everywhere from pure mathematics topics such as braid groups and representations to robot motion planning and potential theory. A popular application in pure mathematics is Morse theory on manifolds with boundary, and manifolds with corners. See also • Digital Morse theory • Discrete Morse theory • Level-set method References • Goresky, M.; MacPherson, R. (2012) [1988]. Stratified Morse theory. Springer. ISBN 978-3-642-71714-7. DJVU file on Goresky's page • Handron, D. (2002). "Generalized billiard paths and Morse theory on manifolds with corners" (PDF). Topology and Its Applications. 126 (1): 83–118. doi:10.1016/S0166-8641(02)00036-6. • Vakhrameev, S.A. (2000). "Morse lemmas for smooth functions on manifolds with corners". J Math Sci. 100 (4): 2428–45. doi:10.1007/s10958-000-0003-7. S2CID 116031490.
Wikipedia
Stratification (mathematics) Stratification has several usages in mathematics. In mathematical logic In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form $Q_{1}\wedge \dots \wedge Q_{n}\wedge \neg Q_{n+1}\wedge \dots \wedge \neg Q_{n+m}\rightarrow P$ is stratified if and only if there is a stratification assignment S that fulfills the following conditions: 1. If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short $S(P)\geq S(Q)$. 2. If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be greater than the stratification number of Q, in short $S(P)>S(Q)$. The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up. Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories. In a specific set theory In New Foundations (NF) and related set theories, a formula $\phi $ in the language of first-order logic with equality and membership is said to be stratified if and only if there is a function $\sigma $ which sends each variable appearing in $\phi $ (considered as an item of syntax) to a natural number (this works equally well if all integers are used) in such a way that any atomic formula $x\in y$ appearing in $\phi $ satisfies $\sigma (x)+1=\sigma (y)$ and any atomic formula $x=y$ appearing in $\phi $ satisfies $\sigma (x)=\sigma (y)$. It turns out that it is sufficient to require that these conditions be satisfied only when both variables in an atomic formula are bound in the set abstract $\{x\mid \phi \}$ under consideration. A set abstract satisfying this weaker condition is said to be weakly stratified. The stratification of New Foundations generalizes readily to languages with more predicates and with term constructions. Each primitive predicate needs to have specified required displacements between values of $\sigma $ at its (bound) arguments in a (weakly) stratified formula. In a language with term constructions, terms themselves need to be assigned values under $\sigma $, with fixed displacements from the values of each of their (bound) arguments in a (weakly) stratified formula. Defined term constructions are neatly handled by (possibly merely implicitly) using the theory of descriptions: a term $(\iota x.\phi )$ (the x such that $\phi $) must be assigned the same value under $\sigma $ as the variable x. A formula is stratified if and only if it is possible to assign types to all variables appearing in the formula in such a way that it will make sense in a version TST of the theory of types described in the New Foundations article, and this is probably the best way to understand the stratification of New Foundations in practice. The notion of stratification can be extended to the lambda calculus; this is found in papers of Randall Holmes. 
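Returning to the logic-programming sense above: for a finite rule set, the two conditions can be checked mechanically. The sketch below computes a least stratification assignment by fixpoint iteration and returns None when a cycle through negation makes the clause set non-stratifiable; the (head, positive body, negative body) triple encoding is a hypothetical representation chosen for illustration.

```python
def stratify(rules):
    """Least stratification S(P) for rules (head, positive_body, negative_body),
    or None if a cycle through negation makes the program non-stratifiable."""
    preds = {p for head, pos, neg in rules for p in (head, *pos, *neg)}
    S = dict.fromkeys(preds, 0)
    n = len(preds)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            # Enforce S(P) >= S(Q) for positive Q and S(P) > S(Q) for negated Q.
            need = max([S[head]] + [S[q] for q in pos] + [S[q] + 1 for q in neg])
            if need > S[head]:
                if need > n:  # strata can never exceed the number of predicates
                    return None
                S[head] = need
                changed = True
    return S

# p <- q, not r and r <- s: stratifiable, with p one stratum above r.
print(stratify([("p", ("q",), ("r",)), ("r", ("s",), ())]))
# p <- not p: a cycle through negation, hence not stratifiable.
print(stratify([("p", (), ("p",))]))
```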
A motivation for the use of stratification is to address Russell's paradox, the antinomy considered to have undermined Frege's central work Grundgesetze der Arithmetik (1902). Quine, Willard Van Orman (1963) [1961]. From a Logical Point of View (2nd ed.). New York: Harper & Row. p. 90. LCCN 61-15277. In topology In singularity theory, there is a different meaning, of a decomposition of a topological space X into disjoint subsets each of which is a topological manifold (so that in particular a stratification defines a partition of the topological space). This is not a useful notion when unrestricted; but when the various strata are defined by some recognisable set of conditions (for example being locally closed), and fit together manageably, this idea is often applied in geometry. Hassler Whitney and René Thom first defined formal conditions for stratification. See Whitney stratification and topologically stratified space. In statistics See stratified sampling. Mathematical logic General • Axiom • list • Cardinality • First-order logic • Formal proof • Formal semantics • Foundations of mathematics • Information theory • Lemma • Logical consequence • Model • Theorem • Theory • Type theory Theorems (list)  & Paradoxes • Gödel's completeness and incompleteness theorems • Tarski's undefinability • Banach–Tarski paradox • Cantor's theorem, paradox and diagonal argument • Compactness • Halting problem • Lindström's • Löwenheim–Skolem • Russell's paradox Logics Traditional • Classical logic • Logical truth • Tautology • Proposition • Inference • Logical equivalence • Consistency • Equiconsistency • Argument • Soundness • Validity • Syllogism • Square of opposition • Venn diagram Propositional • Boolean algebra • Boolean functions • Logical connectives • Propositional calculus • Propositional formula • Truth tables • Many-valued logic • 3 • Finite • ∞ Predicate • First-order • list • Second-order • Monadic • Higher-order • Free • Quantifiers • Predicate • Monadic predicate calculus Set theory • Set • Hereditary • Class • (Ur-)Element • Ordinal number • Extensionality • Forcing • Relation • Equivalence • Partition • Set operations: • Intersection • Union • Complement • Cartesian product • Power set • Identities Types of Sets • Countable • Uncountable • Empty • Inhabited • Singleton • Finite • Infinite • Transitive • Ultrafilter • Recursive • Fuzzy • Universal • Universe • Constructible • Grothendieck • Von Neumann Maps & Cardinality • Function/Map • Domain • Codomain • Image • In/Sur/Bi-jection • Schröder–Bernstein theorem • Isomorphism • Gödel numbering • Enumeration • Large cardinal • Inaccessible • Aleph number • Operation • Binary Set theories • Zermelo–Fraenkel • Axiom of choice • Continuum hypothesis • General • Kripke–Platek • Morse–Kelley • Naive • New Foundations • Tarski–Grothendieck • Von Neumann–Bernays–Gödel • Ackermann • Constructive Formal systems (list), Language & Syntax • Alphabet • Arity • Automata • Axiom schema • Expression • Ground • Extension • by definition • Conservative • Relation • Formation rule • Grammar • Formula • Atomic • Closed • Ground • Open • Free/bound variable • Language • Metalanguage • Logical connective • ¬ • ∨ • ∧ • → • ↔ • = • Predicate • Functional • Variable • Propositional variable • Proof • Quantifier • ∃ • ! 
Stratifold In differential topology, a branch of mathematics, a stratifold is a generalization of a differentiable manifold where certain kinds of singularities are allowed. More specifically, a stratifold is stratified into differentiable manifolds of (possibly) different dimensions. Stratifolds can be used to construct new homology theories. For example, they provide a new geometric model for ordinary homology. The concept of stratifolds was invented by Matthias Kreck. The basic idea is similar to that of a topologically stratified space, but adapted to differential topology. Definitions Before we come to stratifolds, we define a preliminary notion, which captures the minimal notion of a smooth structure on a space: A differential space (in the sense of Sikorski) is a pair $(X,C),$ where X is a topological space and C is a subalgebra of the continuous functions $X\to \mathbb {R} $ such that a function is in C if it is locally in C and $g\circ \left(f_{1},\ldots ,f_{n}\right):X\to \mathbb {R} $ is in C for $g:\mathbb {R} ^{n}\to \mathbb {R} $ smooth and $f_{i}\in C.$ A simple example takes for X a smooth manifold and for C just the smooth functions. For a general differential space $(X,C)$ and a point x in X we can define, as in the case of manifolds, a tangent space $T_{x}X$ as the vector space of all derivations of function germs at x. Define strata $X_{i}=\{x\in X:T_{x}X$ has dimension i$\}.$ For an n-dimensional manifold M we have that $M_{n}=M$ and all other strata are empty. We are now ready for the definition of a stratifold, where more than one stratum may be non-empty: A k-dimensional stratifold is a differential space $(S,C),$ where S is a locally compact Hausdorff space with countable base of topology. All skeleta should be closed. In addition we assume: 1. The $\left(S_{i},C|_{S_{i}}\right)$ are i-dimensional smooth manifolds. 2. For all x in S, restriction defines an isomorphism of stalks $C_{x}\to C^{\infty }(S_{i})_{x}.$ 3. All tangent spaces have dimension ≤ k. 4. For each x in S and every neighbourhood U of x, there exists a function $\rho :U\to \mathbb {R} $ with $\rho (x)\neq 0$ and ${\text{supp}}(\rho )\subset U$ (a bump function). An n-dimensional stratifold is called oriented if its (n − 1)-stratum is empty and its top stratum is oriented. One can also define stratifolds with boundary, the so-called c-stratifolds. One defines them as a pair $(T,\partial T)$ of topological spaces such that $T-\partial T$ is an n-dimensional stratifold and $\partial T$ is an (n − 1)-dimensional stratifold, together with an equivalence class of collars. An important subclass of stratifolds is that of the regular stratifolds, which can be roughly characterized as looking locally, around a point in the i-stratum, like the product of the i-stratum with an (n − i)-dimensional stratifold. This condition is fulfilled by most stratifolds one usually encounters. Examples There are plenty of examples of stratifolds. The first example to consider is the open cone over a manifold M. We define a continuous function from the cone to the reals to be in C if and only if it is smooth on $M\times (0,1)$ and it is locally constant around the cone point. The last condition is automatic by point 2 in the definition of a stratifold. We can substitute M by a stratifold S in this construction. The cone is oriented if and only if S is oriented and not zero-dimensional. If we consider the (closed) cone with bottom, we get a stratifold with boundary S.
Other examples of stratifolds are one-point compactifications and suspensions of manifolds, (real) algebraic varieties with only isolated singularities and (finite) simplicial complexes. Bordism theories In this section, we will assume all stratifolds to be regular. We call two maps $S,S'\to X$ from two oriented compact k-dimensional stratifolds into a space X bordant if there exists an oriented (k + 1)-dimensional compact stratifold T with boundary S + (−S') such that the map to X extends to T. The set of equivalence classes of such maps $S\to X$ is denoted by $SH_{k}X.$ These sets actually have the structure of abelian groups, with disjoint union as addition. One can develop enough differential topology of stratifolds to show that these define a homology theory. Clearly, $SH_{k}({\text{point}})=0$ for $k>0$ since every oriented stratifold S is the boundary of its cone, which is oriented if $\dim(S)>0.$ One can show that $SH_{0}({\text{point}})\cong \mathbb {Z} .$ Hence, by the Eilenberg–Steenrod uniqueness theorem, $SH_{k}(X)\cong H_{k}(X)$ for every space X homotopy-equivalent to a CW-complex, where H denotes singular homology. For other spaces these two homology theories need not be isomorphic (an example is the one-point compactification of the surface of infinite genus). There is also a simple way to define equivariant homology with the help of stratifolds. Let G be a compact Lie group. We can then define a bordism theory of stratifolds mapping into a space X with a G-action just as above, except that we require all stratifolds to be equipped with an orientation-preserving free G-action and all maps to be G-equivariant. Denote by $SH_{k}^{G}(X)$ the bordism classes. One can prove $SH_{k}^{G}(X)\cong H_{k-\dim(G)}^{G}(X)$ for every X homotopy equivalent to a CW-complex. Connection to the theory of genera A genus is a ring homomorphism from a bordism ring into another ring. For example, the Euler characteristic defines a ring homomorphism $\Omega ^{O}({\text{point}})\to \mathbb {Z} /2[t]$ from the unoriented bordism ring and the signature defines a ring homomorphism $\Omega ^{SO}({\text{point}})\to \mathbb {Z} [t]$ from the oriented bordism ring. Here t has in the first case degree 1 and in the second case degree 4, since only manifolds in dimensions divisible by 4 can have non-zero signature. The left hand sides of these homomorphisms are homology theories evaluated at a point. With the help of stratifolds it is possible to construct homology theories such that the right hand sides are these homology theories evaluated at a point, the Euler homology and the Hirzebruch homology respectively. Umkehr maps Suppose one has a closed embedding $i:N\hookrightarrow M$ of manifolds with oriented normal bundle. Then one can define an umkehr map $H_{k}(M)\to H_{k+\dim(N)-\dim(M)}(N).$ One possibility is to use stratifolds: represent a class $x\in H_{k}(M)$ by a stratifold $f:S\to M.$ Then make f transverse to N. The intersection of S and N defines a new stratifold S' with a map to N, which represents a class in $H_{k+\dim(N)-\dim(M)}(N).$ It is possible to repeat this construction in the context of an embedding of Hilbert manifolds of finite codimension, which can be used in string topology. References
• M. Kreck, Differential Algebraic Topology: From Stratifolds to Exotic Spheres, AMS (2010), ISBN 0-8218-4898-4 • The stratifold page • Euler homology
Kalman filter In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory. This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich.[1][2][3][4] In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1961, when Kalman met with Stratonovich during a conference in Moscow.[5] Kalman filtering[6] has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and dynamically positioned ships.[7] Furthermore, Kalman filtering is much applied in time series analysis, for topics such as signal processing and econometrics. Kalman filtering is also one of the main topics of robotic motion planning and control[8][9] and can be used for trajectory optimization.[10] Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters[11] provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.[12] The algorithm works in a two-phase process. In the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required. Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear."[13] However, regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.[14] It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.[15] Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The basis is a hidden Markov model such that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.
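As a concrete illustration of the weighted-average update described above, here is a minimal scalar sketch in Python (all numbers are invented for illustration): the weight applied to the measurement is exactly the Kalman gain for this one-dimensional case.

x_pred, p_pred = 10.0, 4.0            # predicted state and its variance
z, r = 12.0, 1.0                      # measurement and measurement-noise variance

k = p_pred / (p_pred + r)             # gain: fraction of trust in the measurement
x_new = x_pred + k * (z - x_pred)     # weighted average of prediction and measurement
p_new = (1 - k) * p_pred              # fused estimate is more certain than either input

print(k, x_new, p_new)                # 0.8 11.6 0.8

Because the measurement here is the more certain source (r < p_pred), it receives the larger weight; with the roles reversed, the gain would drop toward zero.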
Kalman filtering has been used successfully in multi-sensor fusion,[16] and distributed sensor networks to develop distributed or consensus Kalman filtering.[17] History The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele[18][19] and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering. Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.[20] It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program, resulting in its incorporation in the Apollo navigation computer.[21]: 16  Kalman filtering was first described and partially developed in technical papers by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961). The Apollo computer used 2k of magnetic core RAM and 36k wire rope [...]. The CPU was built from ICs [...]. Clock speed was under 100 kHz [...]. The fact that the MIT engineers were able to pack such good software (one of the very first applications of the Kalman filter) into such a tiny computer is truly remarkable. — Interview with Jack Crenshaw, by Matthew Reed, TRS-80.org (2009) Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.[22] Overview of the calculation Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using only one measurement alone. As such, it is a common sensor fusion and data fusion algorithm. Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration.
This means that the Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state. The relative certainty of the measurements and of the current state estimate is an important consideration, and it is common to discuss the filter's response in terms of the Kalman filter's gain. The Kalman gain is the relative weight given to the measurements and the current state estimate, and can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain close to one will result in a more jumpy estimated trajectory, while a low gain close to zero will smooth out noise but decrease the responsiveness. When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances. Example application As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate. For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping. Technical description and context The Kalman filter is an efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements.
It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models,[23][24] and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory. In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state. In the Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations. A wide variety of Kalman filters exists by now: Kalman's original formulation, now termed the "simple" Kalman filter, the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters that were developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment. Underlying dynamic system model Kalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space as opposed to a discrete state space as for the hidden Markov model. There is a strong analogy between the equations of a Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999)[25] and Hamilton (1994), Chapter 13.[26] In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step k: • Fk, the state-transition model; • Hk, the observation model; • Qk, the covariance of the process noise; • Rk, the covariance of the observation noise; • and sometimes Bk, the control-input model as described below; if Bk is included, then there is also • uk, the control vector, representing the controlling input into the control-input model.
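To make the bookkeeping concrete, one way such a specification might look in Python with NumPy is sketched below; the constant-velocity model and all numerical values are invented for illustration and are not tied to any particular application.

import numpy as np

dt = 1.0                             # hypothetical sampling interval

# State (position, velocity); time-invariant matrices for simplicity.
F = np.array([[1.0, dt],
              [0.0, 1.0]])           # state-transition model F_k
H = np.array([[1.0, 0.0]])           # observation model H_k: position only
Q = 0.01 * np.eye(2)                 # process-noise covariance Q_k (made up)
R = np.array([[0.5]])                # observation-noise covariance R_k (made up)
B = np.array([[0.0], [0.0]])         # control-input model B_k (unused here)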
The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to $\mathbf {x} _{k}=\mathbf {F} _{k}\mathbf {x} _{k-1}+\mathbf {B} _{k}\mathbf {u} _{k}+\mathbf {w} _{k}$ where • Fk is the state transition model which is applied to the previous state xk−1; • Bk is the control-input model which is applied to the control vector uk; • wk is the process noise, which is assumed to be drawn from a zero mean multivariate normal distribution, ${\mathcal {N}}$, with covariance Qk: $\mathbf {w} _{k}\sim {\mathcal {N}}\left(0,\mathbf {Q} _{k}\right)$. At time k an observation (or measurement) zk of the true state xk is made according to $\mathbf {z} _{k}=\mathbf {H} _{k}\mathbf {x} _{k}+\mathbf {v} _{k}$ where • Hk is the observation model, which maps the true state space into the observed space and • vk is the observation noise, which is assumed to be zero mean Gaussian white noise with covariance Rk: $\mathbf {v} _{k}\sim {\mathcal {N}}\left(0,\mathbf {R} _{k}\right)$. The initial state and the noise vectors at each step {x0, w1, ..., wk, v1, ..., vk} are all assumed to be mutually independent. Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.[27][28] Details The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation ${\hat {\mathbf {x} }}_{n\mid m}$ represents the estimate of $\mathbf {x} $ at time n given observations up to and including time m ≤ n. The state of the filter is represented by two variables: • ${\hat {\mathbf {x} }}_{k\mid k}$, the a posteriori state estimate mean at time k given observations up to and including time k; • $\mathbf {P} _{k\mid k}$, the a posteriori estimate covariance matrix (a measure of the estimated accuracy of the state estimate). The algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current a priori prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the a posteriori state estimate.
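Written as code, the two phases are a pair of short functions. This is an illustrative NumPy sketch of the standard equations, which are written out in full in the next section; the function names are arbitrary.

import numpy as np

def predict(x, P, F, Q):
    """Predict phase: propagate the state estimate and its covariance."""
    x_pred = F @ x                    # add B @ u here if there is a control input
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z, H, R):
    """Update phase: fold one measurement into the a priori estimate."""
    y = z - H @ x_pred                              # innovation (pre-fit residual)
    S = H @ P_pred @ H.T + R                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # optimal Kalman gain
    x_new = x_pred + K @ y                          # a posteriori state estimate
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred  # valid for the optimal gain
    return x_new, P_new

If no observation arrives at some step, predict can simply be called again without an intervening update, matching the remark below about skipped updates.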
Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices Hk).[29][30] Predict Predicted (a priori) state estimate ${\hat {\mathbf {x} }}_{k\mid k-1}=\mathbf {F} _{k}{\hat {\mathbf {x} }}_{k-1\mid k-1}+\mathbf {B} _{k}\mathbf {u} _{k}$ Predicted (a priori) estimate covariance $\mathbf {P} _{k\mid k-1}=\mathbf {F} _{k}\mathbf {P} _{k-1\mid k-1}\mathbf {F} _{k}^{\textsf {T}}+\mathbf {Q} _{k}$ Update Innovation or measurement pre-fit residual ${\tilde {\mathbf {y} }}_{k}=\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}$ Innovation (or pre-fit residual) covariance $\mathbf {S} _{k}=\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k}$ Optimal Kalman gain $\mathbf {K} _{k}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}$ Updated (a posteriori) state estimate ${\hat {\mathbf {x} }}_{k\mid k}={\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}{\tilde {\mathbf {y} }}_{k}$ Updated (a posteriori) estimate covariance $\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}$ Measurement post-fit residual ${\tilde {\mathbf {y} }}_{k\mid k}=\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k}$ The formula for the updated (a posteriori) estimate covariance above is valid for the optimal Kk gain that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the derivations section, where the formula valid for any Kk is also shown. A more intuitive way to express the updated state estimate (${\hat {\mathbf {x} }}_{k\mid k}$) is: ${\hat {\mathbf {x} }}_{k\mid k}=(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}){\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\mathbf {z} _{k}$ This expression is reminiscent of a linear interpolation, $x=(1-t)a+tb$ for $t$ in [0, 1]. In our case: • $t$ is the Kalman gain ($\mathbf {K} _{k}$), a matrix that takes values from $0$ (high error in the sensor) to $I$ (low error). • $a$ is the value estimated from the model. • $b$ is the value from the measurement. This expression also resembles the alpha beta filter update step. Invariants If the model is accurate, and the values for ${\hat {\mathbf {x} }}_{0\mid 0}$ and $\mathbf {P} _{0\mid 0}$ accurately reflect the distribution of the initial state values, then the following invariants are preserved: ${\begin{aligned}\operatorname {E} [\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}]&=\operatorname {E} [\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}]=0\\\operatorname {E} [{\tilde {\mathbf {y} }}_{k}]&=0\end{aligned}}$ where $\operatorname {E} [\xi ]$ is the expected value of $\xi $. That is, all estimates have a mean error of zero.
Also: ${\begin{aligned}\mathbf {P} _{k\mid k}&=\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\\\mathbf {P} _{k\mid k-1}&=\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\\\mathbf {S} _{k}&=\operatorname {cov} \left({\tilde {\mathbf {y} }}_{k}\right)\end{aligned}}$ so covariance matrices accurately reflect the covariance of estimates. Estimation of the noise covariances Qk and Rk Practical implementation of a Kalman filter is often difficult because it is hard to obtain a good estimate of the noise covariance matrices Qk and Rk. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the autocovariance least-squares (ALS) technique that uses the time-lagged autocovariances of routine operating data to estimate the covariances.[31][32] The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online using the GNU General Public License.[33] The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters, and noise covariance, has been proposed.[34] The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, thus suggesting that the FKF algorithm may possibly be a worthwhile alternative to the Autocovariance Least-Squares methods. Optimality and performance It follows from theory that the Kalman filter provides optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated) and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters.[35] Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. After the covariances are estimated, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise, therefore the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose.[36] If the noise terms are distributed in a non-Gaussian manner, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature.[37][38] Example application, technical Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter. Since $\mathbf {F} ,\mathbf {H} ,\mathbf {R} ,\mathbf {Q} $ are constant, their time indices are dropped. The position and velocity of the truck are described by the linear state space $\mathbf {x} _{k}={\begin{bmatrix}x\\{\dot {x}}\end{bmatrix}}$ where ${\dot {x}}$ is the velocity, that is, the derivative of position with respect to time. We assume that between the (k − 1) and k timestep, uncontrolled forces cause a constant acceleration of ak that is normally distributed with mean 0 and standard deviation σa.
From Newton's laws of motion we conclude that $\mathbf {x} _{k}=\mathbf {F} \mathbf {x} _{k-1}+\mathbf {G} a_{k}$ (there is no $\mathbf {B} u$ term since there are no known control inputs. Instead, ak is the effect of an unknown input and $\mathbf {G} $ applies that effect to the state vector) where ${\begin{aligned}\mathbf {F} &={\begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}}\\[4pt]\mathbf {G} &={\begin{bmatrix}{\frac {1}{2}}{\Delta t}^{2}\\[6pt]\Delta t\end{bmatrix}}\end{aligned}}$ so that $\mathbf {x} _{k}=\mathbf {F} \mathbf {x} _{k-1}+\mathbf {w} _{k}$ where ${\begin{aligned}\mathbf {w} _{k}&\sim N(0,\mathbf {Q} )\\\mathbf {Q} &=\mathbf {G} \mathbf {G} ^{\textsf {T}}\sigma _{a}^{2}={\begin{bmatrix}{\frac {1}{4}}{\Delta t}^{4}&{\frac {1}{2}}{\Delta t}^{3}\\[6pt]{\frac {1}{2}}{\Delta t}^{3}&{\Delta t}^{2}\end{bmatrix}}\sigma _{a}^{2}.\end{aligned}}$ The matrix $\mathbf {Q} $ is not full rank (it is of rank one if $\Delta t\neq 0$). Hence, the distribution $N(0,\mathbf {Q} )$ is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is given by $\mathbf {w} _{k}\sim \mathbf {G} \cdot N\left(0,\sigma _{a}^{2}\right).$ At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise vk is also normally distributed, with mean 0 and standard deviation σz. $\mathbf {z} _{k}=\mathbf {Hx} _{k}+\mathbf {v} _{k}$ where $\mathbf {H} ={\begin{bmatrix}1&0\end{bmatrix}}$ and $\mathbf {R} =\mathrm {E} \left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]={\begin{bmatrix}\sigma _{z}^{2}\end{bmatrix}}$ We know the initial state of the truck with perfect precision, so we initialize ${\hat {\mathbf {x} }}_{0\mid 0}={\begin{bmatrix}0\\0\end{bmatrix}}$ and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix: $\mathbf {P} _{0\mid 0}={\begin{bmatrix}0&0\\0&0\end{bmatrix}}$ If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal: $\mathbf {P} _{0\mid 0}={\begin{bmatrix}\sigma _{x}^{2}&0\\0&\sigma _{\dot {x}}^{2}\end{bmatrix}}$ The filter will then prefer the information from the first measurements over the information already in the model. Asymptotic form For simplicity, assume that the control input $\mathbf {u} _{k}=\mathbf {0} $. Then the Kalman filter may be written: ${\hat {\mathbf {x} }}_{k\mid k}=\mathbf {F} _{k}{\hat {\mathbf {x} }}_{k-1\mid k-1}+\mathbf {K} _{k}[\mathbf {z} _{k}-\mathbf {H} _{k}\mathbf {F} _{k}{\hat {\mathbf {x} }}_{k-1\mid k-1}].$ A similar equation holds if we include a non-zero control input. Gain matrices $\mathbf {K} _{k}$ evolve independently of the measurements $\mathbf {z} _{k}$. From above, the four equations needed for updating the Kalman gain are as follows: ${\begin{aligned}\mathbf {P} _{k\mid k-1}&=\mathbf {F} _{k}\mathbf {P} _{k-1\mid k-1}\mathbf {F} _{k}^{\textsf {T}}+\mathbf {Q} _{k},\\\mathbf {S} _{k}&=\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k},\\\mathbf {K} _{k}&=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1},\\\mathbf {P} _{k|k}&=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k|k-1}.\end{aligned}}$ Since the gain matrices depend only on the model, and not the measurements, they may be computed offline.
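For instance, the gain sequence of the truck model can be iterated offline in a few lines of NumPy (a sketch using Δt = 1 and unit variances, anticipating the convergence example discussed next):

import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T * 1.0                     # sigma_a^2 = 1
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])                 # sigma_z^2 = 1

P = np.eye(2)                         # sigma_x^2 = sigma_xdot^2 = 1
for k in range(10):
    P = F @ P @ F.T + Q               # predicted covariance
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # gain for step k, no measurements needed
    P = (np.eye(2) - K @ H) @ P       # updated covariance
    print(k, K.ravel())               # K_k settles toward K_infinity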
Convergence of the gain matrices $\mathbf {K} _{k}$ to an asymptotic matrix $\mathbf {K} _{\infty }$ holds under conditions established in Walrand and Dimakis.[39] Simulations establish the number of steps to convergence. For the moving truck example described above, with $\Delta t=1$ and $\sigma _{a}^{2}=\sigma _{z}^{2}=\sigma _{x}^{2}=\sigma _{\dot {x}}^{2}=1$, simulation shows convergence in $10$ iterations. Using the asymptotic gain, and assuming $\mathbf {H} _{k}$ and $\mathbf {F} _{k}$ are independent of $k$, the Kalman filter becomes a linear time-invariant filter: ${\hat {\mathbf {x} }}_{k}=\mathbf {F} {\hat {\mathbf {x} }}_{k-1}+\mathbf {K} _{\infty }[\mathbf {z} _{k}-\mathbf {H} \mathbf {F} {\hat {\mathbf {x} }}_{k-1}].$ The asymptotic gain $\mathbf {K} _{\infty }$, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance $\mathbf {P} _{\infty }$:[39] $\mathbf {P} _{\infty }=\mathbf {F} \left(\mathbf {P} _{\infty }-\mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\left(\mathbf {H} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}+\mathbf {R} \right)^{-1}\mathbf {H} \mathbf {P} _{\infty }\right)\mathbf {F} ^{\textsf {T}}+\mathbf {Q} .$ The asymptotic gain is then computed as before. $\mathbf {K} _{\infty }=\mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\left(\mathbf {R} +\mathbf {H} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\right)^{-1}.$ Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by ${\hat {\mathbf {x} }}_{k+1}=\mathbf {F} {\hat {\mathbf {x} }}_{k}+\mathbf {B} \mathbf {u} _{k}+\mathbf {\overline {K}} _{\infty }[\mathbf {z} _{k}-\mathbf {H} {\hat {\mathbf {x} }}_{k}],$ where ${\overline {\mathbf {K} }}_{\infty }=\mathbf {F} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\left(\mathbf {R} +\mathbf {H} \mathbf {P} _{\infty }\mathbf {H} ^{\textsf {T}}\right)^{-1}.$ This leads to an estimator of the form ${\hat {\mathbf {x} }}_{k+1}=(\mathbf {F} -{\overline {\mathbf {K} }}_{\infty }\mathbf {H} ){\hat {\mathbf {x} }}_{k}+\mathbf {B} \mathbf {u} _{k}+\mathbf {\overline {K}} _{\infty }\mathbf {z} _{k}.$ Derivations The Kalman filter can be derived as a generalized least squares method operating on previous data.[40] Deriving the a posteriori estimate covariance matrix Starting with our invariant on the error covariance Pk | k as above $\mathbf {P} _{k\mid k}=\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)$ substitute in the definition of ${\hat {\mathbf {x} }}_{k\mid k}$ $\mathbf {P} _{k\mid k}=\operatorname {cov} \left[\mathbf {x} _{k}-\left({\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}{\tilde {\mathbf {y} }}_{k}\right)\right]$ and substitute ${\tilde {\mathbf {y} }}_{k}$ $\mathbf {P} _{k\mid k}=\operatorname {cov} \left(\mathbf {x} _{k}-\left[{\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\left(\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}\right)\right]\right)$ and $\mathbf {z} _{k}$ $\mathbf {P} _{k\mid k}=\operatorname {cov} \left(\mathbf {x} _{k}-\left[{\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\left(\mathbf {H} _{k}\mathbf {x} _{k}+\mathbf {v} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}\right)\right]\right)$ and by collecting the error vectors we get $\mathbf {P} _{k\mid k}=\operatorname {cov} \left[\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)-\mathbf {K} _{k}\mathbf {v} _{k}\right]$ Since the
measurement error vk is uncorrelated with the other terms, this becomes $\mathbf {P} _{k\mid k}=\operatorname {cov} \left[\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\right]+\operatorname {cov} \left[\mathbf {K} _{k}\mathbf {v} _{k}\right]$ by the properties of vector covariance this becomes $\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\operatorname {cov} \left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\operatorname {cov} \left(\mathbf {v} _{k}\right)\mathbf {K} _{k}^{\textsf {T}}$ which, using our invariant on Pk | k−1 and the definition of Rk, becomes $\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\mathbf {R} _{k}\mathbf {K} _{k}^{\textsf {T}}$ This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, this can be simplified further as shown below. Kalman gain derivation The Kalman filter is a minimum mean-square error estimator. The error in the a posteriori state estimation is $\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}$ We seek to minimize the expected value of the square of the magnitude of this vector, $\operatorname {E} \left[\left\|\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k|k}\right\|^{2}\right]$. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix $\mathbf {P} _{k|k}$. By expanding out the terms in the equation above and collecting, we get: ${\begin{aligned}\mathbf {P} _{k\mid k}&=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}+\mathbf {K} _{k}\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k}\right)\mathbf {K} _{k}^{\textsf {T}}\\[6pt]&=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}+\mathbf {K} _{k}\mathbf {S} _{k}\mathbf {K} _{k}^{\textsf {T}}\end{aligned}}$ The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved we find that ${\frac {\partial \;\operatorname {tr} (\mathbf {P} _{k\mid k})}{\partial \;\mathbf {K} _{k}}}=-2\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\right)^{\textsf {T}}+2\mathbf {K} _{k}\mathbf {S} _{k}=0.$ Solving this for Kk yields the Kalman gain: ${\begin{aligned}\mathbf {K} _{k}\mathbf {S} _{k}&=\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\right)^{\textsf {T}}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\\\Rightarrow \mathbf {K} _{k}&=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}\end{aligned}}$ This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used. Simplification of the a posteriori error covariance formula The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above.
Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that $\mathbf {K} _{k}\mathbf {S} _{k}\mathbf {K} _{k}^{\textsf {T}}=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}$ Referring back to our expanded formula for the a posteriori error covariance, $\mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\mathbf {K} _{k}^{\textsf {T}}+\mathbf {K} _{k}\mathbf {S} _{k}\mathbf {K} _{k}^{\textsf {T}}$ we find the last two terms cancel out, giving $\mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}=(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k})\mathbf {P} _{k\mid k-1}$ This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used. Sensitivity analysis The Kalman filtering equations provide an estimate of the state ${\hat {\mathbf {x} }}_{k\mid k}$ and its error covariance $\mathbf {P} _{k\mid k}$ recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.[41] In the absence of reliable statistics or the true values of noise covariance matrices $\mathbf {Q} _{k}$ and $\mathbf {R} _{k}$, the expression $\mathbf {P} _{k\mid k}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\mathbf {R} _{k}\mathbf {K} _{k}^{\textsf {T}}$ no longer provides the actual error covariance. In other words, $\mathbf {P} _{k\mid k}\neq E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]$. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices $\mathbf {F} _{k}$ and $\mathbf {H} _{k}$ that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs. This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by $\mathbf {Q} _{k}^{a}$ and $\mathbf {R} _{k}^{a}$ respectively, whereas the design values used in the estimator are $\mathbf {Q} _{k}$ and $\mathbf {R} _{k}$ respectively. The actual error covariance is denoted by $\mathbf {P} _{k\mid k}^{a}$ and $\mathbf {P} _{k\mid k}$ as computed by the Kalman filter is referred to as the Riccati variable. When $\mathbf {Q} _{k}\equiv \mathbf {Q} _{k}^{a}$ and $\mathbf {R} _{k}\equiv \mathbf {R} _{k}^{a}$, this means that $\mathbf {P} _{k\mid k}=\mathbf {P} _{k\mid k}^{a}$.
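The effect of such a mismatch can be made visible numerically. The following Monte Carlo sketch uses a scalar random-walk model invented for the demonstration: the filter is designed with Q but the data are generated with a larger Q^a, so the Riccati variable understates the actual error variance.

import numpy as np

rng = np.random.default_rng(1)
q_design, q_actual, r = 0.1, 1.0, 1.0   # design vs. actual process-noise variance
runs, steps = 2000, 40

errors = []
for _ in range(runs):
    x_true, x_hat, p = 0.0, 0.0, 1.0
    for _ in range(steps):
        x_true += rng.normal(0.0, np.sqrt(q_actual))  # true random walk
        z = x_true + rng.normal(0.0, np.sqrt(r))      # noisy measurement
        p += q_design                                 # predict with the design Q
        k = p / (p + r)                               # scalar Kalman gain
        x_hat += k * (z - x_hat)                      # update
        p *= (1 - k)
    errors.append(x_true - x_hat)

print("Riccati variable P:", p)                     # what the filter believes (~0.27)
print("empirical error variance:", np.var(errors))  # the actual value (~1.3)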
While computing the actual error covariance using $\mathbf {P} _{k\mid k}^{a}=E\left[\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)\left(\mathbf {x} _{k}-{\hat {\mathbf {x} }}_{k\mid k}\right)^{\textsf {T}}\right]$, substituting for ${\widehat {\mathbf {x} }}_{k\mid k}$ and using the fact that $E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}^{a}$ and $E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}^{a}$, results in the following recursive equations for $\mathbf {P} _{k\mid k}^{a}$: $\mathbf {P} _{k\mid k-1}^{a}=\mathbf {F} _{k}\mathbf {P} _{k-1\mid k-1}^{a}\mathbf {F} _{k}^{\textsf {T}}+\mathbf {Q} _{k}^{a}$ and $\mathbf {P} _{k\mid k}^{a}=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}^{a}\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)^{\textsf {T}}+\mathbf {K} _{k}\mathbf {R} _{k}^{a}\mathbf {K} _{k}^{\textsf {T}}$ While computing $\mathbf {P} _{k\mid k}$, by design the filter implicitly assumes that $E\left[\mathbf {w} _{k}\mathbf {w} _{k}^{\textsf {T}}\right]=\mathbf {Q} _{k}$ and $E\left[\mathbf {v} _{k}\mathbf {v} _{k}^{\textsf {T}}\right]=\mathbf {R} _{k}$. The recursive expressions for $\mathbf {P} _{k\mid k}^{a}$ and $\mathbf {P} _{k\mid k}$ are identical except for the presence of $\mathbf {Q} _{k}^{a}$ and $\mathbf {R} _{k}^{a}$ in place of the design values $\mathbf {Q} _{k}$ and $\mathbf {R} _{k}$ respectively. Research has been done to analyze the robustness of Kalman filter systems.[42] Square root form One problem with the Kalman filter is its numerical stability. If the process noise covariance Qk is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is positive-definite. Positive definite matrices have the property that they have a triangular matrix square root P = S·ST. This can be computed efficiently using the Cholesky factorization algorithm, but more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·UT, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix. Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions,[43]: 69  while on 21st-century computers they are only slightly more expensive.) Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.[43][44] The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter.[45] The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·LT structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix.[46] Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned.
The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables Hk·xk|k-1 that are associated with auxiliary observations in yk. The L·D·LT square-root filter requires orthogonalization of the observation vector.[44][45] This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).[47] Parallel form The Kalman filter is efficient for sequential data processing on central processing units (CPUs), but in its original form it is inefficient on parallel architectures such as graphics processing units (GPUs). It is, however, possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä (2021).[48] The filter solution can then be retrieved by the use of a prefix sum algorithm which can be efficiently implemented on a GPU.[49] This reduces the computational complexity from $O(N)$ in the number of time steps to $O(\log(N))$. Relationship to recursive Bayesian estimation The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.[50] In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM). Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state. $p(\mathbf {x} _{k}\mid \mathbf {x} _{0},\dots ,\mathbf {x} _{k-1})=p(\mathbf {x} _{k}\mid \mathbf {x} _{k-1})$ Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state. $p(\mathbf {z} _{k}\mid \mathbf {x} _{0},\dots ,\mathbf {x} _{k})=p(\mathbf {z} _{k}\mid \mathbf {x} _{k})$ Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as: $p\left(\mathbf {x} _{0},\dots ,\mathbf {x} _{k},\mathbf {z} _{1},\dots ,\mathbf {z} _{k}\right)=p\left(\mathbf {x} _{0}\right)\prod _{i=1}^{k}p\left(\mathbf {z} _{i}\mid \mathbf {x} _{i}\right)p\left(\mathbf {x} _{i}\mid \mathbf {x} _{i-1}\right)$ However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set. This results in the predict and update phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible $x_{k-1}$.
$p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k-1}\right)=\int p\left(\mathbf {x} _{k}\mid \mathbf {x} _{k-1}\right)p\left(\mathbf {x} _{k-1}\mid \mathbf {Z} _{k-1}\right)\,d\mathbf {x} _{k-1}$ The measurement set up to time t is $\mathbf {Z} _{t}=\left\{\mathbf {z} _{1},\dots ,\mathbf {z} _{t}\right\}$ The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state. $p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k}\right)={\frac {p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k-1}\right)}{p\left(\mathbf {z} _{k}\mid \mathbf {Z} _{k-1}\right)}}$ The denominator $p\left(\mathbf {z} _{k}\mid \mathbf {Z} _{k-1}\right)=\int p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)p\left(\mathbf {x} _{k}\mid \mathbf {Z} _{k-1}\right)\,d\mathbf {x} _{k}$ is a normalization term. The remaining probability density functions are ${\begin{aligned}p\left(\mathbf {x} _{k}\mid \mathbf {x} _{k-1}\right)&={\mathcal {N}}\left(\mathbf {F} _{k}\mathbf {x} _{k-1},\mathbf {Q} _{k}\right)\\p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)&={\mathcal {N}}\left(\mathbf {H} _{k}\mathbf {x} _{k},\mathbf {R} _{k}\right)\\p\left(\mathbf {x} _{k-1}\mid \mathbf {Z} _{k-1}\right)&={\mathcal {N}}\left({\hat {\mathbf {x} }}_{k-1},\mathbf {P} _{k-1}\right)\end{aligned}}$ The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes the best use of the measurements; therefore, the PDF for $\mathbf {x} _{k}$ given the measurements $\mathbf {Z} _{k}$ is the Kalman filter estimate. Marginal likelihood Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for generating a stream of random observations z = (z0, z1, z2, ...). Specifically, the process is 1. Sample a hidden state $\mathbf {x} _{0}$ from the Gaussian prior distribution $p\left(\mathbf {x} _{0}\right)={\mathcal {N}}\left({\hat {\mathbf {x} }}_{0\mid 0},\mathbf {P} _{0\mid 0}\right)$. 2. Sample an observation $\mathbf {z} _{0}$ from the observation model $p\left(\mathbf {z} _{0}\mid \mathbf {x} _{0}\right)={\mathcal {N}}\left(\mathbf {H} _{0}\mathbf {x} _{0},\mathbf {R} _{0}\right)$. 3. For $k=1,2,3,\ldots $, do 1. Sample the next hidden state $\mathbf {x} _{k}$ from the transition model $p\left(\mathbf {x} _{k}\mid \mathbf {x} _{k-1}\right)={\mathcal {N}}\left(\mathbf {F} _{k}\mathbf {x} _{k-1}+\mathbf {B} _{k}\mathbf {u} _{k},\mathbf {Q} _{k}\right).$ 2. Sample an observation $\mathbf {z} _{k}$ from the observation model $p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)={\mathcal {N}}\left(\mathbf {H} _{k}\mathbf {x} _{k},\mathbf {R} _{k}\right).$ This process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions. In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal.
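Written out, the generative process above is only a few lines. This sketch samples a short observation stream under a hypothetical constant-velocity model; all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(2)

F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x0_mean, P0 = np.zeros(2), np.eye(2)

x = rng.multivariate_normal(x0_mean, P0)          # sample the hidden prior state
zs = [rng.multivariate_normal(H @ x, R)]          # first observation
for k in range(1, 20):
    x = rng.multivariate_normal(F @ x, Q)         # transition model (no control input)
    zs.append(rng.multivariate_normal(H @ x, R))  # observation model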
In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison. It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations, $p(\mathbf {z} )=\prod _{k=0}^{T}p\left(\mathbf {z} _{k}\mid \mathbf {z} _{k-1},\ldots ,\mathbf {z} _{0}\right)$, and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate ${\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {P} _{k\mid k-1}.$ Thus the marginal likelihood is given by ${\begin{aligned}p(\mathbf {z} )&=\prod _{k=0}^{T}\int p\left(\mathbf {z} _{k}\mid \mathbf {x} _{k}\right)p\left(\mathbf {x} _{k}\mid \mathbf {z} _{k-1},\ldots ,\mathbf {z} _{0}\right)d\mathbf {x} _{k}\\&=\prod _{k=0}^{T}\int {\mathcal {N}}\left(\mathbf {z} _{k};\mathbf {H} _{k}\mathbf {x} _{k},\mathbf {R} _{k}\right){\mathcal {N}}\left(\mathbf {x} _{k};{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {P} _{k\mid k-1}\right)d\mathbf {x} _{k}\\&=\prod _{k=0}^{T}{\mathcal {N}}\left(\mathbf {z} _{k};\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {R} _{k}+\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\right)\\&=\prod _{k=0}^{T}{\mathcal {N}}\left(\mathbf {z} _{k};\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {S} _{k}\right),\end{aligned}}$ i.e., a product of Gaussian densities, each corresponding to the density of one observation $\mathbf {z} _{k}$ under the current filtering distribution $\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1},\mathbf {S} _{k}$. This can easily be computed as a simple recursive update; however, to avoid numerical underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood $\ell =\log p(\mathbf {z} )$ instead. Adopting the convention $\ell ^{(-1)}=0$, this can be done via the recursive update rule $\ell ^{(k)}=\ell ^{(k-1)}-{\frac {1}{2}}\left({\tilde {\mathbf {y} }}_{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}{\tilde {\mathbf {y} }}_{k}+\log \left|\mathbf {S} _{k}\right|+d_{y}\log 2\pi \right),$ where $d_{y}$ is the dimension of the measurement vector.[51] An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input; however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). For such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) will typically form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most likely one can be found.
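As an illustration, one step of the log-likelihood recursion above can be sketched in Python/NumPy as follows; the innovation and its covariance $\mathbf {S} _{k}$ are assumed to be supplied by an ordinary Kalman filter step:

```python
import numpy as np

def log_likelihood_update(ell, z, z_pred, S):
    """One step of the recursive log marginal likelihood update, given
    the innovation z - z_pred and its covariance S_k from a filter step.
    Start the recursion with ell = 0 (the convention l^(-1) = 0)."""
    y = z - z_pred                          # innovation ỹ_k
    d = z.shape[0]                          # measurement dimension d_y
    _, logdet = np.linalg.slogdet(S)        # log|S_k|, numerically safe
    quad = y @ np.linalg.solve(S, y)        # ỹᵀ S⁻¹ ỹ without forming S⁻¹
    return ell - 0.5 * (quad + logdet + d * np.log(2 * np.pi))
```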
Information filter In cases where the dimension of the observation vector y is larger than the dimension of the state vector x, the information filter can avoid the inversion of a larger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as: ${\begin{aligned}\mathbf {Y} _{k\mid k}&=\mathbf {P} _{k\mid k}^{-1}\\{\hat {\mathbf {y} }}_{k\mid k}&=\mathbf {P} _{k\mid k}^{-1}{\hat {\mathbf {x} }}_{k\mid k}\end{aligned}}$ Similarly the predicted covariance and state have equivalent information forms, defined as: ${\begin{aligned}\mathbf {Y} _{k\mid k-1}&=\mathbf {P} _{k\mid k-1}^{-1}\\{\hat {\mathbf {y} }}_{k\mid k-1}&=\mathbf {P} _{k\mid k-1}^{-1}{\hat {\mathbf {x} }}_{k\mid k-1}\end{aligned}}$ as have the measurement covariance and measurement vector, which are defined as: ${\begin{aligned}\mathbf {I} _{k}&=\mathbf {H} _{k}^{\textsf {T}}\mathbf {R} _{k}^{-1}\mathbf {H} _{k}\\\mathbf {i} _{k}&=\mathbf {H} _{k}^{\textsf {T}}\mathbf {R} _{k}^{-1}\mathbf {z} _{k}\end{aligned}}$ The information update now becomes a trivial sum.[52] ${\begin{aligned}\mathbf {Y} _{k\mid k}&=\mathbf {Y} _{k\mid k-1}+\mathbf {I} _{k}\\{\hat {\mathbf {y} }}_{k\mid k}&={\hat {\mathbf {y} }}_{k\mid k-1}+\mathbf {i} _{k}\end{aligned}}$ The main advantage of the information filter is that N measurements can be filtered at each time step simply by summing their information matrices and vectors. ${\begin{aligned}\mathbf {Y} _{k\mid k}&=\mathbf {Y} _{k\mid k-1}+\sum _{j=1}^{N}\mathbf {I} _{k,j}\\{\hat {\mathbf {y} }}_{k\mid k}&={\hat {\mathbf {y} }}_{k\mid k-1}+\sum _{j=1}^{N}\mathbf {i} _{k,j}\end{aligned}}$ To predict, the information filter can convert the information matrix and vector back to their state-space equivalents, or alternatively the information-space prediction can be used.[52] ${\begin{aligned}\mathbf {M} _{k}&=\left[\mathbf {F} _{k}^{-1}\right]^{\textsf {T}}\mathbf {Y} _{k-1\mid k-1}\mathbf {F} _{k}^{-1}\\\mathbf {C} _{k}&=\mathbf {M} _{k}\left[\mathbf {M} _{k}+\mathbf {Q} _{k}^{-1}\right]^{-1}\\\mathbf {L} _{k}&=\mathbf {I} -\mathbf {C} _{k}\\\mathbf {Y} _{k\mid k-1}&=\mathbf {L} _{k}\mathbf {M} _{k}+\mathbf {C} _{k}\mathbf {Q} _{k}^{-1}\mathbf {C} _{k}^{\textsf {T}}\\{\hat {\mathbf {y} }}_{k\mid k-1}&=\mathbf {L} _{k}\left[\mathbf {F} _{k}^{-1}\right]^{\textsf {T}}{\hat {\mathbf {y} }}_{k-1\mid k-1}\end{aligned}}$
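The additive form of the update above makes multi-sensor fusion particularly simple. The following minimal Python/NumPy sketch (the per-sensor lists are hypothetical inputs) sums the information contributions of N sensors:

```python
import numpy as np

def information_update(Y_pred, y_pred, H_list, R_list, z_list):
    """Information-filter update: N sensors are fused by summing their
    information matrices I_{k,j} and information vectors i_{k,j}."""
    Y, y = Y_pred.copy(), y_pred.copy()
    for H, R, z in zip(H_list, R_list, z_list):
        Rinv = np.linalg.inv(R)
        Y += H.T @ Rinv @ H        # I_{k,j} = Hᵀ R⁻¹ H
        y += H.T @ Rinv @ z        # i_{k,j} = Hᵀ R⁻¹ z
    return Y, y                    # recover x̂ = Y⁻¹ y and P = Y⁻¹ when needed
```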
Fixed-lag smoother The optimal fixed-lag smoother provides the optimal estimate of ${\hat {\mathbf {x} }}_{k-N\mid k}$ for a given fixed lag $N$ using the measurements from $\mathbf {z} _{1}$ to $\mathbf {z} _{k}$.[53] It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following: ${\begin{bmatrix}{\hat {\mathbf {x} }}_{t\mid t}\\{\hat {\mathbf {x} }}_{t-1\mid t}\\\vdots \\{\hat {\mathbf {x} }}_{t-N+1\mid t}\\\end{bmatrix}}={\begin{bmatrix}\mathbf {I} \\0\\\vdots \\0\\\end{bmatrix}}{\hat {\mathbf {x} }}_{t\mid t-1}+{\begin{bmatrix}0&\ldots &0\\\mathbf {I} &0&\vdots \\\vdots &\ddots &\vdots \\0&\ldots &\mathbf {I} \\\end{bmatrix}}{\begin{bmatrix}{\hat {\mathbf {x} }}_{t-1\mid t-1}\\{\hat {\mathbf {x} }}_{t-2\mid t-1}\\\vdots \\{\hat {\mathbf {x} }}_{t-N+1\mid t-1}\\\end{bmatrix}}+{\begin{bmatrix}\mathbf {K} ^{(0)}\\\mathbf {K} ^{(1)}\\\vdots \\\mathbf {K} ^{(N-1)}\\\end{bmatrix}}\mathbf {y} _{t\mid t-1}$ where:
• ${\hat {\mathbf {x} }}_{t\mid t-1}$ is estimated via a standard Kalman filter;
• $\mathbf {y} _{t\mid t-1}=\mathbf {z} _{t}-\mathbf {H} {\hat {\mathbf {x} }}_{t\mid t-1}$ is the innovation produced considering the estimate of the standard Kalman filter;
• the various ${\hat {\mathbf {x} }}_{t-i\mid t}$ with $i=1,\ldots ,N-1$ are new variables; i.e., they do not appear in the standard Kalman filter;
• the gains are computed via the following scheme: $\mathbf {K} ^{(i+1)}=\mathbf {P} ^{(i)}\mathbf {H} ^{\textsf {T}}\left[\mathbf {H} \mathbf {P} \mathbf {H} ^{\textsf {T}}+\mathbf {R} \right]^{-1}$ and $\mathbf {P} ^{(i)}=\mathbf {P} \left[\left(\mathbf {F} -\mathbf {K} \mathbf {H} \right)^{\textsf {T}}\right]^{i}$ where $\mathbf {P} $ and $\mathbf {K} $ are the prediction error covariance and the gains of the standard Kalman filter (i.e., $\mathbf {P} _{t\mid t-1}$).
If the estimation error covariance is defined so that $\mathbf {P} _{i}:=E\left[\left(\mathbf {x} _{t-i}-{\hat {\mathbf {x} }}_{t-i\mid t}\right)^{*}\left(\mathbf {x} _{t-i}-{\hat {\mathbf {x} }}_{t-i\mid t}\right)\mid z_{1}\ldots z_{t}\right],$ then we have that the improvement on the estimation of $\mathbf {x} _{t-i}$ is given by: $\mathbf {P} -\mathbf {P} _{i}=\sum _{j=0}^{i}\left[\mathbf {P} ^{(j)}\mathbf {H} ^{\textsf {T}}\left(\mathbf {H} \mathbf {P} \mathbf {H} ^{\textsf {T}}+\mathbf {R} \right)^{-1}\mathbf {H} \left(\mathbf {P} ^{(i)}\right)^{\textsf {T}}\right]$ Fixed-interval smoothers The optimal fixed-interval smoother provides the optimal estimate of ${\hat {\mathbf {x} }}_{k\mid n}$ ($k<n$) using the measurements from a fixed interval $\mathbf {z} _{1}$ to $\mathbf {z} _{n}$. This is also called "Kalman smoothing". There are several smoothing algorithms in common use. Rauch–Tung–Striebel The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed-interval smoothing.[54] The forward pass is the same as the regular Kalman filter algorithm. These filtered a priori and a posteriori state estimates ${\hat {\mathbf {x} }}_{k\mid k-1}$, ${\hat {\mathbf {x} }}_{k\mid k}$ and covariances $\mathbf {P} _{k\mid k-1}$, $\mathbf {P} _{k\mid k}$ are saved for use in the backward pass (for retrodiction). In the backward pass, we compute the smoothed state estimates ${\hat {\mathbf {x} }}_{k\mid n}$ and covariances $\mathbf {P} _{k\mid n}$. We start at the last time step and proceed backward in time using the following recursive equations: ${\begin{aligned}{\hat {\mathbf {x} }}_{k\mid n}&={\hat {\mathbf {x} }}_{k\mid k}+\mathbf {C} _{k}\left({\hat {\mathbf {x} }}_{k+1\mid n}-{\hat {\mathbf {x} }}_{k+1\mid k}\right)\\\mathbf {P} _{k\mid n}&=\mathbf {P} _{k\mid k}+\mathbf {C} _{k}\left(\mathbf {P} _{k+1\mid n}-\mathbf {P} _{k+1\mid k}\right)\mathbf {C} _{k}^{\textsf {T}}\end{aligned}}$ where $\mathbf {C} _{k}=\mathbf {P} _{k\mid k}\mathbf {F} _{k+1}^{\textsf {T}}\mathbf {P} _{k+1\mid k}^{-1}.$ ${\hat {\mathbf {x} }}_{k\mid k}$ is the a posteriori state estimate of timestep $k$ and ${\hat {\mathbf {x} }}_{k+1\mid k}$ is the a priori state estimate of timestep $k+1$. The same notation applies to the covariance.
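As an illustration, the backward pass can be sketched in Python/NumPy as follows, assuming the forward-pass estimates have been stored and, for brevity, a time-invariant $\mathbf {F} $:

```python
import numpy as np

def rts_smoother(xf, Pf, xp, Pp, F):
    """Backward pass of the Rauch–Tung–Striebel smoother.
    xf[k], Pf[k]: filtered (a posteriori) estimates x̂_{k|k}, P_{k|k};
    xp[k], Pp[k]: predicted (a priori) estimates x̂_{k+1|k}, P_{k+1|k},
    both saved from the forward Kalman pass; F assumed time-invariant."""
    n = len(xf)
    xs, Ps = xf.copy(), Pf.copy()                      # initialized at k = n-1
    for k in range(n - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k])         # gain C_k
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k])        # smoothed mean
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k]) @ C.T  # smoothed covariance
    return xs, Ps
```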
Modified Bryson–Frazier smoother An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed-interval smoother developed by Bierman.[44] This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive computation of data which are used at each observation time to compute the smoothed state and covariance. The recursive equations are ${\begin{aligned}{\tilde {\Lambda }}_{k}&=\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}\mathbf {H} _{k}+{\hat {\mathbf {C} }}_{k}^{\textsf {T}}{\hat {\Lambda }}_{k}{\hat {\mathbf {C} }}_{k}\\{\hat {\Lambda }}_{k-1}&=\mathbf {F} _{k}^{\textsf {T}}{\tilde {\Lambda }}_{k}\mathbf {F} _{k}\\{\hat {\Lambda }}_{n}&=0\\{\tilde {\lambda }}_{k}&=-\mathbf {H} _{k}^{\textsf {T}}\mathbf {S} _{k}^{-1}\mathbf {y} _{k}+{\hat {\mathbf {C} }}_{k}^{\textsf {T}}{\hat {\lambda }}_{k}\\{\hat {\lambda }}_{k-1}&=\mathbf {F} _{k}^{\textsf {T}}{\tilde {\lambda }}_{k}\\{\hat {\lambda }}_{n}&=0\end{aligned}}$ where $\mathbf {S} _{k}$ is the residual covariance and ${\hat {\mathbf {C} }}_{k}=\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}$. The smoothed state and covariance can then be found by substitution in the equations ${\begin{aligned}\mathbf {P} _{k\mid n}&=\mathbf {P} _{k\mid k}-\mathbf {P} _{k\mid k}{\hat {\Lambda }}_{k}\mathbf {P} _{k\mid k}\\\mathbf {x} _{k\mid n}&=\mathbf {x} _{k\mid k}-\mathbf {P} _{k\mid k}{\hat {\lambda }}_{k}\end{aligned}}$ or ${\begin{aligned}\mathbf {P} _{k\mid n}&=\mathbf {P} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}{\tilde {\Lambda }}_{k}\mathbf {P} _{k\mid k-1}\\\mathbf {x} _{k\mid n}&=\mathbf {x} _{k\mid k-1}-\mathbf {P} _{k\mid k-1}{\tilde {\lambda }}_{k}.\end{aligned}}$ An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Minimum-variance smoother The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear and that their parameters and the noise statistics are known precisely.[55] This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter. The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by ${\begin{aligned}{\hat {\mathbf {x} }}_{k+1\mid k}&=(\mathbf {F} _{k}-\mathbf {K} _{k}\mathbf {H} _{k}){\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\mathbf {z} _{k}\\\alpha _{k}&=-\mathbf {S} _{k}^{-{\frac {1}{2}}}\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {S} _{k}^{-{\frac {1}{2}}}\mathbf {z} _{k}\end{aligned}}$ The above system is known as the inverse Wiener–Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass $\beta _{k}$ may be calculated by operating the forward equations on the time-reversed $\alpha _{k}$ and time reversing the result. In the case of output estimation, the smoothed estimate is given by ${\hat {\mathbf {y} }}_{k\mid N}=\mathbf {z} _{k}-\mathbf {R} _{k}\beta _{k}$ Taking the causal part of this minimum-variance smoother yields ${\hat {\mathbf {y} }}_{k\mid k}=\mathbf {z} _{k}-\mathbf {R} _{k}\mathbf {S} _{k}^{-{\frac {1}{2}}}\alpha _{k}$ which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in the literature.[56][57] Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often, uncertainties remain within the problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.[58] In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering). Frequency-weighted Kalman filters Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest. Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let $\mathbf {y} -{\hat {\mathbf {y} }}$ denote the output estimation error exhibited by a conventional Kalman filter. Also, let $\mathbf {W} $ denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of $\mathbf {W} \left(\mathbf {y} -{\hat {\mathbf {y} }}\right)$ arises by simply constructing $\mathbf {W} ^{-1}{\hat {\mathbf {y} }}$. The design of $\mathbf {W} $ remains an open question. One way of proceeding is to identify a system which generates the estimation error and to set $\mathbf {W} $ equal to the inverse of that system.[59] This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers. Nonlinear filters The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The nonlinearity can be associated either with the process model or with the observation model or with both. The most common variants of Kalman filters for nonlinear systems are the extended Kalman filter and the unscented Kalman filter. Which filter is more suitable depends on the non-linearity indices of the process and observation models.[60] Extended Kalman filter Main article: Extended Kalman filter In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions, provided they are differentiable. ${\begin{aligned}\mathbf {x} _{k}&=f(\mathbf {x} _{k-1},\mathbf {u} _{k})+\mathbf {w} _{k}\\\mathbf {z} _{k}&=h(\mathbf {x} _{k})+\mathbf {v} _{k}\end{aligned}}$ The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed. At each timestep the Jacobian is evaluated at the current predicted state. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
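A single EKF predict and update cycle can be sketched as follows. This is a minimal Python/NumPy illustration in which the nonlinear functions and their Jacobian callables (given the hypothetical names F_jac and H_jac here) are supplied by the caller:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One extended-Kalman-filter cycle. f, h are the nonlinear transition
    and observation functions; F_jac, H_jac return their Jacobians
    evaluated at the given point (analytic or numerical, caller's choice)."""
    # Predict: propagate the mean through f, the covariance through the Jacobian
    F = F_jac(x, u)
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h around the predicted state
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```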
Unscented Kalman filter When the state transition and observation models—that is, the predict and update functions $f$ and $h$—are highly nonlinear, the extended Kalman filter can give particularly poor performance.[61][62] This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF)[61] uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are then formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points is used. It should be remarked that it is always possible to construct new UKFs in a consistent way.[63] For certain systems, the resulting UKF more accurately estimates the true mean and covariance.[64] This can be verified with Monte Carlo sampling or Taylor series expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable). Sigma points For a random vector $\mathbf {x} =(x_{1},\dots ,x_{L})$, sigma points are any set of vectors $\{\mathbf {s} _{0},\dots ,\mathbf {s} _{N}\}={\bigl \{}{\begin{pmatrix}s_{0,1}&s_{0,2}&\ldots &s_{0,L}\end{pmatrix}},\dots ,{\begin{pmatrix}s_{N,1}&s_{N,2}&\ldots &s_{N,L}\end{pmatrix}}{\bigr \}}$ attributed with
• first-order weights $W_{0}^{a},\dots ,W_{N}^{a}$ that fulfill
 1. $\sum _{j=0}^{N}W_{j}^{a}=1$
 2. for all $i=1,\dots ,L$: $E[x_{i}]=\sum _{j=0}^{N}W_{j}^{a}s_{j,i}$
• second-order weights $W_{0}^{c},\dots ,W_{N}^{c}$ that fulfill
 1. $\sum _{j=0}^{N}W_{j}^{c}=1$
 2. for all pairs $(i,l)\in \{1,\dots ,L\}^{2}:E[x_{i}x_{l}]=\sum _{j=0}^{N}W_{j}^{c}s_{j,i}s_{j,l}$.
A simple choice of sigma points and weights for $\mathbf {x} _{k-1\mid k-1}$ in the UKF algorithm is ${\begin{aligned}\mathbf {s} _{0}&={\hat {\mathbf {x} }}_{k-1\mid k-1}\\-1&<W_{0}^{a}=W_{0}^{c}<1\\\mathbf {s} _{j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}+{\sqrt {\frac {L}{1-W_{0}}}}\mathbf {A} _{j},\quad j=1,\dots ,L\\\mathbf {s} _{L+j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}-{\sqrt {\frac {L}{1-W_{0}}}}\mathbf {A} _{j},\quad j=1,\dots ,L\\W_{j}^{a}&=W_{j}^{c}={\frac {1-W_{0}}{2L}},\quad j=1,\dots ,2L\end{aligned}}$ where ${\hat {\mathbf {x} }}_{k-1\mid k-1}$ is the mean estimate of $\mathbf {x} _{k-1\mid k-1}$. The vector $\mathbf {A} _{j}$ is the jth column of $\mathbf {A} $ where $\mathbf {P} _{k-1\mid k-1}=\mathbf {AA} ^{\textsf {T}}$. Typically, $\mathbf {A} $ is obtained via Cholesky decomposition of $\mathbf {P} _{k-1\mid k-1}$. With some care the filter equations can be expressed in such a way that $\mathbf {A} $ is evaluated directly without intermediate calculations of $\mathbf {P} _{k-1\mid k-1}$. This is referred to as the square-root unscented Kalman filter.[65] The weight of the mean value, $W_{0}$, can be chosen arbitrarily.
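This simple symmetric choice translates directly into code; a minimal Python/NumPy sketch (using equal first- and second-order weights, as above) is:

```python
import numpy as np

def simple_sigma_points(x_mean, P, W0=1/3):
    """Symmetric sigma-point set described above: the mean plus 2L points
    along the columns of a Cholesky factor A of P, scaled by sqrt(L/(1-W0)).
    The ordering of the non-central points is immaterial here because they
    all carry the same weight."""
    L = len(x_mean)
    A = np.linalg.cholesky(P)                 # P = A Aᵀ
    scale = np.sqrt(L / (1.0 - W0))
    pts = [x_mean]
    for j in range(L):
        pts.append(x_mean + scale * A[:, j])  # s_j
        pts.append(x_mean - scale * A[:, j])  # s_{L+j}
    W = np.full(2 * L + 1, (1.0 - W0) / (2 * L))
    W[0] = W0                                 # W_0^a = W_0^c chosen freely in (-1, 1)
    return np.array(pts), W
```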
Another popular parameterization (which generalizes the above) is ${\begin{aligned}\mathbf {s} _{0}&={\hat {\mathbf {x} }}_{k-1\mid k-1}\\W_{0}^{a}&={\frac {\alpha ^{2}\kappa -L}{\alpha ^{2}\kappa }}\\W_{0}^{c}&=W_{0}^{a}+1-\alpha ^{2}+\beta \\\mathbf {s} _{j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}+\alpha {\sqrt {\kappa }}\mathbf {A} _{j},\quad j=1,\dots ,L\\\mathbf {s} _{L+j}&={\hat {\mathbf {x} }}_{k-1\mid k-1}-\alpha {\sqrt {\kappa }}\mathbf {A} _{j},\quad j=1,\dots ,L\\W_{j}^{a}&=W_{j}^{c}={\frac {1}{2\alpha ^{2}\kappa }},\quad j=1,\dots ,2L.\end{aligned}}$ $\alpha $ and $\kappa $ control the spread of the sigma points. $\beta $ is related to the distribution of $x$. Appropriate values depend on the problem at hand, but a typical recommendation is $\alpha =10^{-3}$, $\kappa =1$, and $\beta =2$. However, a larger value of $\alpha $ (e.g., $\alpha =1$) may be beneficial in order to better capture the spread of the distribution and possible nonlinearities.[66] If the true distribution of $x$ is Gaussian, $\beta =2$ is optimal.[67] Predict As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa. Given estimates of the mean and covariance, ${\hat {\mathbf {x} }}_{k-1\mid k-1}$ and $\mathbf {P} _{k-1\mid k-1}$, one obtains $N=2L+1$ sigma points as described in the section above. The sigma points are propagated through the transition function f. $\mathbf {x} _{j}=f\left(\mathbf {s} _{j}\right)\quad j=0,\dots ,2L$. The propagated sigma points are weighted to produce the predicted mean and covariance. ${\begin{aligned}{\hat {\mathbf {x} }}_{k\mid k-1}&=\sum _{j=0}^{2L}W_{j}^{a}\mathbf {x} _{j}\\\mathbf {P} _{k\mid k-1}&=\sum _{j=0}^{2L}W_{j}^{c}\left(\mathbf {x} _{j}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)\left(\mathbf {x} _{j}-{\hat {\mathbf {x} }}_{k\mid k-1}\right)^{\textsf {T}}+\mathbf {Q} _{k}\end{aligned}}$ where $W_{j}^{a}$ are the first-order weights of the original sigma points, and $W_{j}^{c}$ are the second-order weights. The matrix $\mathbf {Q} _{k}$ is the covariance of the transition noise, $\mathbf {w} _{k}$. Update Given prediction estimates ${\hat {\mathbf {x} }}_{k\mid k-1}$ and $\mathbf {P} _{k\mid k-1}$, a new set of $N=2L+1$ sigma points $\mathbf {s} _{0},\dots ,\mathbf {s} _{2L}$ with corresponding first-order weights $W_{0}^{a},\dots W_{2L}^{a}$ and second-order weights $W_{0}^{c},\dots ,W_{2L}^{c}$ is calculated.[68] These sigma points are transformed through the measurement function $h$. $\mathbf {z} _{j}=h(\mathbf {s} _{j}),\,\,j=0,1,\dots ,2L$. Then the empirical mean and covariance of the transformed points are calculated. ${\begin{aligned}{\hat {\mathbf {z} }}&=\sum _{j=0}^{2L}W_{j}^{a}\mathbf {z} _{j}\\[6pt]{\hat {\mathbf {S} }}_{k}&=\sum _{j=0}^{2L}W_{j}^{c}(\mathbf {z} _{j}-{\hat {\mathbf {z} }})(\mathbf {z} _{j}-{\hat {\mathbf {z} }})^{\textsf {T}}+\mathbf {R_{k}} \end{aligned}}$ where $\mathbf {R} _{k}$ is the covariance matrix of the observation noise, $\mathbf {v} _{k}$. The cross-covariance matrix is also needed: ${\begin{aligned}\mathbf {C_{xz}} &=\sum _{j=0}^{2L}W_{j}^{c}(\mathbf {x} _{j}-{\hat {\mathbf {x} }}_{k|k-1})(\mathbf {z} _{j}-{\hat {\mathbf {z} }})^{\textsf {T}}.\end{aligned}}$ The Kalman gain is ${\begin{aligned}\mathbf {K} _{k}=\mathbf {C_{xz}} {\hat {\mathbf {S} }}_{k}^{-1}.\end{aligned}}$ The updated mean and covariance estimates are ${\begin{aligned}{\hat {\mathbf {x} }}_{k\mid k}&={\hat {\mathbf {x} }}_{k|k-1}+\mathbf {K} _{k}(\mathbf {z} _{k}-{\hat {\mathbf {z} }})\\\mathbf {P} _{k\mid k}&=\mathbf {P} _{k\mid k-1}-\mathbf {K} _{k}{\hat {\mathbf {S} }}_{k}\mathbf {K} _{k}^{\textsf {T}}.\end{aligned}}$
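Combining the predict and update equations gives the complete filter cycle. The following minimal Python/NumPy sketch assumes a generic sigma-point rule such as the simple symmetric set sketched earlier, with equal first- and second-order weights for brevity:

```python
import numpy as np

def ukf_step(x, P, z, f, h, Q, R, sigma_fn):
    """One unscented-Kalman-filter cycle following the equations above.
    sigma_fn(x, P) -> (points, weights) is any sigma-point rule; equal
    first- and second-order weights are assumed here for brevity."""
    # Predict: propagate sigma points through the transition function
    S_pts, W = sigma_fn(x, P)
    Xp = np.array([f(s) for s in S_pts])
    x_pred = W @ Xp
    P_pred = (W[:, None] * (Xp - x_pred)).T @ (Xp - x_pred) + Q
    # Update: redraw sigma points around the prediction, push through h
    S_pts, W = sigma_fn(x_pred, P_pred)
    Zp = np.array([h(s) for s in S_pts])
    z_pred = W @ Zp
    S = (W[:, None] * (Zp - z_pred)).T @ (Zp - z_pred) + R
    C = (W[:, None] * (S_pts - x_pred)).T @ (Zp - z_pred)   # cross covariance
    K = C @ np.linalg.inv(S)                                # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T
```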
Discriminative Kalman filter When the observation model $p(\mathbf {z} _{k}\mid \mathbf {x} _{k})$ is highly non-linear and/or non-Gaussian, it may prove advantageous to apply Bayes' rule and estimate $p(\mathbf {z} _{k}\mid \mathbf {x} _{k})\approx {\frac {p(\mathbf {x} _{k}\mid \mathbf {z} _{k})}{p(\mathbf {x} _{k})}}$ where $p(\mathbf {x} _{k}\mid \mathbf {z} _{k})\approx {\mathcal {N}}(g(\mathbf {z} _{k}),Q(\mathbf {z} _{k}))$ for nonlinear functions $g,Q$. This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations. Under a stationary state model ${\begin{aligned}p(\mathbf {x} _{1})&={\mathcal {N}}(0,\mathbf {T} ),\\p(\mathbf {x} _{k}\mid \mathbf {x} _{k-1})&={\mathcal {N}}(\mathbf {F} \mathbf {x} _{k-1},\mathbf {C} ),\end{aligned}}$ where $\mathbf {T} =\mathbf {F} \mathbf {T} \mathbf {F} ^{\intercal }+\mathbf {C} $, if $p(\mathbf {x} _{k}\mid \mathbf {z} _{1:k})\approx {\mathcal {N}}({\hat {\mathbf {x} }}_{k|k-1},\mathbf {P} _{k|k-1}),$ then given a new observation $\mathbf {z} _{k}$, it follows that[69] $p(\mathbf {x} _{k+1}\mid \mathbf {z} _{1:k+1})\approx {\mathcal {N}}({\hat {\mathbf {x} }}_{k+1|k},\mathbf {P} _{k+1|k})$ where ${\begin{aligned}\mathbf {M} _{k+1}&=\mathbf {F} \mathbf {P} _{k|k-1}\mathbf {F} ^{\intercal }+\mathbf {C} ,\\\mathbf {P} _{k+1|k}&=(\mathbf {M} _{k+1}^{-1}+Q(\mathbf {z} _{k})^{-1}-\mathbf {T} ^{-1})^{-1},\\{\hat {\mathbf {x} }}_{k+1|k}&=\mathbf {P} _{k+1|k}(\mathbf {M} _{k+1}^{-1}\mathbf {F} {\hat {\mathbf {x} }}_{k|k-1}+Q(\mathbf {z} _{k})^{-1}g(\mathbf {z} _{k})).\end{aligned}}$ Note that this approximation requires $Q(\mathbf {z} _{k})^{-1}-\mathbf {T} ^{-1}$ to be positive-definite; in the case that it is not, $\mathbf {P} _{k+1|k}=(\mathbf {M} _{k+1}^{-1}+Q(\mathbf {z} _{k})^{-1})^{-1}$ is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states[70] and can be used to build filters that are particularly robust to nonstationarities in the observation model.[71]
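A minimal Python/NumPy sketch of this recursion follows; the functions g and Q would in practice be regression models learned from data and are passed here as callables:

```python
import numpy as np

def dkf_step(x, P, z, F, C, T, g, Q_of_z):
    """One step of the discriminative Kalman filter recursion above.
    g(z) and Q_of_z(z) approximate the mean and covariance of p(x|z)."""
    M = F @ P @ F.T + C                                  # M_{k+1}
    Minv, Tinv = np.linalg.inv(M), np.linalg.inv(T)
    Qinv = np.linalg.inv(Q_of_z(z))
    if np.all(np.linalg.eigvalsh(Qinv - Tinv) > 0):      # condition from the text
        P_new = np.linalg.inv(Minv + Qinv - Tinv)
    else:                                                # fallback when not positive-definite
        P_new = np.linalg.inv(Minv + Qinv)
    x_new = P_new @ (Minv @ F @ x + Qinv @ g(z))
    return x_new, P_new
```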
Adaptive Kalman filter Adaptive Kalman filters adapt to process dynamics that are not modeled in the process model $\mathbf {F} (t)$, as happens, for example, with a maneuvering target when a constant-velocity (reduced-order) Kalman filter is employed for tracking.[72] Kalman–Bucy filter Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous-time version of Kalman filtering.[73][74] It is based on the state space model ${\begin{aligned}{\frac {d}{dt}}\mathbf {x} (t)&=\mathbf {F} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t)+\mathbf {w} (t)\\\mathbf {z} (t)&=\mathbf {H} (t)\mathbf {x} (t)+\mathbf {v} (t)\end{aligned}}$ where $\mathbf {Q} (t)$ and $\mathbf {R} (t)$ represent the intensities (or, more accurately, the power spectral density (PSD) matrices) of the two white noise terms $\mathbf {w} (t)$ and $\mathbf {v} (t)$, respectively. The filter consists of two differential equations, one for the state estimate and one for the covariance: ${\begin{aligned}{\frac {d}{dt}}{\hat {\mathbf {x} }}(t)&=\mathbf {F} (t){\hat {\mathbf {x} }}(t)+\mathbf {B} (t)\mathbf {u} (t)+\mathbf {K} (t)\left(\mathbf {z} (t)-\mathbf {H} (t){\hat {\mathbf {x} }}(t)\right)\\{\frac {d}{dt}}\mathbf {P} (t)&=\mathbf {F} (t)\mathbf {P} (t)+\mathbf {P} (t)\mathbf {F} ^{\textsf {T}}(t)+\mathbf {Q} (t)-\mathbf {K} (t)\mathbf {R} (t)\mathbf {K} ^{\textsf {T}}(t)\end{aligned}}$ where the Kalman gain is given by $\mathbf {K} (t)=\mathbf {P} (t)\mathbf {H} ^{\textsf {T}}(t)\mathbf {R} ^{-1}(t)$ Note that in this expression for $\mathbf {K} (t)$ the covariance of the observation noise $\mathbf {R} (t)$ represents at the same time the covariance of the prediction error (or innovation) ${\tilde {\mathbf {y} }}(t)=\mathbf {z} (t)-\mathbf {H} (t){\hat {\mathbf {x} }}(t)$; these covariances are equal only in the case of continuous time.[75] The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time. The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter.
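The two coupled differential equations can be integrated numerically; the following crude Euler sketch in Python/NumPy (any ODE solver could be substituted, and the control term is omitted for brevity) illustrates the structure:

```python
import numpy as np

def kalman_bucy_euler(x, P, z_fn, F, H, Q, R, dt, n_steps):
    """Euler integration of the two Kalman–Bucy differential equations
    above. z_fn(t) supplies the continuous measurement signal."""
    Rinv = np.linalg.inv(R)
    for i in range(n_steps):
        t = i * dt
        K = P @ H.T @ Rinv                          # Kalman gain K(t)
        dx = F @ x + K @ (z_fn(t) - H @ x)          # state-estimate ODE
        dP = F @ P + P @ F.T + Q - K @ R @ K.T      # matrix Riccati ODE
        x, P = x + dt * dx, P + dt * dP
    return x, P
```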
Hybrid Kalman filter Most physical systems are represented as continuous-time models, while discrete-time measurements are made frequently for state estimation via a digital processor. Therefore, the system model and measurement model are given by ${\begin{aligned}{\dot {\mathbf {x} }}(t)&=\mathbf {F} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t)+\mathbf {w} (t),&\mathbf {w} (t)&\sim N\left(\mathbf {0} ,\mathbf {Q} (t)\right)\\\mathbf {z} _{k}&=\mathbf {H} _{k}\mathbf {x} _{k}+\mathbf {v} _{k},&\mathbf {v} _{k}&\sim N(\mathbf {0} ,\mathbf {R} _{k})\end{aligned}}$ where $\mathbf {x} _{k}=\mathbf {x} (t_{k})$. Initialize ${\hat {\mathbf {x} }}_{0\mid 0}=E\left[\mathbf {x} (t_{0})\right],\mathbf {P} _{0\mid 0}=\operatorname {Var} \left[\mathbf {x} \left(t_{0}\right)\right]$ Predict ${\begin{aligned}{\dot {\hat {\mathbf {x} }}}(t)&=\mathbf {F} (t){\hat {\mathbf {x} }}(t)+\mathbf {B} (t)\mathbf {u} (t){\text{, with }}{\hat {\mathbf {x} }}\left(t_{k-1}\right)={\hat {\mathbf {x} }}_{k-1\mid k-1}\\\Rightarrow {\hat {\mathbf {x} }}_{k\mid k-1}&={\hat {\mathbf {x} }}\left(t_{k}\right)\\{\dot {\mathbf {P} }}(t)&=\mathbf {F} (t)\mathbf {P} (t)+\mathbf {P} (t)\mathbf {F} (t)^{\textsf {T}}+\mathbf {Q} (t){\text{, with }}\mathbf {P} \left(t_{k-1}\right)=\mathbf {P} _{k-1\mid k-1}\\\Rightarrow \mathbf {P} _{k\mid k-1}&=\mathbf {P} \left(t_{k}\right)\end{aligned}}$ The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., $\mathbf {K} (t)=0$. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step. For the case of linear time-invariant systems, the continuous-time dynamics can be exactly discretized into a discrete-time system using matrix exponentials. Update ${\begin{aligned}\mathbf {K} _{k}&=\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}\left(\mathbf {H} _{k}\mathbf {P} _{k\mid k-1}\mathbf {H} _{k}^{\textsf {T}}+\mathbf {R} _{k}\right)^{-1}\\{\hat {\mathbf {x} }}_{k\mid k}&={\hat {\mathbf {x} }}_{k\mid k-1}+\mathbf {K} _{k}\left(\mathbf {z} _{k}-\mathbf {H} _{k}{\hat {\mathbf {x} }}_{k\mid k-1}\right)\\\mathbf {P} _{k\mid k}&=\left(\mathbf {I} -\mathbf {K} _{k}\mathbf {H} _{k}\right)\mathbf {P} _{k\mid k-1}\end{aligned}}$ The update equations are identical to those of the discrete-time Kalman filter. Variants for the recovery of sparse signals The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works[76][77][78] utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
Relation to Gaussian processes Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for Gaussian process regression.[79] Applications • Attitude and heading reference systems • Autopilot • Electric battery state of charge (SoC) estimation[80][81] • Brain–computer interfaces[69][71][70] • Chaotic signals • Tracking and vertex fitting of charged particles in particle detectors[82] • Tracking of objects in computer vision • Dynamic positioning in shipping • Economics, in particular macroeconomics, time series analysis, and econometrics[83] • Inertial guidance system • Nuclear medicine – single photon emission computed tomography image restoration[84] • Orbit determination • Power system state estimation • Radar tracker • Satellite navigation systems • Seismology[85] • Sensorless control of AC motor variable-frequency drives • Simultaneous localization and mapping • Speech enhancement • Visual odometry • Weather forecasting • Navigation system • 3D modeling • Structural health monitoring • Human sensorimotor processing[86] See also • Alpha beta filter • Inverse-variance weighting • Covariance intersection • Data assimilation • Ensemble Kalman filter • Extended Kalman filter • Fast Kalman filter • Filtering problem (stochastic processes) • Generalized filtering • Invariant extended Kalman filter • Kernel adaptive filter • Masreliez's theorem • Moving horizon estimation • Particle filter estimator • PID controller • Predictor–corrector method • Recursive least squares filter • Schmidt–Kalman filter • Separation principle • Sliding mode control • State-transition matrix • Stochastic differential equations • Switching Kalman filter References 1. Stratonovich, R. L. (1959). Optimum nonlinear systems which bring about a separation of a signal with constant parameters from noise. Radiofizika, 2:6, pp. 892–901. 2. Stratonovich, R. L. (1959). On the theory of optimal non-linear filtering of random functions. Theory of Probability and Its Applications, 4, pp. 223–225. 3. Stratonovich, R. L. (1960) Application of the Markov processes theory to optimal filtering. Radio Engineering and Electronic Physics, 5:11, pp. 1–19. 4. Stratonovich, R. L. (1960). Conditional Markov Processes. Theory of Probability and Its Applications, 5, pp. 156–178. 5. Stepanov, O. A. (15 May 2011). "Kalman filtering: Past and present. An outlook from Russia. (On the occasion of the 80th birthday of Rudolf Emil Kalman)". Gyroscopy and Navigation. 2 (2): 105. doi:10.1134/S2075108711020076. S2CID 53120402. 6. Fauzi, Hilman; Batool, Uzma (15 July 2019). "A Three-bar Truss Design using Single-solution Simulated Kalman Filter Optimizer". Mekatronika. 1 (2): 98–102. doi:10.15282/mekatronika.v1i2.4991. S2CID 222355496. 7. Paul Zarchan; Howard Musoff (2000). Fundamentals of Kalman Filtering: A Practical Approach. American Institute of Aeronautics and Astronautics, Incorporated. ISBN 978-1-56347-455-2. 8. Lora-Millan, Julio S.; Hidalgo, Andres F.; Rocon, Eduardo (2021). "An IMUs-Based Extended Kalman Filter to Estimate Gait Lower Limb Sagittal Kinematics for the Control of Wearable Robotic Devices". IEEE Access. 9: 144540–144554. doi:10.1109/ACCESS.2021.3122160. ISSN 2169-3536. S2CID 239938971. 9. Kalita, Diana; Lyakhov, Pavel (December 2022). "Moving Object Detection Based on a Combination of Kalman Filter and Median Filtering". Big Data and Cognitive Computing. 6 (4): 142. doi:10.3390/bdcc6040142. ISSN 2504-2289. 10. Ghysels, Eric; Marcellino, Massimiliano (2018). 
Applied Economic Forecasting using Time Series Methods. New York, NY: Oxford University Press. p. 419. ISBN 978-0-19-062201-5. OCLC 1010658777. 11. Azzam, M. Abdullah; Batool, Uzma; Fauzi, Hilman (15 July 2019). "Design of an Helical Spring using Single-solution Simulated Kalman Filter Optimizer". Mekatronika. 1 (2): 93–97. doi:10.15282/mekatronika.v1i2.4990. S2CID 221855079. 12. Wolpert, Daniel; Ghahramani, Zoubin (2000). "Computational principles of movement neuroscience". Nature Neuroscience. 3: 1212–7. doi:10.1038/81497. PMID 11127840. S2CID 736756. 13. Kalman, R. E. (1960). "A New Approach to Linear Filtering and Prediction Problems". Journal of Basic Engineering. 82: 35–45. doi:10.1115/1.3662552. S2CID 1242324. 14. Humpherys, Jeffrey (2012). "A Fresh Look at the Kalman Filter". SIAM Review. 54 (4): 801–823. doi:10.1137/100799666. 15. Uhlmann, Jeffrey; Julier, Simon (2022). "Gaussianity and the Kalman Filter: A Simple Yet Complicated Relationship" (PDF). Journal de Ciencia e Ingeniería. 14 (1): 21–26. doi:10.46571/JCI.2022.1.2. S2CID 251143915. 16. Li, Wangyan; Wang, Zidong; Wei, Guoliang; Ma, Lifeng; Hu, Jun; Ding, Derui (2015). "A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks". Discrete Dynamics in Nature and Society. 2015: 1–12. doi:10.1155/2015/683701. ISSN 1026-0226. 17. Li, Wangyan; Wang, Zidong; Ho, Daniel W. C.; Wei, Guoliang (2019). "On Boundedness of Error Covariances for Kalman Consensus Filtering Problems". IEEE Transactions on Automatic Control. 65 (6): 2654–2661. doi:10.1109/TAC.2019.2942826. ISSN 0018-9286. S2CID 204196474. 18. Lauritzen, S. L. (December 1981). "Time series analysis in 1880. A discussion of contributions made by T.N. Thiele". International Statistical Review. 49 (3): 319–331. doi:10.2307/1402616. JSTOR 1402616. He derives a recursive procedure for estimating the regression component and predicting the Brownian motion. The procedure is now known as Kalman filtering. 19. Lauritzen, S. L. (2002). Thiele: Pioneer in Statistics. New York: Oxford University Press. p. 41. ISBN 978-0-19-850972-1. He solves the problem of estimating the regression coefficients and predicting the values of the Brownian motion by the method of least squares and gives an elegant recursive procedure for carrying out the calculations. The procedure is nowadays known as Kalman filtering. 20. "Mohinder S. Grewal and Angus P. Andrews" (PDF). Archived from the original (PDF) on 2016-03-07. Retrieved 2015-04-23. 21. Jerrold H. Suddath; Robert H. Kidd; Arnold G. Reinhold (August 1967). A Linearized Error Analysis Of Onboard Primary Navigation Systems For The Apollo Lunar Module, NASA TN D-4027 (PDF). 22. Gaylor, David; Lightsey, E. Glenn (2003). "GPS/INS Kalman Filter Design for Spacecraft Operating in the Proximity of International Space Station". AIAA Guidance, Navigation, and Control Conference and Exhibit. doi:10.2514/6.2003-5445. ISBN 978-1-62410-090-1. 23. Ingvar Strid; Karl Walentin (April 2009). "Block Kalman Filtering for Large-Scale DSGE Models". Computational Economics. 33 (3): 277–304. CiteSeerX 10.1.1.232.3790. doi:10.1007/s10614-008-9160-4. hdl:10419/81929. S2CID 3042206. 24. Martin Møller Andreasen (2008). "Non-linear DSGE Models, The Central Difference Kalman Filter, and The Mean Shifted Particle Filter" (PDF). 25. Roweis, S; Ghahramani, Z (1999). "A unifying review of linear gaussian models" (PDF). Neural Computation. 11 (2): 305–45. doi:10.1162/089976699300016674. PMID 9950734. S2CID 2590898. 26. Hamilton, J.
(1994), Time Series Analysis, Princeton University Press. Chapter 13, 'The Kalman Filter' 27. Ishihara, J.Y.; Terra, M.H.; Campos, J.C.T. (2006). "Robust Kalman Filter for Descriptor Systems". IEEE Transactions on Automatic Control. 51 (8): 1354. doi:10.1109/TAC.2006.878741. S2CID 12741796. 28. Terra, Marco H.; Cerri, Joao P.; Ishihara, Joao Y. (2014). "Optimal Robust Linear Quadratic Regulator for Systems Subject to Uncertainties". IEEE Transactions on Automatic Control. 59 (9): 2586–2591. doi:10.1109/TAC.2014.2309282. S2CID 8810105. 29. Kelly, Alonzo (1994). "A 3D state space formulation of a navigation Kalman filter for autonomous vehicles" (PDF). DTIC Document: 13. Archived (PDF) from the original on December 30, 2014. 2006 Corrected Version Archived 2017-01-10 at the Wayback Machine 30. Reid, Ian; Term, Hilary. "Estimation II" (PDF). www.robots.ox.ac.uk. Oxford University. Retrieved 6 August 2014. 31. Rajamani, Murali (October 2007). Data-based Techniques to Improve State Estimation in Model Predictive Control (PDF) (PhD Thesis). University of Wisconsin–Madison. Archived from the original (PDF) on 2016-03-04. Retrieved 2011-04-04. 32. Rajamani, Murali R.; Rawlings, James B. (2009). "Estimation of the disturbance structure from data using semidefinite programming and optimal weighting". Automatica. 45 (1): 142–148. doi:10.1016/j.automatica.2008.05.032. S2CID 5699674. 33. "Autocovariance Least-Squares Toolbox". Jbrwww.che.wisc.edu. Retrieved 2021-08-18. 34. Bania, P.; Baranowski, J. (12 December 2016). Field Kalman Filter and its approximation. IEEE 55th Conference on Decision and Control (CDC). Las Vegas, NV, USA: IEEE. pp. 2875–2880. 35. Bar-Shalom, Yaakov; Li, X.-Rong; Kirubarajan, Thiagalingam (2001). Estimation with Applications to Tracking and Navigation. New York, USA: John Wiley & Sons, Inc. pp. 319 ff. doi:10.1002/0471221279. ISBN 0-471-41655-X. 36. Three optimality tests with numerical examples are described in Peter, Matisko (2012). "Optimality Tests and Adaptive Kalman Filter". 16th IFAC Symposium on System Identification. pp. 1523–1528. doi:10.3182/20120711-3-BE-2027.00011. ISBN 978-3-902823-06-9. 37. Spall, James C. (1995). "The Kantorovich inequality for error analysis of the Kalman filter with unknown noise distributions". Automatica. 31 (10): 1513–1517. doi:10.1016/0005-1098(95)00069-9. 38. Maryak, J.L.; Spall, J.C.; Heydon, B.D. (2004). "Use of the Kalman Filter for Inference in State-Space Models with Unknown Noise Distributions". IEEE Transactions on Automatic Control. 49: 87–90. doi:10.1109/TAC.2003.821415. S2CID 21143516. 39. Walrand, Jean; Dimakis, Antonis (August 2006). Random processes in Systems -- Lecture Notes (PDF). pp. 69–70. 40. Sant, Donald T. "Generalized least squares applied to time varying parameter models." Annals of Economic and Social Measurement, Volume 6, number 3. NBER, 1977. 301-314. Online Pdf 41. Anderson, Brian D. O.; Moore, John B. (1979). Optimal Filtering. New York: Prentice Hall. pp. 129–133. ISBN 978-0-13-638122-8. 42. Jingyang Lu. "False information injection attack on dynamic state estimation in multi-sensor systems", Fusion 2014 43. Thornton, Catherine L. (15 October 1976). Triangular Covariance Factorizations for Kalman Filtering (PhD). NASA. NASA Technical Memorandum 33-798. 44. Bierman, G.J. (1977). "Factorization Methods for Discrete Sequential Estimation". Factorization Methods for Discrete Sequential Estimation. Bibcode:1977fmds.book.....B. 45. Bar-Shalom, Yaakov; Li, X.
Rong; Kirubarajan, Thiagalingam (July 2001). Estimation with Applications to Tracking and Navigation. New York: John Wiley & Sons. pp. 308–317. ISBN 978-0-471-41655-5. 46. Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences (Third ed.). Baltimore, Maryland: Johns Hopkins University. p. 139. ISBN 978-0-8018-5414-9. 47. Higham, Nicholas J. (2002). Accuracy and Stability of Numerical Algorithms (Second ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. p. 680. ISBN 978-0-89871-521-7. 48. Särkkä, S.; Ángel F. García-Fernández (2021). "Temporal Parallelization of Bayesian Smoothers". IEEE Transactions on Automatic Control. 66 (1): 299–306. arXiv:1905.13002. doi:10.1109/TAC.2020.2976316. S2CID 213695560. 49. "Parallel Prefix Sum (Scan) with CUDA". developer.nvidia.com/. Retrieved 2020-02-21. The scan operation is a simple and powerful parallel primitive with a broad range of applications. In this chapter we have explained an efficient implementation of scan using CUDA, which achieves a significant speedup compared to a sequential implementation on a fast CPU, and compared to a parallel implementation in OpenGL on the same GPU. Due to the increasing power of commodity parallel processors such as GPUs, we expect to see data-parallel algorithms such as scan to increase in importance over the coming years. 50. Masreliez, C. Johan; Martin, R D (1977). "Robust Bayesian estimation for the linear model and robustifying the Kalman filter". IEEE Transactions on Automatic Control. 22 (3): 361–371. doi:10.1109/TAC.1977.1101538. 51. Lütkepohl, Helmut (1991). Introduction to Multiple Time Series Analysis. Heidelberg: Springer-Verlag Berlin. p. 435. 52. Gabriel T. Terejanu (2012-08-04). "Discrete Kalman Filter Tutorial" (PDF). Retrieved 2016-04-13. 53. Anderson, Brian D. O.; Moore, John B. (1979). Optimal Filtering. Englewood Cliffs, NJ: Prentice Hall, Inc. pp. 176–190. ISBN 978-0-13-638122-8. 54. Rauch, H.E.; Tung, F.; Striebel, C. T. (August 1965). "Maximum likelihood estimates of linear dynamic systems". AIAA Journal. 3 (8): 1445–1450. Bibcode:1965AIAAJ...3.1445.. doi:10.2514/3.3166. 55. Einicke, G.A. (March 2006). "Optimal and Robust Noncausal Filter Formulations". IEEE Transactions on Signal Processing. 54 (3): 1069–1077. Bibcode:2006ITSP...54.1069E. doi:10.1109/TSP.2005.863042. S2CID 15376718. 56. Einicke, G.A. (April 2007). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Transactions on Signal Processing. 55 (4): 1543–1547. Bibcode:2007ITSP...55.1543E. doi:10.1109/TSP.2006.889402. S2CID 16218530. 57. Einicke, G.A.; Ralston, J.C.; Hargrave, C.O.; Reid, D.C.; Hainsworth, D.W. (December 2008). "Longwall Mining Automation. An Application of Minimum-Variance Smoothing". IEEE Control Systems Magazine. 28 (6): 28–37. doi:10.1109/MCS.2008.929281. S2CID 36072082. 58. Einicke, G.A. (December 2009). "Asymptotic Optimality of the Minimum-Variance Fixed-Interval Smoother". IEEE Transactions on Automatic Control. 54 (12): 2904–2908. Bibcode:2007ITSP...55.1543E. doi:10.1109/TSP.2006.889402. S2CID 16218530. 59. Einicke, G.A. (December 2014). "Iterative Frequency-Weighted Filtering and Smoothing Procedures". IEEE Signal Processing Letters. 21 (12): 1467–1470. Bibcode:2014ISPL...21.1467E. doi:10.1109/LSP.2014.2341641. S2CID 13569109. 60. Biswas, Sanat K.; Qiao, Li; Dempster, Andrew G. (2020-12-01). 
"A quantified approach of predicting suitability of using the Unscented Kalman Filter in a non-linear application". Automatica. 122: 109241. doi:10.1016/j.automatica.2020.109241. ISSN 0005-1098. S2CID 225028760. 61. Julier, Simon J.; Uhlmann, Jeffrey K. (2004). "Unscented filtering and nonlinear estimation". Proceedings of the IEEE. 92 (3): 401–422. doi:10.1109/JPROC.2003.823141. S2CID 9614092. 62. Julier, Simon J.; Uhlmann, Jeffrey K. (1997). "New extension of the Kalman filter to nonlinear systems" (PDF). In Kadar, Ivan (ed.). Signal Processing, Sensor Fusion, and Target Recognition VI. Proceedings of SPIE. Vol. 3. pp. 182–193. Bibcode:1997SPIE.3068..182J. CiteSeerX 10.1.1.5.2891. doi:10.1117/12.280797. S2CID 7937456. Retrieved 2008-05-03. 63. Menegaz, H. M. T.; Ishihara, J. Y.; Borges, G. A.; Vargas, A. N. (October 2015). "A Systematization of the Unscented Kalman Filter Theory". IEEE Transactions on Automatic Control. 60 (10): 2583–2598. doi:10.1109/tac.2015.2404511. hdl:20.500.11824/251. ISSN 0018-9286. S2CID 12606055. 64. Gustafsson, Fredrik; Hendeby, Gustaf (2012). "Some Relations Between Extended and Unscented Kalman Filters". IEEE Transactions on Signal Processing. 60 (2): 545–555. Bibcode:2012ITSP...60..545G. doi:10.1109/tsp.2011.2172431. S2CID 17876531. 65. Van der Merwe, R.; Wan, E.A. (2001). "The square-root unscented Kalman filter for state and parameter-estimation". 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221). Vol. 6. pp. 3461–3464. doi:10.1109/ICASSP.2001.940586. ISBN 0-7803-7041-4. S2CID 7290857. 66. Bitzer, S. (2016). "The UKF exposed: How it works, when it works and when it's better to sample". doi:10.5281/zenodo.44386. {{cite journal}}: Cite journal requires |journal= (help) 67. Wan, E.A.; Van Der Merwe, R. (2000). "The unscented Kalman filter for nonlinear estimation" (PDF). Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373). p. 153. CiteSeerX 10.1.1.361.9373. doi:10.1109/ASSPCC.2000.882463. ISBN 978-0-7803-5800-3. S2CID 13992571. 68. Sarkka, Simo (September 2007). "On Unscented Kalman Filtering for State Estimation of Continuous-Time Nonlinear Systems". IEEE Transactions on Automatic Control. 52 (9): 1631–1641. doi:10.1109/TAC.2007.904453. 69. Burkhart, Michael C.; Brandman, David M.; Franco, Brian; Hochberg, Leigh; Harrison, Matthew T. (2020). "The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models". Neural Computation. 32 (5): 969–1017. doi:10.1162/neco_a_01275. PMC 8259355. PMID 32187000. S2CID 212748230. Retrieved 26 March 2021. 70. Burkhart, Michael C. (2019). A Discriminative Approach to Bayesian Filtering with Applications to Human Neural Decoding (Thesis). Providence, RI, USA: Brown University. doi:10.26300/nhfp-xv22. Retrieved 26 March 2021. 71. Brandman, David M.; Burkhart, Michael C.; Kelemen, Jessica; Franco, Brian; Harrison, Matthew T.; Hochberg, Leigh R. (2018). "Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression". Neural Computation. 30 (11): 2986–3008. doi:10.1162/neco_a_01129. PMC 6685768. PMID 30216140. Retrieved 26 March 2021. 72. Bar-Shalom, Yaakov; Li, X.-Rong; Kirubarajan, Thiagalingam (2001). Estimation with Applications to Tracking and Navigation. New York, USA: John Wiley & Sons, Inc. pp. 421 ff. doi:10.1002/0471221279. ISBN 0-471-41655-X. 73. Bucy, R.S. 
and Joseph, P.D., Filtering for Stochastic Processes with Applications to Guidance, John Wiley & Sons, 1968; 2nd Edition, AMS Chelsea Publ., 2005. ISBN 0-8218-3782-6 74. Jazwinski, Andrew H., Stochastic processes and filtering theory, Academic Press, New York, 1970. ISBN 0-12-381550-9 75. Kailath, T. (1968). "An innovations approach to least-squares estimation--Part I: Linear filtering in additive white noise". IEEE Transactions on Automatic Control. 13 (6): 646–655. doi:10.1109/TAC.1968.1099025. 76. Vaswani, Namrata (2008). "Kalman filtered Compressed Sensing". 2008 15th IEEE International Conference on Image Processing. pp. 893–896. arXiv:0804.0819. doi:10.1109/ICIP.2008.4711899. ISBN 978-1-4244-1765-0. S2CID 9282476. 77. Carmi, Avishy; Gurfil, Pini; Kanevsky, Dimitri (2010). "Methods for sparse signal recovery using Kalman filtering with embedded pseudo-measurement norms and quasi-norms". IEEE Transactions on Signal Processing. 58 (4): 2405–2409. Bibcode:2010ITSP...58.2405C. doi:10.1109/TSP.2009.2038959. S2CID 10569233. 78. Zachariah, Dave; Chatterjee, Saikat; Jansson, Magnus (2012). "Dynamic Iterative Pursuit". IEEE Transactions on Signal Processing. 60 (9): 4967–4972. arXiv:1206.2496. Bibcode:2012ITSP...60.4967Z. doi:10.1109/TSP.2012.2203813. S2CID 18467024. 79. Särkkä, Simo; Hartikainen, Jouni; Svensson, Lennart; Sandblom, Fredrik (2015-04-22). "On the relation between Gaussian process quadratures and sigma-point methods". arXiv:1504.05994 [stat.ME]. 80. Vasebi, Amir; Partovibakhsh, Maral; Bathaee, S. Mohammad Taghi (2007). "A novel combined battery model for state-of-charge estimation in lead-acid batteries based on extended Kalman filter for hybrid electric vehicle applications". Journal of Power Sources. 174 (1): 30–40. Bibcode:2007JPS...174...30V. doi:10.1016/j.jpowsour.2007.04.011. 81. Vasebi, A.; Bathaee, S.M.T.; Partovibakhsh, M. (2008). "Predicting state of charge of lead-acid batteries for hybrid electric vehicles by extended Kalman filter". Energy Conversion and Management. 49: 75–82. doi:10.1016/j.enconman.2007.05.017. 82. Fruhwirth, R. (1987). "Application of Kalman filtering to track and vertex fitting". Nuclear Instruments and Methods in Physics Research Section A. 262 (2–3): 444–450. Bibcode:1987NIMPA.262..444F. doi:10.1016/0168-9002(87)90887-4. 83. Harvey, Andrew C. (1994). "Applications of the Kalman filter in econometrics". In Bewley, Truman (ed.). Advances in Econometrics. New York: Cambridge University Press. pp. 285f. ISBN 978-0-521-46726-1. 84. Boulfelfel, D.; Rangayyan, R.M.; Hahn, L.J.; Kloiber, R.; Kuduvalli, G.R. (1994). "Two-dimensional restoration of single photon emission computed tomography images using the Kalman filter". IEEE Transactions on Medical Imaging. 13 (1): 102–109. doi:10.1109/42.276148. PMID 18218487. 85. Bock, Y.; Crowell, B.; Webb, F.; Kedar, S.; Clayton, R.; Miyahara, B. (2008). "Fusion of High-Rate GPS and Seismic Data: Applications to Early Warning Systems for Mitigation of Geological Hazards". AGU Fall Meeting Abstracts. 43: G43B–01. Bibcode:2008AGUFM.G43B..01B. 86. Wolpert, D. M.; Miall, R. C. (1996). "Forward Models for Physiological Motor Control". Neural Networks. 9 (8): 1265–1279. doi:10.1016/S0893-6080(96)00035-4. PMID 12662535. Further reading • Einicke, G.A. (2019). Smoothing, Filtering and Prediction: Estimating the Past, Present and Future (2nd ed.). Amazon Prime Publishing. ISBN 978-0-6485115-0-2. • Jinya Su; Baibing Li; Wen-Hua Chen (2015). 
"On existence, optimality and asymptotic stability of the Kalman filter with partially observed inputs". Automatica. 53: 149–154. doi:10.1016/j.automatica.2014.12.044. • Gelb, A. (1974). Applied Optimal Estimation. MIT Press. • Kalman, R.E. (1960). "A new approach to linear filtering and prediction problems" (PDF). Journal of Basic Engineering. 82 (1): 35–45. doi:10.1115/1.3662552. S2CID 1242324. Archived from the original (PDF) on 2008-05-29. Retrieved 2008-05-03. • Kalman, R.E.; Bucy, R.S. (1961). "New Results in Linear Filtering and Prediction Theory". Journal of Basic Engineering. 83: 95–108. CiteSeerX 10.1.1.361.6851. doi:10.1115/1.3658902. S2CID 8141345. • Harvey, A.C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press. ISBN 9780521405737. • Roweis, S.; Ghahramani, Z. (1999). "A Unifying Review of Linear Gaussian Models" (PDF). Neural Computation. 11 (2): 305–345. doi:10.1162/089976699300016674. PMID 9950734. S2CID 2590898. • Simon, D. (2006). Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. Wiley-Interscience. • Warwick, K. (1987). "Optimal observers for ARMA models". International Journal of Control. 46 (5): 1493–1503. doi:10.1080/00207178708933989. • Bierman, G.J. (1977). Factorization Methods for Discrete Sequential Estimation. ISBN 978-0-486-44981-4. {{cite book}}: |journal= ignored (help) • Bozic, S.M. (1994). Digital and Kalman filtering. Butterworth–Heinemann. • Haykin, S. (2002). Adaptive Filter Theory. Prentice Hall. • Liu, W.; Principe, J.C. and Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. John Wiley.{{cite book}}: CS1 maint: multiple names: authors list (link) • Manolakis, D.G. (1999). Statistical and Adaptive signal processing. Artech House. • Welch, Greg; Bishop, Gary (1997). "SCAAT: incremental tracking with incomplete information" (PDF). SIGGRAPH '97 Proceedings of the 24th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co. pp. 333–344. doi:10.1145/258734.258876. ISBN 978-0-89791-896-1. S2CID 1512754. • Jazwinski, Andrew H. (1970). Stochastic Processes and Filtering. Mathematics in Science and Engineering. New York: Academic Press. p. 376. ISBN 978-0-12-381550-7. • Maybeck, Peter S. (1979). "Chapter 1" (PDF). Stochastic Models, Estimation, and Control. Mathematics in Science and Engineering. Vol. 141–1. New York: Academic Press. ISBN 978-0-12-480701-3. • Moriya, N. (2011). Primer to Kalman Filtering: A Physicist Perspective. New York: Nova Science Publishers, Inc. ISBN 978-1-61668-311-5. • Dunik, J.; Simandl M.; Straka O. (2009). "Methods for Estimating State and Measurement Noise Covariance Matrices: Aspects and Comparison". 15th IFAC Symposium on System Identification, 2009. 15th IFAC Symposium on System Identification, 2009. France. pp. 372–377. doi:10.3182/20090706-3-FR-2004.00061. ISBN 978-3-902661-47-0.{{cite book}}: CS1 maint: location missing publisher (link) • Chui, Charles K.; Chen, Guanrong (2009). Kalman Filtering with Real-Time Applications. Springer Series in Information Sciences. Vol. 17 (4th ed.). New York: Springer. p. 229. ISBN 978-3-540-87848-3. • Spivey, Ben; Hedengren, J. D. and Edgar, T. F. (2010). "Constrained Nonlinear Estimation for Industrial Process Fouling". Industrial & Engineering Chemistry Research. 49 (17): 7824–7831. doi:10.1021/ie9018116.{{cite journal}}: CS1 maint: multiple names: authors list (link) • Thomas Kailath; Ali H. Sayed; Babak Hassibi (2000). Linear Estimation. 
NJ: Prentice–Hall. ISBN 978-0-13-022464-4. • Ali H. Sayed (2008). Adaptive Filters. NJ: Wiley. ISBN 978-0-470-25388-5. External links • A New Approach to Linear Filtering and Prediction Problems, by R. E. Kalman, 1960 • Kalman and Bayesian Filters in Python. Open source Kalman filtering textbook. • How a Kalman filter works, in pictures. Illuminates the Kalman filter with pictures and colors • Kalman–Bucy Filter, a derivation of the Kalman–Bucy Filter • MIT Video Lecture on the Kalman filter on YouTube • An Introduction to the Kalman Filter, SIGGRAPH 2001 Course, Greg Welch and Gary Bishop • Kalman Filter webpage, with many links • Kalman Filter Explained Simply, Step-by-Step Tutorial of the Kalman Filter with Equations • "Kalman filters used in Weather models" (PDF). SIAM News. 36 (8). October 2003. Archived from the original (PDF) on 2011-05-17. Retrieved 2007-01-27. • Haseltine, Eric L.; Rawlings, James B. (2005). "Critical Evaluation of Extended Kalman Filtering and Moving-Horizon Estimation". Industrial & Engineering Chemistry Research. 44 (8): 2451. doi:10.1021/ie034308l. • Gerald J. Bierman's Estimation Subroutine Library: Corresponds to the code in the research monograph "Factorization Methods for Discrete Sequential Estimation" originally published by Academic Press in 1977. Republished by Dover. • Matlab Toolbox implementing parts of Gerald J. Bierman's Estimation Subroutine Library: UD / UDU' and LD / LDL' factorization with associated time and measurement updates making up the Kalman filter. • Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping: Vehicle moving in 1D, 2D and 3D • The Kalman Filter in Reproducing Kernel Hilbert Spaces A comprehensive introduction. • Matlab code to estimate Cox–Ingersoll–Ross interest rate model with Kalman Filter Archived 2014-02-09 at the Wayback Machine: Corresponds to the paper "estimating and testing exponential-affine term structure models by kalman filter" published by Review of Quantitative Finance and Accounting in 1999. • Online demo of the Kalman Filter. Demonstration of Kalman Filter (and other data assimilation methods) using twin experiments. • Botella, Guillermo; Martín h., José Antonio; Santos, Matilde; Meyer-Baese, Uwe (2011). "FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision". Sensors. 11 (12): 1251–1259. Bibcode:2011Senso..11.8164B. doi:10.3390/s110808164. PMC 3231703. PMID 22164069. • Examples and how-to on using Kalman Filters with MATLAB A Tutorial on Filtering and Estimation • Explaining Filtering (Estimation) in One Hour, Ten Minutes, One Minute, and One Sentence by Yu-Chi Ho • Simo Särkkä (2013). "Bayesian Filtering and Smoothing". Cambridge University Press. Full text available on author's webpage https://users.aalto.fi/~ssarkka/.
Wikipedia
Stratonovich integral In stochastic processes, the Stratonovich integral or Fisk–Stratonovich integral (developed simultaneously by Ruslan Stratonovich and Donald Fisk) is a stochastic integral, the most common alternative to the Itô integral. Although the Itô integral is the usual choice in applied mathematics, the Stratonovich integral is frequently used in physics. In some circumstances, integrals in the Stratonovich definition are easier to manipulate. Unlike the Itô calculus, Stratonovich integrals are defined such that the chain rule of ordinary calculus holds. Perhaps the most common situation in which these are encountered is as the solution to Stratonovich stochastic differential equations (SDEs). These are equivalent to Itô SDEs and it is possible to convert between the two whenever one definition is more convenient. Definition The Stratonovich integral can be defined in a manner similar to the Riemann integral, that is as a limit of Riemann sums. Suppose that $W:[0,T]\times \Omega \to \mathbb {R} $ is a Wiener process and $X:[0,T]\times \Omega \to \mathbb {R} $ is a semimartingale adapted to the natural filtration of the Wiener process. Then the Stratonovich integral $\int _{0}^{T}X_{t}\circ \mathrm {d} W_{t}$ is a random variable $\Omega \to \mathbb {R} $ defined as the limit in mean square of[1] $\sum _{i=0}^{k-1}{X_{t_{i+1}}+X_{t_{i}} \over 2}\left(W_{t_{i+1}}-W_{t_{i}}\right)$ as the mesh of the partition $0=t_{0}<t_{1}<\dots <t_{k}=T$ of $[0,T]$ tends to 0 (in the style of a Riemann–Stieltjes integral). Calculation Many integration techniques of ordinary calculus can be used for the Stratonovich integral, e.g.: if $f:\mathbb {R} \to \mathbb {R} $ is a smooth function, then $\int _{0}^{T}f'(W_{t})\circ \mathrm {d} W_{t}=f(W_{T})-f(W_{0})$ and more generally, if $f:\mathbb {R} \times \mathbb {R} \to \mathbb {R} $ is a smooth function, then $\int _{0}^{T}{\partial f \over \partial W}(W_{t},t)\circ \mathrm {d} W_{t}+\int _{0}^{T}{\partial f \over \partial t}(W_{t},t)\,\mathrm {d} t=f(W_{T},T)-f(W_{0},0).$ This latter rule is akin to the chain rule of ordinary calculus. Numerical methods Stochastic integrals can rarely be solved in analytic form, making stochastic numerical integration an important topic in all uses of stochastic integrals. Various numerical approximations converge to the Stratonovich integral, and variations of these are used to solve Stratonovich SDEs (Kloeden & Platen 1992). Note however that the most widely used Euler scheme (the Euler–Maruyama method) for the numeric solution of Langevin equations requires the equation to be in Itô form.[2] Differential notation If $X_{t},Y_{t}$, and $Z_{t}$ are stochastic processes such that $X_{T}-X_{0}=\int _{0}^{T}Y_{t}\circ \mathrm {d} W_{t}+\int _{0}^{T}Z_{t}\,\mathrm {d} t$ for all $T>0$, we also write $\mathrm {d} X=Y\circ \mathrm {d} W+Z\,\mathrm {d} t.$ This notation is often used to formulate stochastic differential equations (SDEs), which are really equations about stochastic integrals. It is compatible with the notation from ordinary calculus, for instance $\mathrm {d} (t^{2}\,W^{3})=3t^{2}W^{2}\circ \mathrm {d} W+2tW^{3}\,\mathrm {d} t.$ Comparison with the Itô integral Main article: Itô calculus The Itô integral of the process $X$ with respect to the Wiener process $W$ is denoted by $\int _{0}^{T}X_{t}\,\mathrm {d} W_{t}$ (without the circle).
For its definition, the same procedure is used as above in the definition of the Stratonovich integral, except for choosing the value of the process $X$ at the left-hand endpoint of each subinterval, i.e., $X_{t_{i}}$ in place of $(X_{t_{i+1}}+X_{t_{i}})/2$. This integral does not obey the ordinary chain rule as the Stratonovich integral does; instead one has to use the slightly more complicated Itô's lemma. Conversion between Itô and Stratonovich integrals may be performed using the formula $\int _{0}^{T}f(W_{t},t)\circ \mathrm {d} W_{t}={\frac {1}{2}}\int _{0}^{T}{\partial f \over \partial W}(W_{t},t)\,\mathrm {d} t+\int _{0}^{T}f(W_{t},t)\,\mathrm {d} W_{t},$ where $f$ is any continuously differentiable function of two variables $W$ and $t$ and the last integral is an Itô integral (Kloeden & Platen 1992, p. 101). Langevin equations exemplify the importance of specifying the interpretation (Stratonovich or Itô) in a given problem. Suppose $X_{t}$ is a time-homogeneous Itô diffusion with continuously differentiable diffusion coefficient $\sigma $, i.e. it satisfies the SDE $\mathrm {d} X_{t}=\mu (X_{t})\,\mathrm {d} t+\sigma (X_{t})\,\mathrm {d} W_{t}$. In order to get the corresponding Stratonovich version, the term $\sigma (X_{t})\,\mathrm {d} W_{t}$ (in Itô interpretation) should translate to $\sigma (X_{t})\circ \mathrm {d} W_{t}$ (in Stratonovich interpretation) as $\int _{0}^{T}\sigma (X_{t})\circ \mathrm {d} W_{t}={\frac {1}{2}}\int _{0}^{T}{\frac {d\sigma }{dx}}(X_{t})\sigma (X_{t})\,\mathrm {d} t+\int _{0}^{T}\sigma (X_{t})\,\mathrm {d} W_{t}.$ Obviously, if $\sigma $ is independent of $X_{t}$, the two interpretations will lead to the same form for the Langevin equation. In that case, the noise term is called "additive" (since the noise term $dW_{t}$ is multiplied by only a fixed coefficient). Otherwise, if $\sigma =\sigma (X_{t})$, the Langevin equation in Itô form may in general differ from that in Stratonovich form, in which case the noise term is called multiplicative (i.e., the noise $dW_{t}$ is multiplied by a function of $X_{t}$, namely $\sigma (X_{t})$). More generally, for any two semimartingales $X$ and $Y$ $\int _{0}^{T}X_{s-}\circ \mathrm {d} Y_{s}=\int _{0}^{T}X_{s-}\,\mathrm {d} Y_{s}+{\frac {1}{2}}[X,Y]_{T}^{c},$ where $[X,Y]_{T}^{c}$ is the continuous part of the covariation. Stratonovich integrals in applications The Stratonovich integral lacks the important property of the Itô integral, which does not "look into the future". In many real-world applications, such as modelling stock prices, one only has information about past events, and hence the Itô interpretation is more natural. In financial mathematics the Itô interpretation is usually used. In physics, however, stochastic integrals occur as the solutions of Langevin equations. A Langevin equation is a coarse-grained version of a more microscopic model; depending on the problem under consideration, the Stratonovich or Itô interpretation, or even more exotic interpretations such as the isothermal interpretation, may be appropriate. The Stratonovich interpretation is the most frequently used interpretation within the physical sciences. The Wong–Zakai theorem states that physical systems with a non-white noise spectrum, characterized by a finite noise correlation time $\tau $, can be approximated by Langevin equations with white noise in the Stratonovich interpretation in the limit where $\tau $ tends to zero.
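The conversion formula can be checked numerically. The following is a minimal sketch (not part of the original article; it assumes NumPy and arbitrary choices of seed, horizon and step count) that approximates $\int _{0}^{T}W_{t}\circ \mathrm {d} W_{t}$ with midpoint-type sums and $\int _{0}^{T}W_{t}\,\mathrm {d} W_{t}$ with left-endpoint sums; the chain rule gives $W_{T}^{2}/2$ for the former, Itô's lemma gives $(W_{T}^{2}-T)/2$ for the latter, and the difference of the two sums reproduces the correction term $T/2$ obtained from the conversion formula with $f(W,t)=W$.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed, for reproducibility only
T, n = 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Wiener increments
W = np.concatenate(([0.0], np.cumsum(dW)))  # W_0 = 0

# Stratonovich-type sum: average of the two endpoint values on each subinterval.
strat = np.sum(0.5 * (W[1:] + W[:-1]) * dW)
# Itô-type sum: left endpoint on each subinterval.
ito = np.sum(W[:-1] * dW)

print(strat, 0.5 * W[-1] ** 2)            # Stratonovich: W_T^2 / 2 (chain rule)
print(ito, 0.5 * W[-1] ** 2 - 0.5 * T)    # Itô: (W_T^2 - T) / 2 (Itô's lemma)
print(strat - ito, 0.5 * T)               # correction term, approximately T/2
```

The Stratonovich sum telescopes exactly to $W_{T}^{2}/2$ for this integrand; the Itô sum differs from it by $\tfrac{1}{2}\sum (\Delta W)^{2}\approx T/2$, which is precisely the conversion term.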
Because the Stratonovich calculus satisfies the ordinary chain rule, stochastic differential equations (SDEs) in the Stratonovich sense are more straightforward to define on differentiable manifolds, rather than just on $\mathbb {R} ^{n}$. The tricky chain rule of the Itô calculus makes it a more awkward choice for manifolds. Stratonovich interpretation and supersymmetric theory of SDEs Main article: Supersymmetric theory of stochastic dynamics In the supersymmetric theory of SDEs, one considers the evolution operator obtained by averaging the pullback induced on the exterior algebra of the phase space by the stochastic flow determined by an SDE. In this context, it is then natural to use the Stratonovich interpretation of SDEs. Notes 1. Gardiner (2004), p. 98 and the comment on p. 101 2. Perez-Carrasco R.; Sancho J.M. (2010). "Stochastic algorithms for discontinuous multiplicative white noise" (PDF). Phys. Rev. E. 81 (3): 032104. Bibcode:2010PhRvE..81c2104P. doi:10.1103/PhysRevE.81.032104. PMID 20365796. References • Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin. ISBN 3-540-04758-1. • Gardiner, Crispin W. (2004). Handbook of Stochastic Methods (3 ed.). Springer, Berlin Heidelberg. ISBN 3-540-20882-8. • Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical finance: The early years, 1880–1970". IMS Lecture Notes Monograph. 45: 1–17. CiteSeerX 10.1.1.114.632. • Kloeden, Peter E.; Platen, Eckhard (1992). Numerical solution of stochastic differential equations. Applications of Mathematics. Berlin, New York: Springer-Verlag. ISBN 978-3-540-54062-5.
Wikipedia
Streamlines, streaklines, and pathlines Streamlines, streaklines and pathlines are field lines in a fluid flow. They differ only when the flow changes with time, that is, when the flow is not steady.[1][2] Considering a velocity vector field in three-dimensional space in the framework of continuum mechanics, we have that: • Streamlines are a family of curves whose tangent vectors constitute the velocity vector field of the flow. These show the direction in which a massless fluid element will travel at any point in time.[3] • Streaklines are the loci of points of all the fluid particles that have passed continuously through a particular spatial point in the past. Dye steadily injected into the fluid at a fixed point extends along a streakline. • Pathlines are the trajectories that individual fluid particles follow. These can be thought of as "recording" the path of a fluid element in the flow over a certain period. The direction the path takes will be determined by the streamlines of the fluid at each moment in time. • Timelines are the lines formed by a set of fluid particles that were marked at a previous instant in time, creating a line or a curve that is displaced in time as the particles move. By definition, different streamlines at the same instant in a flow do not intersect, because a fluid particle cannot have two different velocities at the same point. However, pathlines are allowed to intersect themselves or other pathlines (except the starting and end points of the different pathlines, which need to be distinct). Streaklines can also intersect themselves and other streaklines. Streamlines and timelines provide a snapshot of some flowfield characteristics, whereas streaklines and pathlines depend on the full time-history of the flow. However, often sequences of timelines (and streaklines) at different instants—being presented either in a single image or with a video stream—may be used to provide insight into the flow and its history. If a line, curve or closed curve is used as the starting point for a continuous set of streamlines, the result is a stream surface. In the case of a closed curve in a steady flow, fluid that is inside a stream surface must remain forever within that same stream surface, because the streamlines are tangent to the flow velocity. A scalar function whose contour lines define the streamlines is known as the stream function. Dye line may refer either to a streakline: dye released gradually from a fixed location over time; or it may refer to a timeline: a line of dye applied instantaneously at a certain moment in time, and observed at a later instant. Mathematical description Streamlines Streamlines are defined by[4] ${d{\vec {x}}_{S} \over ds}\times {\vec {u}}({\vec {x}}_{S})=0,$ where "$\times $" denotes the vector cross product and ${\vec {x}}_{S}(s)$ is the parametric representation of just one streamline at one moment in time. If the components of the velocity are written ${\vec {u}}=(u,v,w),$ and those of the streamline as ${\vec {x}}_{S}=(x_{S},y_{S},z_{S}),$ we deduce[4] ${dx_{S} \over u}={dy_{S} \over v}={dz_{S} \over w},$ which shows that the curves are parallel to the velocity vector. Here $s$ is a variable which parametrizes the curve $s\mapsto {\vec {x}}_{S}(s).$ Streamlines are calculated instantaneously, meaning that at one instance of time they are calculated throughout the fluid from the instantaneous flow velocity field. A streamtube consists of a bundle of streamlines, much like a communication cable is a bundle of optical fibers.
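To make these definitions concrete, the sketch below (not part of the original text; it assumes NumPy, a toy unsteady velocity field ${\vec u}(x,y,t)=(1,t)$ chosen for simplicity, and a basic Euler integrator) traces the streamline through the origin at the frozen time $t_{0}=0$ and the pathline released from the same point; because the field is unsteady, the two curves differ, in line with the definitions above.

```python
import numpy as np

def u(x, y, t):
    # Assumed toy velocity field: unsteady, so streamlines and pathlines differ.
    return np.array([1.0, t])

def streamline(x0, t0, ds=0.01, steps=200):
    # Curve tangent to the velocity field at the *fixed* time t0.
    pts = [np.array(x0, float)]
    for _ in range(steps):
        pts.append(pts[-1] + ds * u(*pts[-1], t0))
    return np.array(pts)

def pathline(x0, t0, dt=0.01, steps=200):
    # Trajectory of a fluid particle as time advances.
    pts, t = [np.array(x0, float)], t0
    for _ in range(steps):
        pts.append(pts[-1] + dt * u(*pts[-1], t))
        t += dt
    return np.array(pts)

s = streamline((0.0, 0.0), t0=0.0)  # straight horizontal line: u = (1, 0) at t = 0
p = pathline((0.0, 0.0), t0=0.0)    # parabola-like curve: the y-velocity grows with t
print(s[-1], p[-1])                 # clearly different endpoints
```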
The equation of motion of a fluid on a streamline for a flow in a vertical plane is:[5] ${\frac {\partial c}{\partial t}}+c{\frac {\partial c}{\partial s}}=\nu {\frac {\partial ^{2}c}{\partial r^{2}}}-{\frac {1}{\rho }}{\frac {\partial p}{\partial s}}-g{\frac {\partial z}{\partial s}}$ The flow velocity in the direction $s$ of the streamline is denoted by $c$. $r$ is the radius of curvature of the streamline. The density of the fluid is denoted by $\rho $ and the kinematic viscosity by $\nu $. ${\frac {\partial p}{\partial s}}$ is the pressure gradient and ${\frac {\partial c}{\partial s}}$ the velocity gradient along the streamline. For a steady flow, the time derivative of the velocity is zero: ${\frac {\partial c}{\partial t}}=0$. $g$ denotes the gravitational acceleration. Pathlines Pathlines are defined by ${\begin{cases}\displaystyle {\frac {d{\vec {x}}_{P}}{dt}}(t)={\vec {u}}_{P}({\vec {x}}_{P}(t),t)\\[1.2ex]{\vec {x}}_{P}(t_{0})={\vec {x}}_{P0}\end{cases}}$ The suffix $P$ indicates that we are following the motion of a fluid particle. Note that at point ${\vec {x}}_{P}$ the curve is parallel to the flow velocity vector ${\vec {u}}$, where the velocity vector is evaluated at the position of the particle ${\vec {x}}_{P}$ at that time $t$. Streaklines Streaklines can be expressed as ${\begin{cases}\displaystyle {\frac {d{\vec {x}}_{str}}{dt}}={\vec {u}}_{P}({\vec {x}}_{str},t)\\[1.2ex]{\vec {x}}_{str}(t=\tau _{P})={\vec {x}}_{P0}\end{cases}}$ where ${\vec {u}}_{P}({\vec {x}},t)$ is the velocity of a particle $P$ at location ${\vec {x}}$ and time $t$. The parameter $\tau _{P}$ parametrizes the streakline ${\vec {x}}_{str}(t,\tau _{P})$ and $t_{0}\leq \tau _{P}\leq t$, where $t$ is a time of interest. Steady flows In steady flow (when the velocity vector-field does not change with time), the streamlines, pathlines, and streaklines coincide. This is because when a particle on a streamline reaches a point, $a_{0}$, further on that streamline the equations governing the flow will send it in a certain direction ${\vec {x}}$. Since the equations that govern the flow remain the same, when another particle reaches $a_{0}$ it will also go in the direction ${\vec {x}}$. If the flow is not steady, then by the time the next particle reaches position $a_{0}$ the flow will have changed, and the particle will go in a different direction. This is useful, because it is usually very difficult to observe streamlines directly in an experiment. However, if the flow is steady, one can use streaklines to describe the streamline pattern. Frame dependence Streamlines are frame-dependent. That is, the streamlines observed in one inertial reference frame are different from those observed in another inertial reference frame. For instance, the streamlines in the air around an aircraft wing are defined differently for the passengers in the aircraft than for an observer on the ground. In the aircraft example, the observer on the ground will observe unsteady flow, and the observers in the aircraft will observe steady flow, with constant streamlines. When possible, fluid dynamicists try to find a reference frame in which the flow is steady, so that they can use experimental methods of creating streaklines to identify the streamlines. Application Knowledge of the streamlines can be useful in fluid dynamics. The curvature of a streamline is related to the pressure gradient acting perpendicular to the streamline. The center of curvature of the streamline lies in the direction of decreasing radial pressure.
The magnitude of the radial pressure gradient can be calculated directly from the density of the fluid, the curvature of the streamline and the local velocity. Dye can be used in water, or smoke in air, in order to see streaklines, from which pathlines can be calculated. Streaklines are identical to streamlines for steady flow. Further, dye can be used to create timelines.[6] These patterns guide design modifications, aiming to reduce drag. This task is known as streamlining, and the resulting design is referred to as being streamlined. Streamlined objects and organisms, like airfoils, streamliners, cars and dolphins, are often aesthetically pleasing to the eye. The Streamline Moderne style, a 1930s and 1940s offshoot of Art Deco, brought flowing lines to architecture and design of the era. The canonical example of a streamlined shape is a chicken egg with the blunt end facing forwards. This shows clearly that the curvature of the front surface can be much steeper than that of the back of the object. Most drag is caused by eddies in the fluid behind the moving object, and the objective should be to allow the fluid to slow down after passing around the object, and regain pressure, without forming eddies. The same terms have since become common vernacular to describe any process that smooths an operation. For instance, it is common to hear references to streamlining a business practice, or operation. See also • Drag coefficient • Elementary flow • Equipotential surface • Flow visualization • Flow velocity • Scientific visualization • Seeding (fluid dynamics) • Stream function • Streamsurface • Streamlet (scientific visualization) Notes and references Notes 1. Batchelor, G. (2000). An Introduction to Fluid Dynamics. 2. Kundu P and Cohen I. Fluid Mechanics. 3. "Definition of Streamlines". www.grc.nasa.gov. Archived from the original on 18 January 2017. Retrieved 26 April 2018. 4. Granger, R.A. (1995). Fluid Mechanics. Dover Publications. ISBN 0-486-68356-7, pp. 422–425. 5. tec-science (2020-04-22). "Equation of motion of a fluid on a streamline". tec-science. Retrieved 2020-05-07. 6. "Flow visualisation". National Committee for Fluid Mechanics Films (NCFMF). Archived from the original (RealMedia) on 2006-01-03. Retrieved 2009-04-20. References • Faber, T.E. (1995). Fluid Dynamics for Physicists. Cambridge University Press. ISBN 0-521-42969-2. External links • Streamline illustration • Tutorial - Illustration of Streamlines, Streaklines and Pathlines of a Velocity Field (with applet) • Joukowsky Transform Interactive WebApp
Wikipedia
Streamline diffusion Streamline diffusion, in the context of an advection–diffusion equation, refers to diffusion acting along the advection (streamline) direction.[1] References 1. Roos, Hans-Görg; Zarin, Helena (2003-01-01). "The streamline-diffusion method for a convection–diffusion problem with a point source". Journal of Computational and Applied Mathematics. 150 (1): 109–128. Bibcode:2003JCoAM.150..109R. doi:10.1016/S0377-0427(02)00568-X. ISSN 0377-0427.
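A classic one-dimensional illustration (a hypothetical sketch, not from the article: the model problem $-\varepsilon u''+bu'=0$ on $(0,1)$ with $u(0)=0$, $u(1)=1$): a central difference scheme oscillates when the mesh Péclet number is large, while adding artificial diffusion of size $bh/2$ along the (one-dimensional) flow direction suppresses the oscillations.

```python
import numpy as np

def solve(eps, b, n, streamline_diffusion=False):
    # -eps*u'' + b*u' = 0 on (0,1), u(0)=0, u(1)=1, central differences.
    h = 1.0 / n
    if streamline_diffusion:
        eps = eps + b * h / 2.0  # extra diffusion along the (1D) flow direction
    main = 2.0 * eps / h**2 * np.ones(n - 1)
    lower = (-eps / h**2 - b / (2 * h)) * np.ones(n - 2)
    upper = (-eps / h**2 + b / (2 * h)) * np.ones(n - 2)
    A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
    rhs = np.zeros(n - 1)
    rhs[-1] = -(-eps / h**2 + b / (2 * h)) * 1.0  # move the known u(1) = 1 to the RHS
    return np.linalg.solve(A, rhs)

u_plain = solve(eps=1e-3, b=1.0, n=20)
u_sd = solve(eps=1e-3, b=1.0, n=20, streamline_diffusion=True)
print(np.any(u_plain < -1e-12), np.any(u_sd < -1e-12))  # True (oscillates), False
```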
Wikipedia
Streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations The streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations can be used for finite element computations of high Reynolds number incompressible flow using equal-order finite element spaces (i.e. $\mathbb {P} _{k}-\mathbb {P} _{k}$) by introducing additional stabilization terms in the Navier–Stokes Galerkin formulation.[1][2] The finite element (FE) numerical computation of the incompressible Navier–Stokes (NS) equations suffers from two main sources of numerical instabilities arising from the associated Galerkin problem.[1] Equal-order finite elements for pressure and velocity (for example, $\mathbb {P} _{k}-\mathbb {P} _{k},\;\forall k\geq 0$) do not satisfy the inf-sup condition and lead to instability in the discrete pressure (also called spurious pressure).[2] Moreover, the advection term in the Navier–Stokes equations can produce oscillations in the velocity field (also called spurious velocity).[2] Such spurious velocity oscillations become more evident for advection-dominated (i.e., high Reynolds number $Re$) flows.[2] To control the instabilities arising from the inf-sup condition and from convection dominance, pressure-stabilizing Petrov–Galerkin (PSPG) stabilization along with streamline-upwind Petrov–Galerkin (SUPG) stabilization can be added to the NS Galerkin formulation.[1][2] The incompressible Navier–Stokes equations for a Newtonian fluid Let $\Omega \subset \mathbb {R} ^{3}$ be the spatial fluid domain with a smooth boundary $\partial \Omega \equiv \Gamma $, where $\Gamma =\Gamma _{N}\cup \Gamma _{D}$ with $\Gamma _{D}$ the subset of $\Gamma $ on which the essential (Dirichlet) boundary conditions are set, while $\Gamma _{N}$ is the portion of the boundary where natural (Neumann) boundary conditions are considered. Moreover, $\Gamma _{N}=\Gamma \setminus \Gamma _{D}$, and $\Gamma _{N}\cap \Gamma _{D}=\emptyset $. Introducing an unknown velocity field $\mathbf {u} (\mathbf {x} ,t):\Omega \times [0,T]\rightarrow \mathbb {R} ^{3}$ and an unknown pressure field $p(\mathbf {x} ,t):\Omega \times [0,T]\rightarrow \mathbb {R} $, in absence of body forces, the incompressible Navier–Stokes (NS) equations read[3] ${\begin{cases}{\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -{\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)=\mathbf {0} &{\text{in }}\Omega \times (0,T],\\\nabla \cdot {\mathbf {u} }=0&{\text{in }}\Omega \times (0,T],\\\mathbf {u} =\mathbf {g} &{\text{on }}\Gamma _{D}\times (0,T],\\{\boldsymbol {\sigma }}(\mathbf {u} ,p)\mathbf {\hat {n}} =\mathbf {h} &{\text{on }}\Gamma _{N}\times (0,T],\\\mathbf {u} (\mathbf {x} ,0)=\mathbf {u} _{0}(\mathbf {x} )&{\text{in }}\Omega \times \{0\},\end{cases}}$ where $\mathbf {\hat {n}} $ is the outward directed unit normal vector to $\Gamma _{N}$, ${\boldsymbol {\sigma }}$ is the Cauchy stress tensor, $\rho $ is the fluid density, and $\nabla $ and $\nabla \cdot $ are the usual gradient and divergence operators. The functions $\mathbf {g} $ and $\mathbf {h} $ indicate suitable Dirichlet and Neumann data, respectively, while $\mathbf {u} _{0}$ is the known initial field solution at time $t=0$.
For a Newtonian fluid, the Cauchy stress tensor ${\boldsymbol {\sigma }}$ depends linearly on the components of the strain rate tensor:[3] ${\boldsymbol {\sigma }}(\mathbf {u} ,p)=-p\mathbf {I} +2\mu \mathbf {S} (\mathbf {u} ),$ where $\mu $ is the dynamic viscosity of the fluid (taken to be a known constant) and $\mathbf {I} $ is the second order identity tensor, while $\mathbf {S} (\mathbf {u} )$ is the strain rate tensor $\mathbf {S} (\mathbf {u} )={\frac {1}{2}}{\big [}\nabla \mathbf {u} +(\nabla \mathbf {u} )^{T}{\big ]}.$ The first of the NS equations represents the balance of momentum and the second one the conservation of mass, also called the continuity equation (or incompressibility constraint).[3] The vector functions $\mathbf {u} _{0}$, $\mathbf {g} $, and $\mathbf {h} $ are assigned. Hence, the strong formulation of the incompressible Navier–Stokes equations for a constant density, Newtonian and homogeneous fluid can be written as:[3] Find, $\forall t\in (0,T]$, velocity $\mathbf {u} (\mathbf {x} ,t)$ and pressure $p(\mathbf {x} ,t)$ such that: ${\begin{cases}{\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla {\hat {p}}-2\nu \nabla \cdot \mathbf {S} (\mathbf {u} )=\mathbf {0} &{\text{in }}\Omega \times (0,T],\\\nabla \cdot {\mathbf {u} }=0&{\text{in }}\Omega \times (0,T],\\\left(-{\hat {p}}\mathbf {I} +2\nu \mathbf {S} (\mathbf {u} )\right)\mathbf {\hat {n}} =\mathbf {h} &{\text{on }}\Gamma _{N}\times (0,T],\\\mathbf {u} =\mathbf {g} &{\text{on }}\Gamma _{D}\times (0,T],\\\mathbf {u} (\mathbf {x} ,0)=\mathbf {u} _{0}(\mathbf {x} )&{\text{in }}\Omega \times \{0\},\end{cases}}$ where $\nu ={\frac {\mu }{\rho }}$ is the kinematic viscosity, and ${\hat {p}}={\frac {p}{\rho }}$ is the pressure rescaled by the density (however, for the sake of clarity, the hat on the pressure variable will be omitted in what follows).
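As a small numerical illustration of these constitutive relations (a sketch, not from the article; the velocity gradient, viscosity and pressure values below are arbitrary examples), the strain rate tensor and the Cauchy stress can be evaluated directly:

```python
import numpy as np

mu, p = 1.0e-3, 2.0                   # assumed dynamic viscosity [Pa*s] and pressure [Pa]
grad_u = np.array([[0.0, 1.0, 0.0],   # an arbitrary example velocity gradient
                   [0.5, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])

S = 0.5 * (grad_u + grad_u.T)          # strain rate tensor S(u)
sigma = -p * np.eye(3) + 2.0 * mu * S  # Cauchy stress of a Newtonian fluid

print(S)
print(sigma)
```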
In the NS equations, the Reynolds number shows how important the nonlinear term $(\mathbf {u} \cdot \nabla )\mathbf {u} $ is compared to the dissipative term $\nu \nabla \cdot \mathbf {S} (\mathbf {u} )$:[4] ${\frac {(\mathbf {u} \cdot \nabla )\mathbf {u} }{\nu \nabla \cdot \mathbf {S} (\mathbf {u} )}}\approx {\frac {\frac {U^{2}}{L}}{\nu {\frac {U}{L^{2}}}}}={\frac {UL}{\nu }}=\mathrm {Re} .$ The Reynolds number is a measure of the ratio between the advective (convective) terms, generated by inertial forces in the flow, and the diffusive term, associated with the viscous forces of the fluid.[4] Thus, $\mathrm {Re} $ can be used to discriminate between advection-dominated and diffusion-dominated flows.[4] Namely: • for "low" $\mathrm {Re} $, viscous forces dominate (laminar flow),[4] • for "high" $\mathrm {Re} $, inertial forces prevail, corresponding to a slightly viscous fluid at high velocity (turbulent flow).[4] The weak formulation of the Navier–Stokes equations The weak formulation of the strong formulation of the NS equations is obtained by multiplying the first two NS equations by test functions $\mathbf {v} $ and $q$, respectively, belonging to suitable function spaces, and integrating these equations over the fluid domain $\Omega $.[3] As a consequence:[3] ${\begin{aligned}&\int _{\Omega }{\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} \,d\Omega +\int _{\Omega }(\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} \,d\Omega +\int _{\Omega }\nabla p\cdot \mathbf {v} \,d\Omega \,-\int _{\Omega }2\nu \nabla \cdot \mathbf {S} (\mathbf {u} )\cdot \mathbf {v} \,d\Omega =0,\\&\int _{\Omega }\nabla \cdot \mathbf {u} \,q\,d\Omega =0.\end{aligned}}$ By summing up the two equations and performing integration by parts on the pressure ($\nabla p$) and viscous ($\nabla \cdot \mathbf {S} (\mathbf {u} )$) terms:[3] $\int _{\Omega }{\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} \,d\Omega +\int _{\Omega }(\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} \,d\Omega \,+\int _{\Omega }\nabla \cdot \mathbf {u} \,q\,d\Omega -\int _{\Omega }p\nabla \cdot \mathbf {v} \,d\Omega +\int _{\partial \Omega }p\mathbf {v} \cdot \mathbf {\hat {n}} \,d\Gamma \,+\int _{\Omega }2\nu \mathbf {S} (\mathbf {u} ):\nabla \mathbf {v} \,d\Omega -\int _{\partial \Omega }2\nu \mathbf {S} (\mathbf {u} )\cdot \mathbf {v} \cdot \mathbf {\hat {n}} \,d\Gamma \,=0.$ Regarding the choice of the function spaces, it suffices that $p$ and $q$, $\mathbf {u} $ and $\mathbf {v} $, and their derivatives $\nabla \mathbf {u} $ and $\nabla \mathbf {v} $ are square-integrable functions for the integrals appearing in the above formulation to make sense.[3] Hence,[3] ${\begin{aligned}&{\mathcal {Q}}=L^{2}(\Omega )=\left\{q:\Omega \to \mathbb {R} {\text{ s.t. }}\Vert q\Vert _{L^{2}}={\sqrt {\int _{\Omega }{\vert q\vert ^{2}\ d\Omega }}}<\infty \right\},\\&{\mathcal {V}}=\{\mathbf {v} \in [L^{2}(\Omega )]^{3}{\text{ and }}\nabla \mathbf {v} \in [L^{2}(\Omega )]^{3\times 3},\,\mathbf {v} |_{\Gamma _{D}}=\mathbf {g} \},\\&{\mathcal {V}}_{0}=\{\mathbf {v} \in {\mathcal {V}}{\text{ s.t. }}\mathbf {v} |_{\Gamma _{D}}=\mathbf {0} \}.\end{aligned}}$ Having specified the function spaces ${\mathcal {V}}$, ${\mathcal {V}}_{0}$ and ${\mathcal {Q}}$, and by applying the boundary conditions, the boundary terms can be rewritten as[3] $\int _{\Gamma _{D}\cup \Gamma _{N}}p\mathbf {v} \cdot \mathbf {\hat {n}} \,d\Gamma +\int _{\Gamma _{D}\cup \Gamma _{N}}-2\nu \mathbf {S} (\mathbf {u} )\cdot \mathbf {v} \cdot \mathbf {\hat {n}} \,d\Gamma ,$ where $\partial \Omega =\Gamma _{D}\cup \Gamma _{N}$. The integral terms on $\Gamma _{D}$ vanish because $\mathbf {v} |_{\Gamma _{D}}=\mathbf {0} $, while the term on $\Gamma _{N}$ becomes $\int _{\Gamma _{N}}[p\mathbf {I} -2\nu \mathbf {S} (\mathbf {u} )]\cdot \mathbf {v} \cdot \mathbf {\hat {n}} \,d\Gamma =-\int _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} \,d\Gamma .$ The weak formulation of the Navier–Stokes equations then reads:[3] Find, for all $t\in (0,T]$, $(\mathbf {u} ,p)\in \{{\mathcal {V}}\times {\mathcal {Q}}\}$, such that $\left({\frac {\partial \mathbf {u} }{\partial t}},\mathbf {v} \right)+c(\mathbf {u} ,\mathbf {u} ,\mathbf {v} )+b(\mathbf {u} ,q)-b(\mathbf {v} ,p)+a(\mathbf {u} ,\mathbf {v} )=f(\mathbf {v} )$ with $\mathbf {u} |_{t=0}=\mathbf {u} _{0}$, where[3] ${\begin{aligned}\left({\frac {\partial \mathbf {u} }{\partial t}},\mathbf {v} \right)&:=\int _{\Omega }{\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} \,d\Omega ,\\b(\mathbf {u} ,q)&:=\int _{\Omega }\nabla \cdot \mathbf {u} \,q\,d\Omega ,\\a(\mathbf {u} ,\mathbf {v} )&:=\int _{\Omega }2\nu \mathbf {S} (\mathbf {u} ):\nabla \mathbf {v} \,d\Omega ,\\c(\mathbf {w} ,\mathbf {u} ,\mathbf {v} )&:=\int _{\Omega }(\mathbf {w} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} \,d\Omega ,\\f(\mathbf {v} )&:=-\int _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} \,d\Gamma .\end{aligned}}$ Finite element Galerkin formulation of Navier–Stokes equations In order to numerically solve the NS problem, first the discretization of the weak formulation is performed.[3] Consider a triangulation $\Omega _{h}$ of the domain $\Omega $, composed of tetrahedra ${\mathcal {T}}_{i}$, with $i=1,\ldots ,N_{\mathcal {T}}$ (where $N_{\mathcal {T}}$ is the total number of tetrahedra), where $h$ is the characteristic length of the elements of the triangulation.[3] Introducing two families of finite-dimensional sub-spaces ${\mathcal {V}}_{h}$ and ${\mathcal {Q}}_{h}$, approximations of ${\mathcal {V}}$ and ${\mathcal {Q}}$ respectively, and depending on a discretization parameter $h$, with $\dim {\mathcal {V}}_{h}=N_{V}$ and $\dim {\mathcal {Q}}_{h}=N_{Q}$,[3] ${\mathcal {V}}_{h}\subset {\mathcal {V}}\;\;\;\;\;\;\;\;\;{\mathcal {Q}}_{h}\subset {\mathcal {Q}},$ the discretized-in-space Galerkin problem of the weak NS equation reads:[3] Find, for all $t\in (0,T]$, $(\mathbf {u} _{h},p_{h})\in \{{\mathcal {V}}_{h}\times {\mathcal {Q}}_{h}\}$, such that ${\begin{aligned}&\left({\frac {\partial \mathbf {u} _{h}}{\partial t}},\mathbf {v} _{h}\right)+c(\mathbf {u} _{h},\mathbf {u} _{h},\mathbf {v} _{h})+b(\mathbf {u} _{h},q_{h})-b(\mathbf {v} _{h},p_{h})+a(\mathbf {u} _{h},\mathbf {v} _{h})=f(\mathbf {v} _{h})\\&\;\;\;\;\;\;\;\;\;\;\forall \mathbf {v} _{h}\in {\mathcal {V}}_{0h}\;\;,\;\;\forall q_{h}\in {\mathcal {Q}}_{h},\end{aligned}}$ with $\mathbf {u} _{h}|_{t=0}=\mathbf {u} _{h,0}$, where $\mathbf {g} _{h}$ is the approximation (for example, its interpolant) of $\mathbf {g} $, and ${\mathcal {V}}_{0h}=\{\mathbf {v} _{h}\in {\mathcal {V}}_{h}{\text{ s.t. }}\mathbf {v} _{h}|_{\Gamma _{D}}=\mathbf {0} \}.$
Time discretization of the discretized-in-space NS Galerkin problem can be performed, for example, by using the second-order Backward Differentiation Formula (BDF2), an implicit second-order multistep method.[5] Divide the finite time interval $[0,T]$ uniformly into $N_{t}$ time steps of size $\delta t$[3] $t_{n}=n\delta t,\;\;\;n=0,1,2,\ldots ,N_{t}\;\;\;\;\;N_{t}={\frac {T}{\delta t}}.$ For a general function $z$, denote by $z^{n}$ the approximation of $z(t_{n})$. Thus, the BDF2 approximation of the time derivative reads as follows:[3] $\left({\frac {\partial \mathbf {u} _{h}}{\partial t}}\right)^{n+1}\simeq {\frac {3\mathbf {u} _{h}^{n+1}-4\mathbf {u} _{h}^{n}+\mathbf {u} _{h}^{n-1}}{2\delta t}}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;{\text{for }}n\geq 1.$ So, the fully discretized in time and space NS Galerkin problem is:[3] Find, for $n=0,1,\ldots ,N_{t}-1$, $(\mathbf {u} _{h}^{n+1},p_{h}^{n+1})\in \{{\mathcal {V}}_{h}\times {\mathcal {Q}}_{h}\}$, such that ${\begin{aligned}\left({\frac {3\mathbf {u} _{h}^{n+1}-4\mathbf {u} _{h}^{n}+\mathbf {u} _{h}^{n-1}}{2\delta t}},\mathbf {v} _{h}\right)&+c(\mathbf {u} _{h}^{*},\mathbf {u} _{h}^{n+1},\mathbf {v} _{h})+b(\mathbf {u} _{h}^{n+1},q_{h})-b(\mathbf {v} _{h},p_{h}^{n+1})+a(\mathbf {u} _{h}^{n+1},\mathbf {v} _{h})=f(\mathbf {v} _{h}),\\&\;\;\;\;\;\;\;\;\;\;\forall \mathbf {v} _{h}\in {\mathcal {V}}_{0h}\;\;,\;\;\forall q_{h}\in {\mathcal {Q}}_{h},\end{aligned}}$ with $\mathbf {u} _{h}^{0}=\mathbf {u} _{h,0}$, and $\mathbf {u} _{h}^{*}$ is a quantity that will be detailed later in this section. The main issue of a fully implicit method for the NS Galerkin formulation is that the resulting problem is still nonlinear, due to the convective term $c(\mathbf {u} _{h}^{*},\mathbf {u} _{h}^{n+1},\mathbf {v} _{h})$.[3] Indeed, setting $\mathbf {u} _{h}^{*}=\mathbf {u} _{h}^{n+1}$ leads to a nonlinear system to be solved (for example, by means of Newton or fixed-point iterations) at a large computational cost.[3] In order to reduce this cost, it is possible to use a semi-implicit approach with a second-order extrapolation for the velocity $\mathbf {u} _{h}^{*}$ in the convective term:[3] $\mathbf {u} _{h}^{*}=2\mathbf {u} _{h}^{n}-\mathbf {u} _{h}^{n-1}.$ Finite element formulation and the INF-SUP condition Define the finite element (FE) spaces of continuous functions $X_{h}^{r}$ (polynomials of degree $r$ on each element ${\mathcal {T}}_{i}$ of the triangulation) as[3] $X_{h}^{r}=\left\{v_{h}\in C^{0}({\overline {\Omega }}):v_{h}|_{{\mathcal {T}}_{i}}\in \mathbb {P} _{r}\ \forall {\mathcal {T}}_{i}\in \Omega _{h}\right\}\;\;\;\;\;\;\;\;\;r=0,1,2,\ldots ,$ where $\mathbb {P} _{r}$ is the space of polynomials of degree less than or equal to $r$. Introduce the finite element formulation, as a specific Galerkin problem, and choose ${\mathcal {V}}_{h}$ and ${\mathcal {Q}}_{h}$ as[3] ${\mathcal {V}}_{h}\equiv [X_{h}^{r}]^{3}\;\;\;\;\;\;\;\;{\mathcal {Q}}_{h}\equiv X_{h}^{s}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;r,s\in \mathbb {N} .$ The FE spaces ${\mathcal {V}}_{h}$ and ${\mathcal {Q}}_{h}$ need to satisfy the inf-sup (or LBB) condition:[6] $\exists \beta _{h}>0\;{\text{ s.t. }}\;\inf _{q_{h}\in {\mathcal {Q}}_{h}}\sup _{\mathbf {v} _{h}\in {\mathcal {V}}_{h}}{\frac {b(q_{h},\mathbf {v} _{h})}{\Vert \mathbf {v} _{h}\Vert _{H^{1}}\Vert q_{h}\Vert _{L^{2}}}}\geq \beta _{h}\;\;\;\;\;\;\;\;\forall h>0,$ with $\beta _{h}>0$ independent of the mesh size $h$.[6] This property is necessary for the well-posedness of the discrete problem and the optimal convergence of the method.[6] Examples of FE spaces satisfying the inf-sup condition are the so-called Taylor–Hood pairs $\mathbb {P} _{k+1}-\mathbb {P} _{k}$ (with $k\geq 1$), where it can be noticed that the velocity space ${\mathcal {V}}_{h}$ has to be, in some sense, "richer" than the pressure space ${\mathcal {Q}}_{h}$.[6] Indeed, the inf-sup condition couples the spaces ${\mathcal {V}}_{h}$ and ${\mathcal {Q}}_{h}$, and it is a sort of compatibility condition between the velocity and pressure spaces.[6] The equal-order finite elements $\mathbb {P} _{k}-\mathbb {P} _{k}$ ($\forall k$) do not satisfy the inf-sup condition and lead to instability in the discrete pressure (also called spurious pressure).[6] However, $\mathbb {P} _{k}-\mathbb {P} _{k}$ can still be used with additional stabilization terms such as streamline-upwind Petrov–Galerkin with a pressure-stabilizing Petrov–Galerkin term (SUPG-PSPG).[2][1] In order to derive the FE algebraic formulation of the fully discretized Galerkin NS problem, it is necessary to introduce two bases for the discrete spaces ${\mathcal {V}}_{h}$ and ${\mathcal {Q}}_{h}$[3] $\{{\boldsymbol {\phi }}_{i}(\mathbf {x} )\}_{i=1}^{N_{V}}\;\;\;\;\;\;\{\psi _{k}(\mathbf {x} )\}_{k=1}^{N_{Q}},$ in order to expand the variables as[3] $\mathbf {u} _{h}^{n}=\sum _{j=1}^{N_{V}}U_{j}^{n}{\boldsymbol {\phi }}_{j}(\mathbf {x} ),\;\;\;\;\;\;\;\;\;\;p_{h}^{n}=\sum _{l=1}^{N_{Q}}P_{l}^{n}\psi _{l}(\mathbf {x} ).$ The coefficients $U_{j}^{n}$ ($j=1,\ldots ,N_{V}$) and $P_{l}^{n}$ ($l=1,\ldots ,N_{Q}$) are called the degrees of freedom (d.o.f.) of the finite element approximation of the velocity and pressure fields, respectively. The dimensions of the FE spaces, $N_{V}$ and $N_{Q}$, are the numbers of d.o.f. of the velocity and pressure fields, respectively.
Hence, the total number of d.o.f. $N_{d.o.f}$ is $N_{d.o.f}=N_{V}+N_{Q}$.[3] Since the fully discretized Galerkin problem holds for all elements of the spaces ${\mathcal {V}}_{h}$ and ${\mathcal {Q}}_{h}$, it is also valid for the basis functions.[3] Hence, choosing these basis functions as test functions in the fully discretized NS Galerkin problem, and using the bilinearity of $a(\cdot ,\cdot )$ and $b(\cdot ,\cdot )$ and the trilinearity of $c(\cdot ,\cdot ,\cdot )$, the following linear system is obtained:[3] ${\begin{cases}\displaystyle M{\frac {3\mathbf {U} ^{n+1}-4\mathbf {U} ^{n}+\mathbf {U} ^{n-1}}{2\delta t}}+A\mathbf {U} ^{n+1}+C(\mathbf {U} ^{*})\mathbf {U} ^{n+1}+\displaystyle {B^{T}\mathbf {P} ^{n+1}=\mathbf {F} ^{n}}\\\displaystyle {B\mathbf {U} ^{n+1}=\mathbf {0} }\end{cases}}$ where $M\in \mathbb {R} ^{N_{V}\times N_{V}}$, $A\in \mathbb {R} ^{N_{V}\times N_{V}}$, $C(\mathbf {U} ^{*})\in \mathbb {R} ^{N_{V}\times N_{V}}$, $B\in \mathbb {R} ^{N_{Q}\times N_{V}}$, and $F\in \mathbb {R} ^{N_{V}}$ are given by[3] ${\begin{aligned}&M_{ij}=\int _{\Omega }{\boldsymbol {\phi }}_{j}\cdot {\boldsymbol {\phi }}_{i}d\Omega \\&A_{ij}=a({\boldsymbol {\phi }}_{j},{\boldsymbol {\phi }}_{i})\\&C_{ij}(\mathbf {u} ^{*})=c(\mathbf {u} ^{*},{\boldsymbol {\phi }}_{j},{\boldsymbol {\phi }}_{i}),\\&B_{kj}=b({\boldsymbol {\phi }}_{j},\psi _{k}),\\&F_{i}=f({\boldsymbol {\phi }}_{i})\end{aligned}}$ and $\mathbf {U} $ and $\mathbf {P} $ are the unknown vectors[3] $\mathbf {U} ^{n}={\Big (}U_{1}^{n},\ldots ,U_{N_{V}}^{n}{\Big )}^{T},\;\;\;\;\;\;\;\;\;\;\;\;\mathbf {P} ^{n}={\Big (}P_{1}^{n},\ldots ,P_{N_{Q}}^{n}{\Big )}^{T}.$ The problem is completed by an initial condition on the velocity, $\mathbf {U} (0)=\mathbf {U} _{0}$. Moreover, using the semi-implicit treatment $\mathbf {U} ^{*}=2\mathbf {U} ^{n}-\mathbf {U} ^{n-1}$, the trilinear term $c(\cdot ,\cdot ,\cdot )$ becomes bilinear, and the corresponding matrix is[3] $C_{ij}=c(\mathbf {u} ^{*},{\boldsymbol {\phi }}_{j},{\boldsymbol {\phi }}_{i})=\int _{\Omega }(\mathbf {u} ^{*}\cdot \nabla ){\boldsymbol {\phi }}_{j}\cdot {\boldsymbol {\phi }}_{i}\,d\Omega ,$ Hence, the linear system can be written as a single monolithic matrix ($\Sigma $, also called the monolithic NS matrix) of the form[3] ${\begin{bmatrix}K&B^{T}\\B&0\end{bmatrix}}{\begin{bmatrix}\mathbf {U} ^{n+1}\\\mathbf {P} ^{n+1}\end{bmatrix}}={\begin{bmatrix}\mathbf {F} ^{n}+{\frac {1}{2\delta t}}M(4\mathbf {U} ^{n}-\mathbf {U} ^{n-1})\\\mathbf {0} \end{bmatrix}},\;\;\;\;\;\Sigma ={\begin{bmatrix}K&B^{T}\\B&0\end{bmatrix}}.$ where $ K={\frac {3}{2\delta t}}M+A+C(U^{*})$. Streamline upwind Petrov–Galerkin formulation for incompressible Navier–Stokes equations The NS equations in finite element formulation suffer from two sources of numerical instability, due to the fact that: • NS is a convection-dominated problem for "large" $Re$, and numerical oscillations in the velocity field can occur (spurious velocity); • the FE spaces $\mathbb {P} _{k}-\mathbb {P} _{k}$ ($\forall k$) are unstable combinations of velocity and pressure finite element spaces, which do not satisfy the inf-sup condition and generate numerical oscillations in the pressure field (spurious pressure).
To control the instabilities arising from the inf-sup condition and from convection dominance, pressure-stabilizing Petrov–Galerkin (PSPG) stabilization along with streamline-upwind Petrov–Galerkin (SUPG) stabilization can be added to the NS Galerkin formulation.[1] $s(\mathbf {u} _{h}^{n+1},p_{h}^{n+1};\mathbf {v} _{h},q_{h})=\gamma \sum _{{\mathcal {T}}\in \Omega _{h}}\tau _{\mathcal {T}}\int _{\mathcal {T}}\left[{\mathcal {L}}(\mathbf {u} _{h}^{n+1},p^{n+1})\right]^{T}{\mathcal {L}}_{ss}(\mathbf {v} _{h},q_{h})d{\mathcal {T}},$ where $\gamma >0$ is a constant, $\tau _{\mathcal {T}}$ is a stabilization parameter, ${\mathcal {T}}$ is a generic tetrahedron of the finite element partition $\Omega _{h}$, and ${\mathcal {L}}(\mathbf {u} ,p)$ is the residual of the NS equations:[1] ${\mathcal {L}}(\mathbf {u} ,p)={\begin{bmatrix}{\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p-2\nu \nabla \cdot \mathbf {S} (\mathbf {u} )\\\nabla \cdot \mathbf {u} \end{bmatrix}},$ and ${\mathcal {L}}_{ss}(\mathbf {u} ,p)$ is the skew-symmetric part of the NS equations[1] ${\mathcal {L}}_{ss}(\mathbf {u} ,p)={\begin{bmatrix}(\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p\\\mathbf {0} \end{bmatrix}}.$ The skew-symmetric part of a generic operator ${\mathcal {L}}(\mathbf {u} ,p)$ is the one for which ${\Bigl (}{\mathcal {L}}(\mathbf {u} ,p),(\mathbf {v} ,q){\Bigr )}=-{\Bigl (}(\mathbf {v} ,q),{\mathcal {L}}(\mathbf {u} ,p){\Bigr )}.$[5] Since it is based on the residual of the NS equations, SUPG-PSPG is a strongly consistent stabilization method.[1] The discretized finite element Galerkin formulation with SUPG-PSPG stabilization can be written as:[1] Find, for $n=0,1,\ldots ,N_{t}-1$, $(\mathbf {u} _{h}^{n+1},p_{h}^{n+1})\in \{{\mathcal {V}}_{h}\times {\mathcal {Q}}_{h}\}$, such that ${\begin{aligned}&\left({\frac {3\mathbf {u} _{h}^{n+1}-4\mathbf {u} _{h}^{n}+\mathbf {u} _{h}^{n-1}}{2\delta t}},\mathbf {v} _{h}\right)+c(\mathbf {u} _{h}^{*},\mathbf {u} _{h}^{n+1},\mathbf {v} _{h})+b(\mathbf {u} _{h}^{n+1},q_{h})-b(\mathbf {v} _{h},p_{h}^{n+1})\\&\;\;\;\;\;\;\;\;\;\;\;\;+a(\mathbf {u} _{h}^{n+1},\mathbf {v} _{h})+s(\mathbf {u} _{h}^{n+1},p_{h}^{n+1};\mathbf {v} _{h},q_{h})=0\\\;\;\;\;\;\;\;\;\;\;\forall \mathbf {v} _{h}\in {\mathcal {V}}_{0h}\;\;,\;\;\forall q_{h}\in {\mathcal {Q}}_{h},\end{aligned}}$ with $\mathbf {u} _{h}^{0}=\mathbf {u} _{h,0}$, where[1] ${\begin{aligned}s(\mathbf {u} _{h}^{n+1},p_{h}^{n+1};\mathbf {v} _{h},q_{h})&=\gamma \sum _{{\mathcal {T}}\in \Omega _{h}}\tau _{M,{\mathcal {T}}}\left({\frac {3\mathbf {u} _{h}^{n+1}-4\mathbf {u} _{h}^{n}+\mathbf {u} _{h}^{n-1}}{2\delta t}}+(\mathbf {u} _{h}^{*}\cdot \nabla )\mathbf {u} _{h}^{n+1}+\nabla p_{h}^{n+1}+\right.\\&\left.-2\nu \nabla \cdot \mathbf {S} (\mathbf {u} _{h}^{n+1})\;{\boldsymbol {,}}\;u_{h}^{*}\cdot \nabla \mathbf {v} _{h}+{\frac {\nabla q_{h}}{\rho }}\right)_{\mathcal {T}}+\gamma \sum _{{\mathcal {T}}\in \Omega _{h}}\tau _{C,{\mathcal {T}}}\left(\nabla \cdot \mathbf {u} _{h}^{n+1}{\boldsymbol {,}}\;\nabla \cdot \mathbf {v} _{h}\right)_{\mathcal {T}},\end{aligned}}$ and $\tau _{M,{\mathcal {T}}}$ and $\tau _{C,{\mathcal {T}}}$ are two stabilization parameters for the momentum and the continuity NS equations, respectively.
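The stabilization parameters are cheap to evaluate elementwise. The following sketch (an illustration, not from the cited references; it uses the expressions for $\tau _{M,{\mathcal {T}}}$ and $\tau _{C,{\mathcal {T}}}$ reported in the next paragraph, takes $\sigma _{BDF}=2$ for BDF2, and assumes illustrative values for the element size, local velocity norm, viscosity and time step):

```python
def tau_parameters(u_norm, h, nu, dt, k=1, sigma_bdf=2.0):
    # tau_M = (sigma^2/dt^2 + |u|^2/h^2 + C_k * nu^2/h^4)^(-1/2),  tau_C = h^2 / tau_M
    C_k = 60.0 * 2.0 ** (k - 2)  # inverse-inequality constant for the P_k - P_k pair
    tau_M = (sigma_bdf**2 / dt**2 + u_norm**2 / h**2 + C_k * nu**2 / h**4) ** -0.5
    tau_C = h**2 / tau_M
    return tau_M, tau_C

# Illustrative (assumed) values: 1 mm element, |u| = 1 m/s, water-like nu, dt = 1 ms.
print(tau_parameters(u_norm=1.0, h=1e-3, nu=1e-6, dt=1e-3))
```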
In addition, the notation $ \left(a{\boldsymbol {,}}\;b\right)_{\mathcal {T}}=\int _{\mathcal {T}}ab\;d{\mathcal {T}}$ has been introduced, and $\mathbf {u} _{h}^{*}$ is defined in agreement with the semi-implicit treatment of the convective term.[1] In the previous expression of $s\left(\cdot \,;\cdot \right)$, the term $ \sum _{{\mathcal {T}}\in \Omega _{h}}\tau _{M,{\mathcal {T}}}\left(\nabla p_{h}^{n+1}{\boldsymbol {,}}\;{\frac {\nabla q_{h}}{\rho }}\right)_{\mathcal {T}},$ is the Brezzi–Pitkäranta stabilization for the inf-sup condition, while the term $ \sum _{{\mathcal {T}}\in \Omega _{h}}\tau _{M,{\mathcal {T}}}\left(u_{h}^{*}\cdot \nabla \mathbf {u} _{h}^{n+1}{\boldsymbol {,}}\;u_{h}^{*}\cdot \nabla \mathbf {v} _{h}\right)_{\mathcal {T}},$ corresponds to the streamline-diffusion stabilization for large $\mathrm {Re} $.[1] The remaining terms are needed to obtain a strongly consistent stabilization.[1] Regarding the choice of the stabilization parameters $\tau _{M,{\mathcal {T}}}$ and $\tau _{C,{\mathcal {T}}}$:[2] $\tau _{M,{\mathcal {T}}}=\left({\frac {\sigma _{BDF}^{2}}{\delta t^{2}}}+{\frac {\Vert \mathbf {u} \Vert ^{2}}{h_{\mathcal {T}}^{2}}}+C_{k}{\frac {\nu ^{2}}{h_{\mathcal {T}}^{4}}}\right)^{-1/2},\;\;\;\;\;\tau _{C,{\mathcal {T}}}={\frac {h_{\mathcal {T}}^{2}}{\tau _{M,{\mathcal {T}}}}},$ where: $C_{k}=60\cdot 2^{k-2}$ is a constant obtained from an inverse inequality (and $k$ is the order of the chosen pair $\mathbb {P} _{k}-\mathbb {P} _{k}$); $\sigma _{BDF}$ is a constant equal to the order of the time discretization; $\delta t$ is the time step; $h_{\mathcal {T}}$ is the "element length" (e.g. the element diameter) of a generic tetrahedron of the partitioned domain $\Omega _{h}$.[7] The parameters $\tau _{M,{\mathcal {T}}}$ and $\tau _{C,{\mathcal {T}}}$ can be obtained by a multidimensional generalization of the optimal value introduced in[8] for the one-dimensional case.[9] Notice that the terms added by the SUPG-PSPG stabilization can be explicitly written as follows[2] ${\begin{aligned}s_{11}^{(1)}={\biggl (}{\frac {3}{2}}{\frac {\mathbf {u} _{h}^{n+1}}{\delta t}}\;{\boldsymbol {,}}\;\mathbf {u} _{h}^{*}\cdot \nabla \mathbf {v} _{h}{\biggr )}_{\mathcal {T}},\;\;\;\;&\;\;\;\;s_{21}^{(1)}={\biggl (}{\frac {3}{2}}{\frac {\mathbf {u} _{h}^{n+1}}{\delta t}}\;{\boldsymbol {,}}\;{\frac {\nabla q_{h}}{\rho }}{\biggr )}_{\mathcal {T}},\\s_{11}^{(2)}={\biggl (}\mathbf {u} _{h}^{*}\cdot \nabla \mathbf {u} _{h}^{n+1}\;{\boldsymbol {,}}\;\mathbf {u} _{h}^{*}\cdot \nabla \mathbf {v} _{h}{\biggr )}_{\mathcal {T}},\;\;\;\;&\;\;\;\;s_{21}^{(2)}={\biggl (}\mathbf {u} _{h}^{*}\cdot \nabla \mathbf {u} _{h}^{n+1}\;{\boldsymbol {,}}\;{\frac {\nabla q_{h}}{\rho }}{\biggr )}_{\mathcal {T}},\\s_{11}^{(3)}={\biggl (}-2\nu \nabla \cdot \mathbf {S} &(\mathbf {u} _{h}^{n+1})\;{\boldsymbol {,}}\;\mathbf {u} _{h}^{*}\cdot \nabla \mathbf {v} _{h}{\biggr )}_{\mathcal {T}},\\s_{21}^{(3)}={\biggl (}-2\nu \nabla \cdot \mathbf {S} &(\mathbf {u} _{h}^{n+1})\;{\boldsymbol {,}}\;{\frac {\nabla q_{h}}{\rho }}{\biggr )}_{\mathcal {T}},\\s_{11}^{(4)}={\biggl (}\nabla \cdot \mathbf {u} _{h}^{n+1}\;{\boldsymbol {,}}&\;\nabla \mathbf {\cdot } \mathbf {v} _{h}{\biggr )}_{\mathcal {T}},\end{aligned}}$ ${\begin{aligned}s_{12}={\biggl (}\nabla p_{h}\;{\boldsymbol {,}}\;\mathbf {u} _{h}^{*}\cdot \nabla \mathbf {v} _{h}{\biggr )}_{\mathcal {T}},\;\;\;\;&\;\;\;\;s_{22}={\biggl (}\nabla p_{h}\;{\boldsymbol {,}}\;{\frac {\nabla q_{h}}{\rho }}{\biggr )}_{\mathcal {T}},\\f_{v}={\biggl (}{\frac {4\mathbf {u} _{h}^{n}-\mathbf {u} _{h}^{n-1}}{2\delta t}}\;{\boldsymbol {,}}\;\mathbf {u} _{h}^{*}\cdot \nabla \mathbf {v} _{h}{\biggr )}_{\mathcal {T}},\;\;\;\;&\;\;\;\;f_{q}={\biggl (}{\frac {4\mathbf {u} _{h}^{n}-\mathbf {u} _{h}^{n-1}}{2\delta t}}\;{\boldsymbol {,}}\;{\frac {\nabla q_{h}}{\rho }}{\biggr )}_{\mathcal {T}},\end{aligned}}$ where, for the sake of clarity, the sum over the tetrahedra has been omitted: all terms are to be understood as $ s_{(I,J)}^{(n)}=\sum _{{\mathcal {T}}\in \Omega _{h}}\tau _{\mathcal {T}}\left(\cdot \,,\cdot \right)_{\mathcal {T}}$; moreover, the indices $I,J$ in $s_{(I,J)}^{(n)}$ refer to the position of the corresponding term in the monolithic NS matrix $\Sigma $, and $n$ distinguishes the different terms inside each block[2] ${\begin{bmatrix}\Sigma _{11}&\Sigma _{12}\\\Sigma _{21}&\Sigma _{22}\end{bmatrix}}\Longrightarrow {\begin{bmatrix}s_{(11)}^{(1)}+s_{(11)}^{(2)}+s_{(11)}^{(3)}+s_{(11)}^{(4)}&s_{(12)}\\s_{(21)}^{(1)}+s_{(21)}^{(2)}+s_{(21)}^{(3)}&s_{(22)}\end{bmatrix}},$ Hence, the NS monolithic system with the SUPG-PSPG stabilization becomes[2] ${\begin{bmatrix}\ {\tilde {K}}&B^{T}+S_{12}^{T}\\{\widetilde {B}}&S_{22}\end{bmatrix}}{\begin{bmatrix}\mathbf {U} ^{n+1}\\\mathbf {P} ^{n+1}\end{bmatrix}}={\begin{bmatrix}\ \mathbf {F} ^{n}+{\frac {1}{2\delta t}}M(4\mathbf {U} ^{n}-\mathbf {U} ^{n-1})+\mathbf {F} _{v}\\\mathbf {F} _{q}\end{bmatrix}},$ where $ {\tilde {K}}=K+\sum \limits _{i=1}^{4}S_{11}^{(i)}$, and $ {\tilde {B}}=B+\sum \limits _{i=1}^{3}S_{21}^{(i)}$. It is well known that SUPG-PSPG stabilization does not exhibit excessive numerical diffusion if at least second-order velocity elements and first-order pressure elements ($\mathbb {P} _{2}-\mathbb {P} _{1}$) are used.[8] References 1. Tezduyar, T. E. (1 January 1991). "Stabilized Finite Element Formulations for Incompressible Flow Computations". Advances in Applied Mechanics. Elsevier. 28: 1–44. doi:10.1016/S0065-2156(08)70153-4. 2. Tobiska, Lutz; Lube, Gert (1 December 1991). "A modified streamline diffusion method for solving the stationary Navier–Stokes equation". Numerische Mathematik. 59 (1): 13–29. doi:10.1007/BF01385768. ISSN 0945-3245. S2CID 123397636. 3. Quarteroni, Alfio (2014). Numerical Models for Differential Problems (2 ed.). Springer-Verlag. ISBN 9788847058835. 4. Pope, Stephen B. (2000). Turbulent Flows. Cambridge University Press. ISBN 9780521598866. 5. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2007). Numerical Mathematics (2 ed.). Springer-Verlag. ISBN 9783540346586. 6. Brezzi, Franco; Fortin, Michel (1991). Mixed and Hybrid Finite Element Methods (PDF). Springer Series in Computational Mathematics. Vol. 15. doi:10.1007/978-1-4612-3172-1. ISBN 978-1-4612-7824-5. 7. Forti, Davide; Dedè, Luca (August 2015). "Semi-implicit BDF time discretization of the Navier–Stokes equations with VMS-LES modeling in a High Performance Computing framework". Computers & Fluids. 117: 168–182. doi:10.1016/j.compfluid.2015.05.011. 8. Shih, Rompin; Ray, S.
E.; Mittal, Sanjay; Tezduyar, T. E. (1992). "Incompressible flow computations with stabilized bilinear and linear equal-order-interpolation velocity-pressure elements". Computer Methods in Applied Mechanics and Engineering. 95 (2): 221. Bibcode:1992CMAME..95..221T. doi:10.1016/0045-7825(92)90141-6. S2CID 31236394. 9. Kler, Pablo A.; Dalcin, Lisandro D.; Paz, Rodrigo R.; Tezduyar, Tayfun E. (1 February 2013). "SUPG and discontinuity-capturing methods for coupled fluid mechanics and electrochemical transport problems". Computational Mechanics. 51 (2): 171–185. Bibcode:2013CompM..51..171K. doi:10.1007/s00466-012-0712-z. ISSN 1432-0924. S2CID 123650035.
Wikipedia
Strength (mathematical logic) The relative strength of two systems of formal logic can be defined via model theory. Specifically, a logic $\alpha $ is said to be as strong as a logic $\beta $ if every elementary class in $\beta $ is an elementary class in $\alpha $.[1] See also • Abstract logic • Lindström's theorem References 1. Heinz-Dieter Ebbinghaus Extended logics: the general framework in K. J. Barwise and S. Feferman, editors, Model-theoretic logics, 1985 ISBN 0-387-90936-2 page 43
Wikipedia
Strength of a graph In the branch of mathematics called graph theory, the strength of an undirected graph corresponds to the minimum ratio of edges removed to components created in a decomposition of the graph in question. It is a method to compute partitions of the set of vertices and detect zones of high concentration of edges, and is analogous to graph toughness, which is defined similarly for vertex removal. Strength of a graph: example. A graph with strength 2: the graph is here decomposed into three parts, with 4 edges between the parts, giving a ratio of 4/(3 − 1) = 2. Definitions The strength $\sigma (G)$ of an undirected simple graph G = (V, E) admits the three following definitions: • Let $\Pi $ be the set of all partitions of $V$, and $\partial \pi $ be the set of edges crossing over the sets of the partition $\pi \in \Pi $; then $\displaystyle \sigma (G)=\min _{\pi \in \Pi }{\frac {|\partial \pi |}{|\pi |-1}}$. • Also, if ${\mathcal {T}}$ is the set of all spanning trees of G, then $\sigma (G)=\max \left\{\sum _{T\in {\mathcal {T}}}\lambda _{T}\ :\ \forall T\in {\mathcal {T}}\ \lambda _{T}\geq 0{\mbox{ and }}\forall e\in E\ \sum _{T\ni e}\lambda _{T}\leq 1\right\}.$ • And by linear programming duality, $\sigma (G)=\min \left\{\sum _{e\in E}y_{e}\ :\ \forall e\in E\ y_{e}\geq 0{\mbox{ and }}\forall T\in {\mathcal {T}}\ \sum _{e\in T}y_{e}\geq 1\right\}.$ Complexity Computing the strength of a graph can be done in polynomial time; the first such algorithm was discovered by Cunningham (1985). The algorithm with the best known complexity for computing the strength exactly is due to Trubin (1993); it uses the flow decomposition of Goldberg and Rao (1998) and runs in time $O(\min({\sqrt {m}},n^{2/3})mn\log(n^{2}/m+2))$. Properties • If $\pi =\{V_{1},\dots ,V_{k}\}$ is a partition attaining the minimum in the first definition, and for $i\in \{1,\dots ,k\}$, $G_{i}$ is the restriction of G to the set $V_{i}$, then $\sigma (G_{i})\geq \sigma (G)$. • The Tutte–Nash-Williams theorem: $\lfloor \sigma (G)\rfloor $ is the maximum number of edge-disjoint spanning trees that can be contained in G. • Contrary to the graph partition problem, the partitions output by computing the strength are not necessarily balanced (i.e. of almost equal size). References • W. H. Cunningham. Optimal attack and reinforcement of a network, Journal of the ACM, 32:549–561, 1985. • A. Schrijver. Chapter 51. Combinatorial Optimization, Springer, 2003. • V. A. Trubin. Strength of a graph and packing of trees and branchings, Cybernetics and Systems Analysis, 29:379–384, 1993.
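To make the first definition concrete, the following brute-force sketch evaluates it directly (an illustration only, not an algorithm from the article; the helper `partitions` is a hypothetical name, and enumerating all partitions is feasible only for very small graphs):

```python
def partitions(s):
    """Yield all partitions of the set s, each as a list of blocks (sets)."""
    s = list(s)
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for part in partitions(rest):
        # put `first` into an existing block...
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        # ...or into its own new block
        yield part + [{first}]

def strength(vertices, edges):
    """sigma(G) = min over partitions pi with |pi| >= 2 of
    (number of edges crossing pi) / (|pi| - 1)."""
    best = float("inf")
    for part in partitions(vertices):
        if len(part) < 2:
            continue
        block = {v: i for i, blk in enumerate(part) for v in blk}
        crossing = sum(1 for u, v in edges if block[u] != block[v])
        best = min(best, crossing / (len(part) - 1))
    return best

# A triangle: the all-singletons partition gives 3/(3-1) = 1.5, the minimum,
# consistent with Tutte–Nash-Williams (floor 1.5 = 1 edge-disjoint spanning tree).
print(strength({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))  # 1.5
```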
Wikipedia
Stress resultants Stress resultants are simplified representations of the stress state in structural elements such as beams, plates, or shells.[1] The geometry of typical structural elements allows the internal stress state to be simplified because of the existence of a "thickness" direction in which the size of the element is much smaller than in other directions. As a consequence the three traction components that vary from point to point in a cross-section can be replaced with a set of resultant forces and resultant moments. These are the stress resultants (also called membrane forces, shear forces, and bending moments) that may be used to determine the detailed stress state in the structural element. A three-dimensional problem can then be reduced to a one-dimensional problem (for beams) or a two-dimensional problem (for plates and shells). Stress resultants are defined as integrals of stress over the thickness of a structural element. The integrals are weighted by integer powers of the thickness coordinate z (or x3). Stress resultants are defined so as to represent the effect of stress as a membrane force N (zeroth power of z) and a bending moment M (first power of z) on a beam or shell structure. Stress resultants are necessary to eliminate the z dependency of the stress from the equations of the theory of plates and shells. Stress resultants in beams Consider the element shown in the adjacent figure. Assume that the thickness direction is x3. If the element has been extracted from a beam, the width and thickness are comparable in size. Let x2 be the width direction. Then x1 is the length direction. Membrane and shear forces The resultant force vector due to the traction in the cross-section (A) perpendicular to the x1 axis is $\mathbf {F} _{1}=\int _{A}(\sigma _{11}\mathbf {e} _{1}+\sigma _{12}\mathbf {e} _{2}+\sigma _{13}\mathbf {e} _{3})\,dA$ where e1, e2, e3 are the unit vectors along x1, x2, and x3, respectively. We define the stress resultants such that $\mathbf {F} _{1}=:N_{11}\mathbf {e} _{1}+V_{2}\mathbf {e} _{2}+V_{3}\mathbf {e} _{3}$ where N11 is the membrane force and V2, V3 are the shear forces. More explicitly, for a beam of height t and width b, $N_{11}=\int _{-b/2}^{b/2}\int _{-t/2}^{t/2}\sigma _{11}\,dx_{3}\,dx_{2}\,.$ Similarly the shear force resultants are ${\begin{bmatrix}V_{2}\\V_{3}\end{bmatrix}}=\int _{-b/2}^{b/2}\int _{-t/2}^{t/2}{\begin{bmatrix}\sigma _{12}\\\sigma _{13}\end{bmatrix}}\,dx_{3}\,dx_{2}\,.$ Bending moments The bending moment vector due to stresses in the cross-section A perpendicular to the x1-axis is given by $\mathbf {M} _{1}=\int _{A}\mathbf {r} \times (\sigma _{11}\mathbf {e} _{1}+\sigma _{12}\mathbf {e} _{2}+\sigma _{13}\mathbf {e} _{3})\,dA\quad {\text{where}}\quad \mathbf {r} =x_{2}\,\mathbf {e} _{2}+x_{3}\,\mathbf {e} _{3}\,.$ Expanding this expression we have, $\mathbf {M} _{1}=\int _{A}\left(-x_{2}\sigma _{11}\mathbf {e} _{3}+x_{2}\sigma _{13}\mathbf {e} _{1}+x_{3}\sigma _{11}\mathbf {e} _{2}-x_{3}\sigma _{12}\mathbf {e} _{1}\right)dA=:M_{11}\,\mathbf {e} _{1}+M_{12}\,\mathbf {e} _{2}+M_{13}\,\mathbf {e} _{3}\,.$ We can write the bending moment resultant components as ${\begin{bmatrix}M_{11}\\M_{12}\\M_{13}\end{bmatrix}}:=\int _{-b/2}^{b/2}\int _{-t/2}^{t/2}{\begin{bmatrix}x_{2}\sigma _{13}-x_{3}\sigma _{12}\\x_{3}\sigma _{11}\\-x_{2}\sigma _{11}\end{bmatrix}}\,dx_{3}\,dx_{2}\,.$ Stress resultants in plates and shells For plates and shells, the x1 and x2 dimensions are much larger than the size in the x3 direction. 
Integration over the area of cross-section would have to include one of the larger dimensions and would lead to a model that is too simple for practical calculations. For this reason the stresses are only integrated through the thickness and the stress resultants are typically expressed in units of force per unit length (or moment per unit length) instead of the true force and moment as is the case for beams. Membrane and shear forces For plates and shells we have to consider two cross-sections. The first is perpendicular to the x1 axis and the second is perpendicular to the x2 axis. Following the same procedure as for beams, and keeping in mind that the resultants are now per unit length, we have $\mathbf {F} _{1}=\int _{-t/2}^{t/2}(\sigma _{11}\mathbf {e} _{1}+\sigma _{12}\mathbf {e} _{2}+\sigma _{13}\mathbf {e} _{3})\,dx_{3}\quad {\text{and}}\quad \mathbf {F} _{2}=\int _{-t/2}^{t/2}(\sigma _{12}\mathbf {e} _{1}+\sigma _{22}\mathbf {e} _{2}+\sigma _{23}\mathbf {e} _{3})\,dx_{3}$ We can write the above as $\mathbf {F} _{1}=N_{11}\mathbf {e} _{1}+N_{12}\mathbf {e} _{2}+V_{1}\mathbf {e} _{3}\quad {\text{and}}\quad \mathbf {F} _{2}=N_{12}\mathbf {e} _{1}+N_{22}\mathbf {e} _{2}+V_{2}\mathbf {e} _{3}$ where the membrane forces are defined as ${\begin{bmatrix}N_{11}\\N_{22}\\N_{12}\end{bmatrix}}:=\int _{-t/2}^{t/2}{\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}\,dx_{3}$ and the shear forces are defined as ${\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}=\int _{-t/2}^{t/2}{\begin{bmatrix}\sigma _{13}\\\sigma _{23}\end{bmatrix}}\,dx_{3}\,.$ Bending moments For the bending moment resultants, we have $\mathbf {M} _{1}=\int _{-t/2}^{t/2}\mathbf {r} \times (\sigma _{11}\mathbf {e} _{1}+\sigma _{12}\mathbf {e} _{2}+\sigma _{13}\mathbf {e} _{3})\,dx_{3}\quad {\text{and}}\quad \mathbf {M} _{2}=\int _{-t/2}^{t/2}\mathbf {r} \times (\sigma _{12}\mathbf {e} _{1}+\sigma _{22}\mathbf {e} _{2}+\sigma _{23}\mathbf {e} _{3})\,dx_{3}$ where r = x3 e3. Expanding these expressions we have, $\mathbf {M} _{1}=\int _{-t/2}^{t/2}[-x_{3}\sigma _{12}\mathbf {e} _{1}+x_{3}\sigma _{11}\mathbf {e} _{2}]\,dx_{3}\quad {\text{and}}\quad \mathbf {M} _{2}=\int _{-t/2}^{t/2}[-x_{3}\sigma _{22}\mathbf {e} _{1}+x_{3}\sigma _{12}\mathbf {e} _{2}]\,dx_{3}$ Define the bending moment resultants such that $\mathbf {M} _{1}=:-M_{12}\mathbf {e} _{1}+M_{11}\mathbf {e} _{2}\quad {\text{and}}\quad \mathbf {M} _{2}=:-M_{22}\mathbf {e} _{1}+M_{12}\mathbf {e} _{2}\,.$ Then, the bending moment resultants are given by ${\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}:=\int _{-t/2}^{t/2}x_{3}\,{\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}\,dx_{3}\,.$ These are the resultants that are often found in the literature but care has to be taken to make sure that the signs are correctly interpreted. See also • Shear force • Bending moment • Plate theory • Bending of plates • Kirchhoff–Love plate theory • Mindlin–Reissner plate theory • Vibration of plates References 1. Barbero, Ever J. (2010). Introduction to composite materials design. Boca Raton, FL: CRC Press. ISBN 978-1-4200-7915-9.
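As a numerical illustration of the plate definitions above, the following sketch integrates a through-thickness stress profile to obtain N11 and M11 per unit length. The thickness, stress value, and the assumed linear (pure-bending) profile are illustrative choices, not quantities from the article:

```python
import numpy as np

# Illustrative values (assumptions): plate thickness t and the peak
# bending stress at the surfaces x3 = +/- t/2.
t = 0.01           # thickness [m]
sigma_max = 200e6  # peak bending stress [Pa]

x3 = np.linspace(-t / 2, t / 2, 2001)
sigma_11 = sigma_max * (2.0 * x3 / t)   # pure bending: linear in x3

def integrate(y, x):
    """Trapezoidal rule, written out to keep the sketch self-contained."""
    return float(np.sum((y[:-1] + y[1:]) / 2.0 * np.diff(x)))

# Zeroth and first moments of the stress through the thickness give the
# membrane force N11 and bending moment M11, both per unit length.
N11 = integrate(sigma_11, x3)
M11 = integrate(x3 * sigma_11, x3)

print(N11)                        # ~0: pure bending carries no membrane force
print(M11, sigma_max * t**2 / 6)  # matches the classical sigma_max * t^2 / 6
```

For this linear profile the membrane force vanishes and the moment reduces to the classical value $\sigma_{\max}t^{2}/6$, illustrating how the two resultants separate the mean and the linearly varying parts of the stress.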
Wikipedia
Stretching field In applied mathematics, stretching fields provide the local deformation of an infinitesimal circular fluid element over a finite time interval ∆t. The logarithm of the stretching, divided by ∆t, gives the finite-time Lyapunov exponent λ for the separation of nearby fluid elements at each point in a flow. For periodic two-dimensional flows, stretching fields have been shown to be closely related to the mixing of a passive scalar concentration field. Until recently, however, the extension of these ideas to systems that are non-periodic or weakly turbulent has been possible only in numerical simulations.
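The finite-time Lyapunov exponent described above can be estimated numerically. The following sketch is an illustration under stated assumptions (the sample velocity field, the explicit Euler integrator, and the finite-difference step are all choices made here, not part of the article): it advects four neighboring points, builds a finite-difference flow-map Jacobian F, and takes λ = ln(largest stretching of F)/∆t:

```python
import numpy as np

def velocity(x, y):
    # a simple steady cellular flow, chosen only as a demo
    return (-np.pi * np.sin(np.pi * x) * np.cos(np.pi * y),
             np.pi * np.cos(np.pi * x) * np.sin(np.pi * y))

def flow_map(x0, y0, dt=1.0, steps=200):
    """Advect a point for time dt with explicit Euler steps."""
    x, y, h = x0, y0, dt / steps
    for _ in range(steps):
        u, v = velocity(x, y)
        x, y = x + h * u, y + h * v
    return x, y

def ftle(x0, y0, dt=1.0, eps=1e-4):
    """lambda = (1/dt) * log of the largest stretching of an infinitesimal
    circle, from a centered-difference flow-map Jacobian."""
    xr, yr = flow_map(x0 + eps, y0, dt); xl, yl = flow_map(x0 - eps, y0, dt)
    xu, yu = flow_map(x0, y0 + eps, dt); xd, yd = flow_map(x0, y0 - eps, dt)
    F = np.array([[xr - xl, xu - xd],
                  [yr - yl, yu - yd]]) / (2 * eps)
    # the largest singular value of F is the largest stretching factor
    stretch = np.linalg.svd(F, compute_uv=False)[0]
    return np.log(stretch) / dt

print(ftle(0.3, 0.2))
```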
Wikipedia
Strichartz estimate In mathematical analysis, Strichartz estimates are a family of inequalities for linear dispersive partial differential equations. These inequalities establish size and decay of solutions in mixed norm Lebesgue spaces. They were first noted by Robert Strichartz and arose out of connections to the Fourier restriction problem.[1] Examples Consider the linear Schrödinger equation in $\mathbb {R} ^{d}$ with $\hbar =m=1$. Then the solution for initial data $u_{0}$ is given by $e^{it\Delta /2}u_{0}$. Let q and r be real numbers satisfying $2\leq q,r\leq \infty $; ${\frac {2}{q}}+{\frac {d}{r}}={\frac {d}{2}}$; and $(q,r,d)\neq (2,\infty ,2)$. For example, in dimension d = 3 the pair (q, r) = (2, 6) satisfies these conditions, since 2/2 + 3/6 = 3/2 = d/2. In this case the homogeneous Strichartz estimates take the form:[2] $\|e^{it\Delta /2}u_{0}\|_{L_{t}^{q}L_{x}^{r}}\leq C_{d,q,r}\|u_{0}\|_{L_{x}^{2}}.$ Further suppose that ${\tilde {q}},{\tilde {r}}$ satisfy the same restrictions as $q,r$ and ${\tilde {q}}',{\tilde {r}}'$ are their dual exponents; then the dual homogeneous Strichartz estimates take the form:[2] $\left\|\int _{\mathbb {R} }e^{-is\Delta /2}F(s)\,ds\right\|_{L_{x}^{2}}\leq C_{d,{\tilde {q}},{\tilde {r}}}\|F\|_{L_{t}^{{\tilde {q}}'}L_{x}^{{\tilde {r}}'}}.$ The inhomogeneous Strichartz estimates are:[2] $\left\|\int _{s<t}e^{i(t-s)\Delta /2}F(s)\,ds\right\|_{L_{t}^{q}L_{x}^{r}}\leq C_{d,q,r,{\tilde {q}},{\tilde {r}}}\|F\|_{L_{t}^{{\tilde {q}}'}L_{x}^{{\tilde {r}}'}}.$ References 1. R.S. Strichartz (1977), "Restriction of Fourier Transform to Quadratic Surfaces and Decay of Solutions of Wave Equations", Duke Math. J., 44 (3): 705–713, doi:10.1215/s0012-7094-77-04430-1 2. Tao, Terence (2006), Nonlinear dispersive equations: Local and global analysis, CBMS Regional Conference Series in Mathematics, vol. 106, ISBN 978-0-8218-4143-3
Wikipedia
Epigraph (mathematics) In mathematics, the epigraph or supergraph[1] of a function $f:X\to [-\infty ,\infty ]$ valued in the extended real numbers $[-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}$ is the set, denoted by $\operatorname {epi} f,$ of all points in the Cartesian product $X\times \mathbb {R} $ lying on or above its graph.[2] The strict epigraph $\operatorname {epi} _{S}f$ is the set of points in $X\times \mathbb {R} $ lying strictly above its graph. Importantly, although both the graph and epigraph of $f$ consist of points in $X\times [-\infty ,\infty ],$ the epigraph consists entirely of points in the subset $X\times \mathbb {R} ,$ which is not necessarily true of the graph of $f.$ If the function takes $\pm \infty $ as a value then $\operatorname {graph} f$ will not be a subset of its epigraph $\operatorname {epi} f.$ For example, if $f\left(x_{0}\right)=\infty $ then the point $\left(x_{0},f\left(x_{0}\right)\right)=\left(x_{0},\infty \right)$ will belong to $\operatorname {graph} f$ but not to $\operatorname {epi} f.$ These two sets are nevertheless closely related because the graph can always be reconstructed from the epigraph, and vice versa. The study of continuous real-valued functions in real analysis has traditionally been closely associated with the study of their graphs, which are sets that provide geometric information (and intuition) about these functions.[2] Epigraphs serve this same purpose in the fields of convex analysis and variational analysis, in which the primary focus is on convex functions valued in $[-\infty ,\infty ]$ instead of continuous functions valued in a vector space (such as $\mathbb {R} $ or $\mathbb {R} ^{2}$).[2] This is because in general, for such functions, geometric intuition is more readily obtained from a function's epigraph than from its graph.[2] Similarly to how graphs are used in real analysis, the epigraph can often be used to give geometrical interpretations of a convex function's properties, to help formulate or prove hypotheses, or to aid in constructing counterexamples. Definition The definition of the epigraph was inspired by that of the graph of a function, where the graph of $f:X\to Y$ is defined to be the set $\operatorname {graph} f:=\left\{(x,y)\in X\times Y~:~y=f(x)\right\}.$ The epigraph or supergraph of a function $f:X\to [-\infty ,\infty ]$ valued in the extended real numbers $[-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}$ is the set[2] ${\begin{alignedat}{4}\operatorname {epi} f&=\left\{(x,r)\in X\times \mathbb {R} ~:~r\geq f(x)\right\}\\&=\left[f^{-1}(-\infty )\times \mathbb {R} \right]\cup \bigcup _{x\in f^{-1}(\mathbb {R} )}\{x\}\times [f(x),\infty )~~~{\text{ (all sets being unioned are pairwise disjoint). }}\end{alignedat}}$ In the union over $x\in f^{-1}(\mathbb {R} )$ that appears above on the right hand side of the last line, the set $\{x\}\times [f(x),\infty )$ may be interpreted as being a "vertical ray" consisting of $(x,f(x))$ and all points in $X\times \mathbb {R} $ "directly above" it. Similarly, the set of points on or below the graph of a function is its hypograph. The strict epigraph is the epigraph with the graph removed: ${\begin{alignedat}{4}\operatorname {epi} _{S}f&=\left\{(x,r)\in X\times \mathbb {R} ~:~r>f(x)\right\}\\&=\operatorname {epi} f\setminus \operatorname {graph} f\\&=\bigcup _{x\in X}\{x\}\times (f(x),\infty )~~~{\text{ (all sets being unioned are pairwise disjoint, some may be empty). }}\end{alignedat}}$
Relationships with other sets Despite the fact that $f$ might take one (or both) of $\pm \infty $ as a value (in which case its graph would not be a subset of $X\times \mathbb {R} $), the epigraph of $f$ is nevertheless defined to be a subset of $X\times \mathbb {R} $ rather than of $X\times [-\infty ,\infty ].$ This is intentional because when $X$ is a vector space then so is $X\times \mathbb {R} $ but $X\times [-\infty ,\infty ]$ is never a vector space[2] (since the extended real number line $[-\infty ,\infty ]$ is not a vector space). More generally, if $X$ is only a non-empty subset of some vector space then $X\times [-\infty ,\infty ]$ is never even a subset of any vector space. The epigraph being a subset of a vector space allows for tools related to real analysis and functional analysis (and other fields) to be more readily applied. The domain (rather than the codomain) of the function is not particularly important for this definition; it can be any linear space[1] or even an arbitrary set[3] instead of $\mathbb {R} ^{n}$. The strict epigraph $\operatorname {epi} _{S}f$ and the graph $\operatorname {graph} f$ are always disjoint. The epigraph of a function $f:X\to [-\infty ,\infty ]$ is related to its graph and strict epigraph by $\,\operatorname {epi} f\,\subseteq \,\operatorname {epi} _{S}f\,\cup \,\operatorname {graph} f$ where set equality holds if and only if $f$ is real-valued. However, $\operatorname {epi} f=\left[\operatorname {epi} _{S}f\,\cup \,\operatorname {graph} f\right]\,\cap \,\left[X\times \mathbb {R} \right]$ always holds. Reconstructing functions from epigraphs The epigraph is empty if and only if the function is identically equal to infinity. Just as any function can be reconstructed from its graph, so too can any extended real-valued function $f$ on $X$ be reconstructed from its epigraph $E:=\operatorname {epi} f$ (even when $f$ takes on $\pm \infty $ as a value). Given $x\in X,$ the value $f(x)$ can be reconstructed from the intersection $E\cap \left(\{x\}\times \mathbb {R} \right)$ of $E$ with the "vertical line" $\{x\}\times \mathbb {R} $ passing through $x$ as follows: • case 1: $E\cap \left(\{x\}\times \mathbb {R} \right)=\varnothing $ if and only if $f(x)=\infty ,$ • case 2: $E\cap \left(\{x\}\times \mathbb {R} \right)=\{x\}\times \mathbb {R} $ if and only if $f(x)=-\infty ,$ • case 3: otherwise, $E\cap \left(\{x\}\times \mathbb {R} \right)$ is necessarily of the form $\{x\}\times [f(x),\infty ),$ from which the value of $f(x)$ can be obtained by taking the infimum of the interval. The above observations can be combined to give a single formula for $f(x)$ in terms of $E:=\operatorname {epi} f.$ Specifically, for any $x\in X,$ $f(x)=\inf\{r\in \mathbb {R} ~:~(x,r)\in E\}$ where by definition, $\inf \varnothing :=\infty .$ This same formula can also be used to reconstruct $f$ from its strict epigraph $E:=\operatorname {epi} _{S}f.$ Relationships between properties of functions and their epigraphs A function is convex if and only if its epigraph is a convex set. The epigraph of a real affine function $g:\mathbb {R} ^{n}\to \mathbb {R} $ is a halfspace in $\mathbb {R} ^{n+1}.$ A function is lower semicontinuous if and only if its epigraph is closed. See also • Effective domain • Hypograph (mathematics) • Proper convex function Citations 1. Pekka Neittaanmäki; Sergey R. 
Repin (2004). Reliable Methods for Computer Simulation: Error Control and Posteriori Estimates. Elsevier. p. 81. ISBN 978-0-08-054050-4. 2. Rockafellar & Wets 2009, pp. 1–37. 3. Charalambos D. Aliprantis; Kim C. Border (2007). Infinite Dimensional Analysis: A Hitchhiker's Guide (3rd ed.). Springer Science & Business Media. p. 8. ISBN 978-3-540-32696-0. References • Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544. • Rockafellar, Ralph Tyrell (1996), Convex Analysis, Princeton University Press, Princeton, NJ. ISBN 0-691-01586-4.
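To illustrate the reconstruction formula $f(x)=\inf\{r:(x,r)\in \operatorname{epi} f\}$ discussed above, here is a minimal numerical sketch on a finite grid (the helper names and the sample function are illustrative choices; on a grid the infimum becomes a minimum):

```python
import math

def epigraph(f, xs, rs):
    """Sampled epigraph: all grid points (x, r) with r >= f(x)."""
    return {(x, r) for x in xs for r in rs if r >= f(x)}

def reconstruct(E, x):
    """f(x) = inf of the vertical slice of E at x (inf of empty set = +inf)."""
    slice_r = [r for (x0, r) in E if x0 == x]
    return min(slice_r) if slice_r else math.inf

f = lambda x: abs(x)                  # convex, so its epigraph is a convex set
xs = [x / 2 for x in range(-4, 5)]    # grid: -2.0, -1.5, ..., 2.0
rs = [r / 2 for r in range(0, 11)]    # grid: 0.0, 0.5, ..., 5.0

E = epigraph(f, xs, rs)
print(all(reconstruct(E, x) == f(x) for x in xs))  # True on this grid
```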
Wikipedia
Hypograph (mathematics) In mathematics, the hypograph or subgraph of a function $f:\mathbb {R} ^{n}\rightarrow \mathbb {R} $ is the set of points lying on or below its graph. A related definition is that of such a function's epigraph, which is the set of points on or above the function's graph. The domain (rather than the codomain) of the function is not particularly important for this definition; it can be an arbitrary set[1] instead of $\mathbb {R} ^{n}$. Definition The definition of the hypograph was inspired by that of the graph of a function, where the graph of $f:X\to Y$ is defined to be the set $\operatorname {graph} f:=\left\{(x,y)\in X\times Y~:~y=f(x)\right\}.$ The hypograph or subgraph of a function $f:X\to [-\infty ,\infty ]$ valued in the extended real numbers $[-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}$ is the set[2] ${\begin{alignedat}{4}\operatorname {hyp} f&=\left\{(x,r)\in X\times \mathbb {R} ~:~r\leq f(x)\right\}\\&=\left[f^{-1}(\infty )\times \mathbb {R} \right]\cup \bigcup _{x\in f^{-1}(\mathbb {R} )}\{x\}\times (-\infty ,f(x)].\end{alignedat}}$ Similarly, the set of points on or above the function is its epigraph. The strict hypograph is the hypograph with the graph removed: ${\begin{alignedat}{4}\operatorname {hyp} _{S}f&=\left\{(x,r)\in X\times \mathbb {R} ~:~r<f(x)\right\}\\&=\operatorname {hyp} f\setminus \operatorname {graph} f\\&=\bigcup _{x\in X}\{x\}\times (-\infty ,f(x)).\end{alignedat}}$ Despite the fact that $f$ might take one (or both) of $\pm \infty $ as a value (in which case its graph would not be a subset of $X\times \mathbb {R} $), the hypograph of $f$ is nevertheless defined to be a subset of $X\times \mathbb {R} $ rather than of $X\times [-\infty ,\infty ].$ Properties The hypograph of a function $f$ is empty if and only if $f$ is identically equal to negative infinity. A function is concave if and only if its hypograph is a convex set. The hypograph of a real affine function $g:\mathbb {R} ^{n}\to \mathbb {R} $ is a halfspace in $\mathbb {R} ^{n+1}.$ A function is upper semicontinuous if and only if its hypograph is closed. See also • Effective domain • Epigraph (mathematics) – the set of points lying on or above the graph of a function • Proper convex function Citations 1. Charalambos D. Aliprantis; Kim C. Border (2007). Infinite Dimensional Analysis: A Hitchhiker's Guide (3rd ed.). Springer Science & Business Media. pp. 8–9. ISBN 978-3-540-32696-0. 2. Rockafellar & Wets 2009, pp. 1–37. References • Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544. 
Wikipedia
Strict initial object In the mathematical discipline of category theory, a strict initial object is an initial object 0 of a category C with the property that every morphism in C with codomain 0 is an isomorphism. For example, in the category of sets the initial object, the empty set, is strict: any function into the empty set must itself have empty domain, and the unique map from the empty set to itself is an isomorphism. In a Cartesian closed category, every initial object is strict.[1] Also, if C is a distributive or extensive category, then the initial object 0 of C is strict.[2] References 1. McLarty, Colin (4 June 1992). Elementary Categories, Elementary Toposes. Clarendon Press. ISBN 0191589497. Retrieved 13 February 2017. 2. Carboni, Aurelio; Lack, Stephen; Walters, R.F.C. (3 February 1993). "Introduction to extensive and distributive categories". Journal of Pure and Applied Algebra. 84 (2): 145–158. doi:10.1016/0022-4049(93)90035-R. External links • Strict initial object at the nLab
Wikipedia
Partially ordered set In mathematics, especially order theory, a partial order on a set is an arrangement such that, for certain pairs of elements, one precedes the other. The word partial is used to indicate that not every pair of elements needs to be comparable; that is, there may be pairs for which neither element precedes the other. Partial orders thus generalize total orders, in which every pair is comparable.

Transitive binary relations (Y: the column's property always holds for the row's term; ✗: the property is not guaranteed in general, i.e. it might, or might not, hold):
Relation | Symmetric | Antisymmetric | Connected (total, semiconnex) | Well-founded | Has joins | Has meets | Reflexive | Irreflexive (anti-reflexive) | Asymmetric
Equivalence relation | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗
Preorder (quasiorder) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗
Partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | Y | ✗ | ✗
Total preorder | ✗ | ✗ | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗
Total order | ✗ | Y | Y | ✗ | ✗ | ✗ | Y | ✗ | ✗
Prewellordering | ✗ | ✗ | Y | Y | ✗ | ✗ | Y | ✗ | ✗
Well-quasi-ordering | ✗ | ✗ | ✗ | Y | ✗ | ✗ | Y | ✗ | ✗
Well-ordering | ✗ | Y | Y | Y | ✗ | ✗ | Y | ✗ | ✗
Lattice | ✗ | Y | ✗ | ✗ | Y | Y | Y | ✗ | ✗
Join-semilattice | ✗ | Y | ✗ | ✗ | Y | ✗ | Y | ✗ | ✗
Meet-semilattice | ✗ | Y | ✗ | ✗ | ✗ | Y | Y | ✗ | ✗
Strict partial order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y
Strict weak order | ✗ | Y | ✗ | ✗ | ✗ | ✗ | ✗ | Y | Y
Strict total order | ✗ | Y | Y | ✗ | ✗ | ✗ | ✗ | Y | Y

Definitions, for all $a,b$ and $S\neq \varnothing $: symmetric means $aRb\Rightarrow bRa$; antisymmetric means $aRb{\text{ and }}bRa\Rightarrow a=b$; connected means $a\neq b\Rightarrow aRb{\text{ or }}bRa$; well-founded means $\min S$ exists; has joins means $a\vee b$ exists; has meets means $a\wedge b$ exists; reflexive means $aRa$; irreflexive means ${\text{not }}aRa$; asymmetric means $aRb\Rightarrow {\text{not }}bRa$. For example, that every equivalence relation is symmetric, but not necessarily antisymmetric, is indicated by Y in the "Symmetric" column and ✗ in the "Antisymmetric" column, respectively. All definitions tacitly require the homogeneous relation $R$ to be transitive: for all $a,b,c,$ if $aRb$ and $bRc$ then $aRc.$ A term's definition may require additional properties that are not listed in this table.

Formally, a partial order is a homogeneous binary relation that is reflexive, transitive and antisymmetric. A partially ordered set (poset for short) is a set on which a partial order is defined. Partial order relations The term partial order usually refers to the reflexive partial order relations, referred to in this article as non-strict partial orders. However, some authors use the term for the other common type of partial order relations, the irreflexive partial order relations, also called strict partial orders. Strict and non-strict partial orders can be put into a one-to-one correspondence, so for every strict partial order there is a unique corresponding non-strict partial order, and vice versa. Partial orders A reflexive, weak,[1] or non-strict partial order,[2] commonly referred to simply as a partial order, is a homogeneous relation ≤ on a set $P$ that is reflexive, antisymmetric, and transitive. That is, for all $a,b,c\in P,$ it must satisfy: 1. Reflexivity: $a\leq a$, i.e. every element is related to itself. 2. Antisymmetry: if $a\leq b$ and $b\leq a$ then $a=b$, i.e. no two distinct elements precede each other. 3. Transitivity: if $a\leq b$ and $b\leq c$ then $a\leq c$. A non-strict partial order is also known as an antisymmetric preorder. 
Strict partial orders An irreflexive, strong,[1] or strict partial order is a homogeneous relation < on a set $P$ that is irreflexive, asymmetric, and transitive; that is, it satisfies the following conditions for all $a,b,c\in P:$ 1. Irreflexivity: not $a<a$, i.e. no element is related to itself (also called anti-reflexive). 2. Asymmetry: if $a<b$ then not $b<a$. 3. Transitivity: if $a<b$ and $b<c$ then $a<c$. Irreflexivity and transitivity together imply asymmetry. Also, asymmetry implies irreflexivity. In other words, a transitive relation is asymmetric if and only if it is irreflexive.[3] So the definition is unchanged if either irreflexivity or asymmetry (but not both) is omitted. A strict partial order is also known as an asymmetric strict preorder. Correspondence of strict and non-strict partial order relations Strict and non-strict partial orders on a set $P$ are closely related. A non-strict partial order $\leq $ may be converted to a strict partial order by removing all relationships of the form $a\leq a;$ that is, the strict partial order is the set $<\;:=\ \leq \ \setminus \ \Delta _{P}$ where $\Delta _{P}:=\{(p,p):p\in P\}$ is the identity relation on $P\times P$ and $\;\setminus \;$ denotes set subtraction. Conversely, a strict partial order < on $P$ may be converted to a non-strict partial order by adjoining all relationships of that form; that is, $\leq \;:=\;\Delta _{P}\;\cup \;<\;$ is a non-strict partial order. Thus, if $\leq $ is a non-strict partial order, then the corresponding strict partial order < is the irreflexive kernel given by $a<b{\text{ if }}a\leq b{\text{ and }}a\neq b.$ Conversely, if < is a strict partial order, then the corresponding non-strict partial order $\leq $ is the reflexive closure given by: $a\leq b{\text{ if }}a<b{\text{ or }}a=b.$ Dual orders Main article: Duality (order theory) The dual (or opposite) $R^{\text{op}}$ of a partial order relation $R$ is defined by letting $R^{\text{op}}$ be the converse relation of $R$, i.e. $xR^{\text{op}}y$ if and only if $yRx$. The dual of a non-strict partial order is a non-strict partial order,[4] and the dual of a strict partial order is a strict partial order. The dual of a dual of a relation is the original relation. Notation Given a set $P$ and a partial order relation, typically the non-strict partial order $\leq $, we may uniquely extend our notation to define four partial order relations $\leq ,<,\geq ,{\text{ and }}>$, where $\leq $ is a non-strict partial order relation on $P$, $<$ is the associated strict partial order relation on $P$ (the irreflexive kernel of $\leq $), $\geq $ is the dual of $\leq $, and $>$ is the dual of $<$. Strictly speaking, the term partially ordered set refers to a set with all of these relations defined appropriately. But practically, one need only consider a single relation, $(P,\leq )$ or $(P,<)$, or, in rare instances, the strict and non-strict relations together, $(P,\leq ,<)$.[5] The term ordered set is sometimes used as a shorthand for partially ordered set, as long as it is clear from the context that no other kind of order is meant. In particular, totally ordered sets can also be referred to as "ordered sets", especially in areas where these structures are more common than posets. Some authors use symbols other than $\leq $, such as $\sqsubseteq $[6] or $\preceq $,[7] to distinguish partial orders from total orders. When referring to partial orders, $\leq $ should not be taken as the complement of $>$. 
The relation $>$ is the converse of the irreflexive kernel of $\leq $, which is always a subset of the complement of $\leq $, but $>$ is equal to the complement of $\leq $ if, and only if, $\leq $ is a total order.[lower-alpha 1] Alternative definitions Another way of defining a partial order, found in computer science, is via a notion of comparison. Specifically, given $\leq ,<,\geq ,{\text{ and }}>$ as defined previously, it can be observed that two elements x and y may stand in any of four mutually exclusive relationships to each other: either x < y, or x = y, or x > y, or x and y are incomparable. This can be represented by a function ${\text{compare}}:P\times P\to \{<,>,=,\vert \}$ that returns one of four codes when given two elements.[8][9] This definition is equivalent to a partial order on a setoid, where equality is taken to be a defined equivalence relation rather than the primitive notion of set equality.[10] Wallis defines a more general notion of a partial order relation as any homogeneous relation that is transitive and antisymmetric. This includes both reflexive and irreflexive partial orders as subtypes.[1] A finite poset can be visualized through its Hasse diagram.[11] Specifically, taking a strict partial order relation $(P,<)$, a directed acyclic graph (DAG) may be constructed by taking each element of $P$ to be a node and each element of $<$ to be an edge. The transitive reduction of this DAG[lower-alpha 2] is then the Hasse diagram. Similarly this process can be reversed to construct strict partial orders from certain DAGs. In contrast, the graph associated to a non-strict partial order has self-loops at every node and therefore is not a DAG; when a non-strict order is said to be depicted by a Hasse diagram, actually the corresponding strict order is shown. Examples Standard examples of posets arising in mathematics include: • The real numbers, or in general any totally ordered set, ordered by the standard less-than-or-equal relation ≤, is a partial order. • On the real numbers $\mathbb {R} $, the usual less than relation < is a strict partial order. The same is also true of the usual greater than relation > on $\mathbb {R} $. • By definition, every strict weak order is a strict partial order. • The set of subsets of a given set (its power set) ordered by inclusion (see Fig.1). Similarly, the set of sequences ordered by subsequence, and the set of strings ordered by substring. • The set of natural numbers equipped with the relation of divisibility. (see Fig.3 and Fig.6) • The vertex set of a directed acyclic graph ordered by reachability. • The set of subspaces of a vector space ordered by inclusion. • For a partially ordered set P, the sequence space containing all sequences of elements from P, where sequence a precedes sequence b if every item in a precedes the corresponding item in b. Formally, $\left(a_{n}\right)_{n\in \mathbb {N} }\leq \left(b_{n}\right)_{n\in \mathbb {N} }$ if and only if $a_{n}\leq b_{n}$ for all $n\in \mathbb {N} $; that is, a componentwise order. • For a set X and a partially ordered set P, the function space containing all functions from X to P, where f ≤ g if and only if f(x) ≤ g(x) for all $x\in X.$ • A fence, a partially ordered set defined by an alternating sequence of order relations a < b > c < d ... • The set of events in special relativity and, in most cases,[lower-alpha 3] general relativity, where for two events X and Y, X ≤ Y if and only if Y is in the future light cone of X. An event Y can only be causally affected by X if X ≤ Y. 
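The four-valued comparison described in the alternative definition above is easy to realize for a concrete poset. A small sketch (an illustration, not from the article) for divisibility on the positive integers, returning one of the four codes <, >, =, |:

```python
def compare(x: int, y: int) -> str:
    """Compare positive integers under divisibility.
    Returns '<', '>', '=', or '|' (incomparable)."""
    if x == y:
        return "="
    if y % x == 0:
        return "<"   # x strictly divides y
    if x % y == 0:
        return ">"   # y strictly divides x
    return "|"       # neither divides the other

print(compare(2, 12))  # '<'  (2 divides 12)
print(compare(12, 6))  # '>'  (6 divides 12)
print(compare(4, 6))   # '|'  (incomparable)
```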
One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy. Some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable, with neither being a descendant of the other. Orders on the Cartesian product of partially ordered sets Fig. 4a Lexicographic order on $\mathbb {N} \times \mathbb {N} $ Fig. 4b Product order on $\mathbb {N} \times \mathbb {N} $ Fig. 4c Reflexive closure of strict direct product order on $\mathbb {N} \times \mathbb {N} .$ Elements covered by (3, 3) and covering (3, 3) are highlighted in green and red, respectively. In order of increasing strength, i.e., decreasing sets of pairs, three of the possible partial orders on the Cartesian product of two partially ordered sets are (see Fig.4): • the lexicographical order:   (a, b) ≤ (c, d) if a < c or (a = c and b ≤ d); • the product order:   (a, b) ≤ (c, d) if a ≤ c and b ≤ d; • the reflexive closure of the direct product of the corresponding strict orders:   (a, b) ≤ (c, d) if (a < c and b < d) or (a = c and b = d). All three can similarly be defined for the Cartesian product of more than two sets. Applied to ordered vector spaces over the same field, the result is in each case also an ordered vector space. See also orders on the Cartesian product of totally ordered sets. Sums of partially ordered sets Another way to combine two (disjoint) posets is the ordinal sum[12] (or linear sum),[13] Z = X ⊕ Y, defined on the union of the underlying sets X and Y by the order a ≤Z b if and only if: • a, b ∈ X with a ≤X b, or • a, b ∈ Y with a ≤Y b, or • a ∈ X and b ∈ Y. If two posets are well-ordered, then so is their ordinal sum.[14] Series-parallel partial orders are formed from the ordinal sum operation (in this context called series composition) and another operation called parallel composition. Parallel composition is the disjoint union of two partially ordered sets, with no order relation between elements of one set and elements of the other set. Derived notions The examples use the poset $({\mathcal {P}}(\{x,y,z\}),\subseteq )$ consisting of the set of all subsets of a three-element set $\{x,y,z\},$ ordered by set inclusion (see Fig.1). • a is related to b when a ≤ b. This does not imply that b is also related to a, because the relation need not be symmetric. For example, $\{x\}$ is related to $\{x,y\},$ but not the reverse. • a and b are comparable if a ≤ b or b ≤ a. Otherwise they are incomparable. For example, $\{x\}$ and $\{x,y,z\}$ are comparable, while $\{x\}$ and $\{y\}$ are not. • A total order or linear order is a partial order under which every pair of elements is comparable, i.e. trichotomy holds. For example, the natural numbers with their standard order. • A chain is a subset of a poset that is a totally ordered set. For example, $\{\{\,\},\{x\},\{x,y,z\}\}$ is a chain. • An antichain is a subset of a poset in which no two distinct elements are comparable. For example, the set of singletons $\{\{x\},\{y\},\{z\}\}.$ • An element a is said to be strictly less than an element b, if a ≤ b and $a\neq b.$ For example, $\{x\}$ is strictly less than $\{x,y\}.$ • An element a is said to be covered by another element b, written a ⋖ b (or a <: b), if a is strictly less than b and no third element c fits between them; formally: if both a ≤ b and $a\neq b$ are true, and a ≤ c ≤ b is false for each c with $a\neq c\neq b.$ Using the strict order <, the relation a ⋖ b can be equivalently rephrased as "a < b but not a < c < b for any c". 
For example, $\{x\}$ is covered by $\{x,z\},$ but is not covered by $\{x,y,z\}.$ Extrema There are several notions of "greatest" and "least" element in a poset $P,$ notably: • Greatest element and least element: An element $g\in P$ is a greatest element if $a\leq g$ for every element $a\in P.$ An element $m\in P$ is a least element if $m\leq a$ for every element $a\in P.$ A poset can have at most one greatest element and at most one least element. In our running example, the set $\{x,y,z\}$ is the greatest element, and $\{\,\}$ is the least. • Maximal elements and minimal elements: An element $g\in P$ is a maximal element if there is no element $a\in P$ such that $a>g.$ Similarly, an element $m\in P$ is a minimal element if there is no element $a\in P$ such that $a<m.$ If a poset has a greatest element, it must be the unique maximal element, but otherwise there can be more than one maximal element, and similarly for least elements and minimal elements. In our running example, $\{x,y,z\}$ and $\{\,\}$ are the maximal and minimal elements. Removing these, there are 3 maximal elements and 3 minimal elements (see Fig.5). • Upper and lower bounds: For a subset A of P, an element x in P is an upper bound of A if a ≤ x, for each element a in A. In particular, x need not be in A to be an upper bound of A. Similarly, an element x in P is a lower bound of A if a ≥ x, for each element a in A. A greatest element of P is an upper bound of P itself, and a least element is a lower bound of P. In our example, the set $\{x,y\}$ is an upper bound for the collection of elements $\{\{x\},\{y\}\}.$ As another example, consider the positive integers, ordered by divisibility: 1 is a least element, as it divides all other elements; on the other hand this poset does not have a greatest element. This partially ordered set does not even have any maximal elements, since any g divides, for instance, 2g, which is distinct from it, so g is not maximal. If the number 1 is excluded, while keeping divisibility as ordering on the elements greater than 1, then the resulting poset does not have a least element, but any prime number is a minimal element for it. In this poset, 60 is an upper bound (though not a least upper bound) of the subset $\{2,3,5,10\},$ which does not have any lower bound (since 1 is not in the poset); on the other hand 2 is a lower bound of the subset of powers of 2, which does not have any upper bound. If the number 0 is added, this will be the greatest element, since this is a multiple of every integer (see Fig.6). Mappings between partially ordered sets Fig.7a Order-preserving, but not order-reflecting (since f(u) ≼ f(v), but not u $\leq $ v) map. Fig.7b Order isomorphism between the divisors of 120 (partially ordered by divisibility) and the divisor-closed subsets of {2, 3, 4, 5, 8} (partially ordered by set inclusion) Given two partially ordered sets (S, ≤) and (T, ≼), a function $f:S\to T$ is called order-preserving, or monotone, or isotone, if for all $x,y\in S,$ $x\leq y$ implies f(x) ≼ f(y). If (U, ≲) is also a partially ordered set, and both $f:S\to T$ and $g:T\to U$ are order-preserving, their composition $g\circ f:S\to U$ is order-preserving, too. A function $f:S\to T$ is called order-reflecting if for all $x,y\in S,$ f(x) ≼ f(y) implies $x\leq y.$ If f is both order-preserving and order-reflecting, then it is called an order-embedding of (S, ≤) into (T, ≼). 
In the latter case, f is necessarily injective, since $f(x)=f(y)$ implies $x\leq y{\text{ and }}y\leq x$ and in turn $x=y$ according to the antisymmetry of $\leq .$ If an order-embedding between two posets S and T exists, one says that S can be embedded into T. If an order-embedding $f:S\to T$ is bijective, it is called an order isomorphism, and the partial orders (S, ≤) and (T, ≼) are said to be isomorphic. Isomorphic orders have structurally similar Hasse diagrams (see Fig.7a). It can be shown that if order-preserving maps $f:S\to T$ and $g:T\to U$ exist such that $g\circ f$ and $f\circ g$ yield the identity function on S and T, respectively, then S and T are order-isomorphic.[15] For example, a mapping $f:\mathbb {N} \to \mathbb {P} (\mathbb {N} )$ from the set of natural numbers (ordered by divisibility) to the power set of natural numbers (ordered by set inclusion) can be defined by taking each number to the set of its prime divisors. It is order-preserving: if x divides y, then each prime divisor of x is also a prime divisor of y. However, it is neither injective (since it maps both 12 and 6 to $\{2,3\}$) nor order-reflecting (since 12 does not divide 6). Taking instead each number to the set of its prime power divisors defines a map $g:\mathbb {N} \to \mathbb {P} (\mathbb {N} )$ that is order-preserving, order-reflecting, and hence an order-embedding. It is not an order-isomorphism (since it, for instance, does not map any number to the set $\{4\}$), but it can be made one by restricting its codomain to $g(\mathbb {N} ).$ Fig.7b shows a subset of $\mathbb {N} $ and its isomorphic image under g. The construction of such an order-isomorphism into a power set can be generalized to a wide class of partial orders, called distributive lattices, see "Birkhoff's representation theorem". Number of partial orders Sequence A001035 in OEIS gives the number of partial orders on a set of n labeled elements.

Number of n-element binary relations of different types:
Elements | Any | Transitive | Reflexive | Symmetric | Preorder | Partial order | Total preorder | Total order | Equivalence relation
0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
1 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 1 | 1
2 | 16 | 13 | 4 | 8 | 4 | 3 | 3 | 2 | 2
3 | 512 | 171 | 64 | 64 | 29 | 19 | 13 | 6 | 5
4 | 65,536 | 3,994 | 4,096 | 1,024 | 355 | 219 | 75 | 24 | 15
n | $2^{n^{2}}$ | | $2^{n^{2}-n}$ | $2^{n(n+1)/2}$ | | | $\sum _{k=0}^{n}k!S(n,k)$ | $n!$ | $\sum _{k=0}^{n}S(n,k)$
OEIS | A002416 | A006905 | A053763 | A006125 | A000798 | A001035 | A000670 | A000142 | A000110

Note that S(n, k) refers to Stirling numbers of the second kind. The number of strict partial orders is the same as that of partial orders. If the count is made only up to isomorphism, the sequence 1, 1, 2, 5, 16, 63, 318, ... (sequence A000112 in the OEIS) is obtained. Linear extension A partial order $\leq ^{*}$ on a set $X$ is an extension of another partial order $\leq $ on $X$ provided that for all elements $x,y\in X,$ whenever $x\leq y,$ it is also the case that $x\leq ^{*}y.$ A linear extension is an extension that is also a linear (that is, total) order. As a classic example, the lexicographic order of totally ordered sets is a linear extension of their product order. Every partial order can be extended to a total order (order-extension principle).[16] In computer science, algorithms for finding linear extensions of partial orders (represented as the reachability orders of directed acyclic graphs) are called topological sorting. 
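As a small illustration of this correspondence between linear extensions and topological sorting, the following sketch uses graphlib from the Python standard library (available from Python 3.9); the chosen poset, divisibility on {1, ..., 6}, is an illustrative example:

```python
from graphlib import TopologicalSorter

# successors[x] = elements that x strictly precedes (x < y under divisibility)
successors = {x: [y for y in range(1, 7) if y != x and y % x == 0]
              for x in range(1, 7)}

# TopologicalSorter expects each node mapped to its predecessors, so invert.
predecessors = {y: [x for x in successors if y in successors[x]]
                for y in range(1, 7)}

order = list(TopologicalSorter(predecessors).static_order())
print(order)  # one linear extension, e.g. [1, 2, 3, 5, 4, 6]

# Check that the resulting total order really extends the partial order.
pos = {v: i for i, v in enumerate(order)}
assert all(pos[x] < pos[y] for x in successors for y in successors[x])
```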
In category theory Every poset (and every preordered set) may be considered as a category where, for objects $x$ and $y,$ there is at most one morphism from $x$ to $y.$ More explicitly, let hom(x, y) = {(x, y)} if x ≤ y (and otherwise the empty set) and $(y,z)\circ (x,y)=(x,z).$ Such categories are sometimes called posetal. Posets are equivalent to one another if and only if they are isomorphic. In a poset, the smallest element, if it exists, is an initial object, and the largest element, if it exists, is a terminal object. Also, every preordered set is equivalent to a poset. Finally, every subcategory of a poset is isomorphism-closed. In differential topology, homology theory (HT) is used for classifying smooth manifolds M up to equivalence, in terms of the geometrical shapes of M. Homology theory admits an axiomatic treatment, especially for singular homology, and its members are algebraic invariants under diffeomorphisms. Starting from the axiomatic treatment in the book of Eilenberg and Steenrod (see the references), G. Kalmbach showed that the set-theoretic topological concepts underlying the definition of homology can be extended to partially ordered sets P. In this setting, chains and filters in P (replacing the shapes of M) are the key ingredients for defining homology classifications, which are available for many applications of posets not related to set theory. Partial orders in topological spaces Main article: Partially ordered space If $P$ is a partially ordered set that has also been given the structure of a topological space, then it is customary to assume that $\{(a,b):a\leq b\}$ is a closed subset of the topological product space $P\times P.$ Under this assumption partial order relations are well behaved at limits in the sense that if $\lim _{i\to \infty }a_{i}=a,$ and $\lim _{i\to \infty }b_{i}=b,$ and for all $i,$ $a_{i}\leq b_{i},$ then $a\leq b.$[17] Intervals An interval in a poset P is a subset I of P with the property that, for any x and y in I and any z in P, if x ≤ z ≤ y, then z is also in I. (This definition generalizes the interval definition for real numbers.) For a ≤ b, the closed interval [a, b] is the set of elements x satisfying a ≤ x ≤ b (that is, a ≤ x and x ≤ b). It contains at least the elements a and b. Using the corresponding strict relation "<", the open interval (a, b) is the set of elements x satisfying a < x < b (i.e. a < x and x < b). An open interval may be empty even if a < b. For example, the open interval (0, 1) on the integers is empty, since there is no integer m such that 0 < m < 1. The half-open intervals [a, b) and (a, b] are defined similarly. Sometimes the definitions are extended to allow a > b, in which case the interval is empty. An interval I is bounded if there exist elements $a,b\in P$ such that I ⊆ [a, b]. Every interval that can be represented in interval notation is obviously bounded, but the converse is not true. For example, let P = (0, 1) ∪ (1, 2) ∪ (2, 3) as a subposet of the real numbers. The subset (1, 2) is a bounded interval, but it has no infimum or supremum in P, so it cannot be written in interval notation using elements of P. A poset is called locally finite if every bounded interval is finite. For example, the integers are locally finite under their natural ordering. The lexicographical order on the cartesian product $\mathbb {N} \times \mathbb {N} $ is not locally finite, since (1, 2) ≤ (1, 3) ≤ (1, 4) ≤ (1, 5) ≤ ... ≤ (2, 1). 
Using the interval notation, the property "a is covered by b" can be rephrased equivalently as $[a,b]=\{a,b\}.$ This concept of an interval in a partial order should not be confused with the particular class of partial orders known as the interval orders. See also • Antimatroid, a formalization of orderings on a set that allows more general families of orderings than posets • Causal set, a poset-based approach to quantum gravity • Comparability graph – Graph linking pairs of comparable elements in a partial order • Complete partial order • Directed set – Mathematical ordering with upper bounds • Graded poset – partially ordered set equipped with a rank function, sometimes called a ranked poset • Incidence algebra – associative algebra used in combinatorics • Lattice – Set whose pairs have minima and maxima • Locally finite poset • Möbius function on posets • Nested set collection • Order polytope • Ordered field – Algebraic object with an ordered structure • Ordered group – Group with a compatible partial order • Ordered vector space – Vector space with a partial order • Poset topology, a kind of topological space that can be defined from any poset • Scott continuity – continuity of a function between two partial orders • Semilattice – Partial order with joins • Semiorder – Numerical ordering with a margin of error • Szpilrajn extension theorem – every partial order is contained in some total order • Stochastic dominance – partial order between random variables • Strict weak ordering – strict partial order "<" in which the relation "neither a < b nor b < a" is transitive • Total order – Order whose elements are all comparable • Tree – Data structure of set inclusion • Zorn's lemma – Mathematical proposition equivalent to the axiom of choice Notes 1. A proof can be found here. 2. which always exists and is unique, since $P$ is assumed to be finite 3. See General relativity#Time travel. Citations 1. Wallis, W. D. (14 March 2013). A Beginner's Guide to Discrete Mathematics. Springer Science & Business Media. p. 100. ISBN 978-1-4757-3826-1. 2. Simovici, Dan A. & Djeraba, Chabane (2008). "Partially Ordered Sets". Mathematical Tools for Data Mining: Set Theory, Partial Orders, Combinatorics. Springer. ISBN 9781848002012. 3. Flaška, V.; Ježek, J.; Kepka, T.; Kortelainen, J. (2007). "Transitive Closures of Binary Relations I". Acta Universitatis Carolinae. Mathematica et Physica. Prague: School of Mathematics - Physics Charles University. 48 (1): 55–69. Lemma 1.1 (iv). This source refers to asymmetric relations as "strictly antisymmetric". 4. Davey & Priestley (2002), pp. 14–15. 5. Avigad, Jeremy; Lewis, Robert Y.; van Doorn, Floris (29 March 2021). "13.2. More on Orderings". Logic and Proof (Release 3.18.4 ed.). Retrieved 24 July 2021. So we can think of every partial order as really being a pair, consisting of a weak partial order and an associated strict one. 6. Rounds, William C. (7 March 2002). "Lectures slides" (PDF). 
EECS 203: DISCRETE MATHEMATICS. Retrieved 23 July 2021. 7. Kwong, Harris (25 April 2018). "7.4: Partial and Total Ordering". A Spiral Workbook for Discrete Mathematics. Retrieved 23 July 2021. 8. "Finite posets". Sage 9.2.beta2 Reference Manual: Combinatorics. Retrieved 5 January 2022. compare_elements(x, y): Compare x and y in the poset. If x<y, return -1. If x=y, return 0. If x>y, return 1. If x and y are not comparable, return None. 9. Chen, Peter; Ding, Guoli; Seiden, Steve. On Poset Merging (PDF) (Technical report). p. 2. Retrieved 5 January 2022. A comparison between two elements s, t in S returns one of three distinct values, namely s≤t, s>t or s|t. 10. Prevosto, Virgile; Jaume, Mathieu (11 September 2003). Making proofs in a hierarchy of mathematical structures. CALCULEMUS-2003 - 11th Symposium on the Integration of Symbolic Computation and Mechanized Reasoning. Roma, Italy: Aracne. pp. 89–100. 11. Merrifield, Richard E.; Simmons, Howard E. (1989). Topological Methods in Chemistry. New York: John Wiley & Sons. pp. 28. ISBN 0-471-83817-9. Retrieved 27 July 2012. A partially ordered set is conveniently represented by a Hasse diagram... 12. Neggers, J.; Kim, Hee Sik (1998), "4.2 Product Order and Lexicographic Order", Basic Posets, World Scientific, pp. 62–63, ISBN 9789810235895 13. Davey & Priestley (2002), pp. 17–18. 14. P. R. Halmos (1974). Naive Set Theory. Springer. p. 82. ISBN 978-1-4757-1645-0. 15. Davey & Priestley (2002), pp. 23–24. 16. Jech, Thomas (2008) [1973]. The Axiom of Choice. Dover Publications. ISBN 978-0-486-46624-8. 17. Ward, L. E. Jr (1954). "Partially Ordered Topological Spaces". Proceedings of the American Mathematical Society. 5 (1): 144–161. doi:10.1090/S0002-9939-1954-0063016-5. hdl:10338.dmlcz/101379. References • Davey, B. A.; Priestley, H. A. (2002). Introduction to Lattices and Order (2nd ed.). New York: Cambridge University Press. ISBN 978-0-521-78451-1. • Deshpande, Jayant V. (1968). "On Continuity of a Partial Order". Proceedings of the American Mathematical Society. 19 (2): 383–386. doi:10.1090/S0002-9939-1968-0236071-7. • Schmidt, Gunther (2010). Relational Mathematics. Encyclopedia of Mathematics and its Applications. Vol. 132. Cambridge University Press. ISBN 978-0-521-76268-7. • Bernd Schröder (11 May 2016). Ordered Sets: An Introduction with Connections from Combinatorics to Topology. Birkhäuser. ISBN 978-3-319-29788-0. • Stanley, Richard P. (1997). Enumerative Combinatorics 1. Cambridge Studies in Advanced Mathematics. Vol. 49. Cambridge University Press. ISBN 0-521-66351-2. • Eilenberg, S. (2016). Foundations of Algebraic Topology. Princeton University Press. • Kalmbach, G. (1976). "Extension of Homology Theory to Partially Ordered Sets". J. Reine Angew. Math. 280: 134–156. External links • OEIS sequence A001035 (Number of posets with n labeled elements) • OEIS sequence A000112 (Number of partially ordered sets ("posets") with n unlabeled elements.) 
Wikipedia
Homogeneous function
For homogeneous linear maps, see Graded vector space § Homomorphisms.
In mathematics, a homogeneous function is a function of several variables such that, if all its arguments are multiplied by a scalar, then its value is multiplied by some power of this scalar, called the degree of homogeneity, or simply the degree; that is, if k is an integer, a function f of n variables is homogeneous of degree k if $f(sx_{1},\ldots ,sx_{n})=s^{k}f(x_{1},\ldots ,x_{n})$ for every $x_{1},\ldots ,x_{n},$ and $s\neq 0.$ For example, a homogeneous polynomial of degree k defines a homogeneous function of degree k.
The above definition extends to functions whose domain and codomain are vector spaces over a field F: a function $f:V\to W$ between two F-vector spaces is homogeneous of degree $k$ if $f(s\mathbf {v} )=s^{k}f(\mathbf {v} )$ (1) for all nonzero $s\in F$ and $v\in V.$ This definition is often further generalized to functions whose domain is not V, but a cone in V, that is, a subset C of V such that $\mathbf {v} \in C$ implies $s\mathbf {v} \in C$ for every nonzero scalar s.
In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity, called positive homogeneity, is often considered, by requiring only that the above identities hold for $s>0,$ and allowing any real number k as a degree of homogeneity. Every homogeneous real function is positively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point. A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes.
Definitions
The concept of a homogeneous function was originally introduced for functions of several real variables. With the definition of vector spaces at the end of the 19th century, the concept was naturally extended to functions between vector spaces, since a tuple of variable values can be considered as a coordinate vector. It is this more general point of view that is described in this article.
There are two commonly used definitions. The general one works for vector spaces over arbitrary fields, and is restricted to degrees of homogeneity that are integers. The second one applies over the field of real numbers, or, more generally, over an ordered field. This definition restricts to positive values the scaling factor that occurs in the definition, and is therefore called positive homogeneity, the qualifier positive often being omitted when there is no risk of confusion. Positive homogeneity makes more functions qualify as homogeneous. For example, the absolute value and all norms are positively homogeneous functions that are not homogeneous. Restricting the scaling factor to positive real values also allows considering homogeneous functions whose degree of homogeneity is any real number.
General homogeneity
Let V and W be two vector spaces over a field F.
A linear cone in V is a subset C of V such that $sx\in C$ for all $x\in C$ and all nonzero $s\in F.$
A homogeneous function f from V to W is a partial function from V to W that has a linear cone C as its domain, and satisfies $f(sx)=s^{k}f(x)$ for some integer k, every $x\in C,$ and every nonzero $s\in F.$ The integer k is called the degree of homogeneity, or simply the degree of f.
A typical example of a homogeneous function of degree k is the function defined by a homogeneous polynomial of degree k. The rational function defined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; its cone of definition is the linear cone of the points where the value of the denominator is not zero.
Homogeneous functions play a fundamental role in projective geometry since any homogeneous function f from V to W induces a well-defined function between the projectivizations of V and W. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomials of the same degree) play an essential role in the Proj construction of projective schemes.
Positive homogeneity
When working over the real numbers, or more generally over an ordered field, it is often convenient to consider positive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzero s" replaced by "s > 0" in the definitions of a linear cone and a homogeneous function. This change allows considering (positively) homogeneous functions with any real number as their degree, since exponentiation with a positive real base is well defined.
Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of the absolute value function and norms, which are all positively homogeneous of degree 1. They are not homogeneous since $|-x|=|x|\neq -|x|$ if $x\neq 0.$ This remains true in the complex case, since the field of the complex numbers $\mathbb {C} $ and every complex vector space can be considered as real vector spaces.
Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the fundamental theorem on homogeneous functions.
Examples
Simple example
The function $f(x,y)=x^{2}+y^{2}$ is homogeneous of degree 2: $f(tx,ty)=(tx)^{2}+(ty)^{2}=t^{2}\left(x^{2}+y^{2}\right)=t^{2}f(x,y).$
Absolute value and norms
The absolute value of a real number is a positively homogeneous function of degree 1, which is not homogeneous, since $|sx|=s|x|$ if $s>0,$ and $|sx|=-s|x|$ if $s<0.$
The absolute value of a complex number is a positively homogeneous function of degree $1$ over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers). It is not homogeneous, over the real numbers as well as over the complex numbers.
More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As for the absolute value, if the norm or seminorm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers for applying the definition of a positively homogeneous function.
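The dichotomy just described is easy to check numerically. The following short Python sketch (an illustration added here, not part of the standard exposition) verifies that the Euclidean norm N on R² satisfies N(sx) = |s|N(x), which agrees with sN(x) only when s ≥ 0:

```python
import math

# Check: the Euclidean norm N on R^2 is positively homogeneous of degree 1
# (N(s x) = s N(x) for s > 0) but not homogeneous (it fails for s < 0).
def N(x, y):
    return math.hypot(x, y)

x, y = 3.0, -4.0              # N(x, y) = 5
for s in (2.0, 0.5, -2.0):
    print(s, N(s * x, s * y), s * N(x, y), abs(s) * N(x, y))
# For s = -2.0 the computed value is 10.0 while s * N(x, y) = -10.0,
# so N(s x) = |s| N(x) rather than s N(x).
```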
Linear functions
Any linear map $f:V\to W$ between vector spaces over a field F is homogeneous of degree 1, by the definition of linearity: $f(\alpha \mathbf {v} )=\alpha f(\mathbf {v} )$ for all $\alpha \in {F}$ and $v\in V.$
Similarly, any multilinear function $f:V_{1}\times V_{2}\times \cdots \times V_{n}\to W$ is homogeneous of degree $n,$ by the definition of multilinearity: $f\left(\alpha \mathbf {v} _{1},\ldots ,\alpha \mathbf {v} _{n}\right)=\alpha ^{n}f(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})$ for all $\alpha \in {F}$ and $v_{1}\in V_{1},v_{2}\in V_{2},\ldots ,v_{n}\in V_{n}.$
Homogeneous polynomials
Main article: Homogeneous polynomial
Monomials in $n$ variables define homogeneous functions $f:\mathbb {F} ^{n}\to \mathbb {F} .$ For example, $f(x,y,z)=x^{5}y^{2}z^{3}\,$ is homogeneous of degree 10 since $f(\alpha x,\alpha y,\alpha z)=(\alpha x)^{5}(\alpha y)^{2}(\alpha z)^{3}=\alpha ^{10}x^{5}y^{2}z^{3}=\alpha ^{10}f(x,y,z).\,$ The degree is the sum of the exponents on the variables; in this example, $10=5+2+3.$
A homogeneous polynomial is a polynomial made up of a sum of monomials of the same degree. For example, $x^{5}+2x^{3}y^{2}+9xy^{4}$ is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions.
Given a homogeneous polynomial of degree $k$ with real coefficients that takes only positive values, one gets a positively homogeneous function of degree $k/d$ by raising it to the power $1/d,$ for any real $d>0.$ So for example, the following function is positively homogeneous of degree 1 but not homogeneous: $\left(x^{2}+y^{2}+z^{2}\right)^{\frac {1}{2}}$ (here $k=2$ and $d=2$).
Min/max
For every set of weights $w_{1},\dots ,w_{n},$ the following functions are positively homogeneous of degree 1, but not homogeneous:
• $\min \left({\frac {x_{1}}{w_{1}}},\dots ,{\frac {x_{n}}{w_{n}}}\right)$ (Leontief utilities)
• $\max \left({\frac {x_{1}}{w_{1}}},\dots ,{\frac {x_{n}}{w_{n}}}\right)$
Rational functions
Rational functions formed as the ratio of two homogeneous polynomials are homogeneous functions in their domain, that is, off of the linear cone formed by the zeros of the denominator. Thus, if $f$ is homogeneous of degree $m$ and $g$ is homogeneous of degree $n,$ then $f/g$ is homogeneous of degree $m-n$ away from the zeros of $g.$
Non-examples
The homogeneous real functions of a single variable have the form $x\mapsto cx^{k}$ for some constant c. So, the affine function $x\mapsto x+5,$ the natural logarithm $x\mapsto \ln(x),$ and the exponential function $x\mapsto e^{x}$ are not homogeneous.
Euler's theorem
Roughly speaking, Euler's homogeneous function theorem asserts that the positively homogeneous functions of a given degree are exactly the solutions of a specific partial differential equation. More precisely:
Euler's homogeneous function theorem: If f is a (partial) function of n real variables that is positively homogeneous of degree k, and continuously differentiable in some open subset of $\mathbb {R} ^{n},$ then it satisfies in this open set the partial differential equation $k\,f(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}x_{i}{\frac {\partial f}{\partial x_{i}}}(x_{1},\ldots ,x_{n}).$
Conversely, every maximal continuously differentiable solution of this partial differential equation is a positively homogeneous function of degree k, defined on a positive cone (here, maximal means that the solution cannot be prolonged to a function with a larger domain).
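Before the proof, the identity can be sanity-checked numerically. The sketch below is an added illustration (the test function f(x, y) = x³ + 5xy², the point, and the step size h are arbitrary choices): it approximates the partial derivatives of a degree-3 homogeneous polynomial by central finite differences and compares the Euler sum with k·f.

```python
# Numerical check of Euler's identity  k*f(x) = sum_i x_i * df/dx_i(x)
# for f(x, y) = x**3 + 5*x*y**2, homogeneous of degree k = 3.
def f(x, y):
    return x**3 + 5 * x * y**2

def partial(g, x, y, i, h=1e-6):
    # central finite difference in coordinate i (0 for x, 1 for y)
    dx, dy = (h, 0.0) if i == 0 else (0.0, h)
    return (g(x + dx, y + dy) - g(x - dx, y - dy)) / (2 * h)

x, y, k = 1.3, -0.7, 3
euler_sum = x * partial(f, x, y, 0) + y * partial(f, x, y, 1)
print(euler_sum, k * f(x, y))   # the two values agree up to discretization error
```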
Proof
For simpler formulas, we set $\mathbf {x} =(x_{1},\ldots ,x_{n}).$ The first part follows by using the chain rule to differentiate both sides of the equation $f(s\mathbf {x} )=s^{k}f(\mathbf {x} )$ with respect to $s,$ and taking the limit of the result as s tends to 1.
The converse is proved by integrating a simple differential equation. Let $\mathbf {x} $ be in the interior of the domain of f. For s sufficiently close to 1, the function $ g(s)=f(s\mathbf {x} )$ is well defined. The partial differential equation implies that $sg'(s)=kf(s\mathbf {x} )=kg(s).$ The solutions of this linear differential equation have the form $g(s)=g(1)s^{k}.$ Therefore, $f(s\mathbf {x} )=g(s)=s^{k}g(1)=s^{k}f(\mathbf {x} ),$ if s is sufficiently close to 1. If this solution of the partial differential equation were not defined for all positive s, then the functional equation would allow the solution to be prolonged, and the partial differential equation implies that this prolongation is unique. So, the domain of a maximal solution of the partial differential equation is a linear cone, and the solution is positively homogeneous of degree k. $\square $
As a consequence, if $f:\mathbb {R} ^{n}\to \mathbb {R} $ is continuously differentiable and homogeneous of degree $k,$ its first-order partial derivatives $\partial f/\partial x_{i}$ are homogeneous of degree $k-1.$ This results from Euler's theorem by differentiating the partial differential equation with respect to one variable.
In the case of a function of a single real variable ($n=1$), the theorem implies that a continuously differentiable and positively homogeneous function of degree k has the form $f(x)=c_{+}x^{k}$ for $x>0$ and $f(x)=c_{-}x^{k}$ for $x<0.$ The constants $c_{+}$ and $c_{-}$ are not necessarily the same, as is the case for the absolute value.
Application to differential equations
Main article: Homogeneous differential equation
The substitution $v=y/x$ converts the ordinary differential equation $I(x,y){\frac {\mathrm {d} y}{\mathrm {d} x}}+J(x,y)=0,$ where $I$ and $J$ are homogeneous functions of the same degree, into the separable differential equation $x{\frac {\mathrm {d} v}{\mathrm {d} x}}=-{\frac {J(1,v)}{I(1,v)}}-v.$
Generalizations
Homogeneity under a monoid action
The definitions given above are all specialized cases of the following more general notion of homogeneity, in which $X$ can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of a monoid.
Let $M$ be a monoid with identity element $1\in M,$ let $X$ and $Y$ be sets, and suppose that on both $X$ and $Y$ there are defined monoid actions of $M.$ Let $k$ be a non-negative integer and let $f:X\to Y$ be a map. Then $f$ is said to be homogeneous of degree $k$ over $M$ if for every $x\in X$ and $m\in M,$ $f(mx)=m^{k}f(x).$ If in addition there is a function $M\to M,$ denoted by $m\mapsto |m|,$ called an absolute value, then $f$ is said to be absolutely homogeneous of degree $k$ over $M$ if for every $x\in X$ and $m\in M,$ $f(mx)=|m|^{k}f(x).$
A function is homogeneous over $M$ (resp. absolutely homogeneous over $M$) if it is homogeneous of degree $1$ over $M$ (resp. absolutely homogeneous of degree $1$ over $M$).
More generally, it is possible for the symbols $m^{k}$ to be defined for $m\in M$ with $k$ being something other than an integer (for example, if $M$ is the real numbers and $k$ is a non-zero real number then $m^{k}$ is defined even though $k$ is not an integer).
If this is the case then $f$ will be called homogeneous of degree $k$ over $M$ if the same equality holds: $f(mx)=m^{k}f(x)\quad {\text{ for every }}x\in X{\text{ and }}m\in M.$ The notion of being absolutely homogeneous of degree $k$ over $M$ is generalized similarly.
Distributions (generalized functions)
A continuous function $f$ on $\mathbb {R} ^{n}$ is homogeneous of degree $k$ if and only if $\int _{\mathbb {R} ^{n}}f(tx)\varphi (x)\,dx=t^{k}\int _{\mathbb {R} ^{n}}f(x)\varphi (x)\,dx$ for all compactly supported test functions $\varphi $ and all nonzero real $t.$ Equivalently, making a change of variable $y=tx,$ $f$ is homogeneous of degree $k$ if and only if $t^{-n}\int _{\mathbb {R} ^{n}}f(y)\varphi \left({\frac {y}{t}}\right)\,dy=t^{k}\int _{\mathbb {R} ^{n}}f(y)\varphi (y)\,dy$ for all nonzero real $t$ and all test functions $\varphi .$ The last display makes it possible to define homogeneity of distributions. A distribution $S$ is homogeneous of degree $k$ if $t^{-n}\langle S,\varphi \circ \mu _{t}\rangle =t^{k}\langle S,\varphi \rangle $ for all nonzero real $t$ and all test functions $\varphi .$ Here the angle brackets denote the pairing between distributions and test functions, and $\mu _{t}:\mathbb {R} ^{n}\to \mathbb {R} ^{n}$ is the mapping of scalar division by the real number $t.$
Glossary of name variants
Let $f:X\to Y$ be a map between two vector spaces over a field $\mathbb {F} $ (usually the real numbers $\mathbb {R} $ or complex numbers $\mathbb {C} $). If $S$ is a set of scalars, such as $\mathbb {Z} ,$ $[0,\infty ),$ or $\mathbb {R} $ for example, then $f$ is said to be homogeneous over $S$ if $ f(sx)=sf(x)$ for every $x\in X$ and scalar $s\in S.$ For instance, every additive map between vector spaces is homogeneous over the rational numbers $S:=\mathbb {Q} $ although it might not be homogeneous over the real numbers $S:=\mathbb {R} .$
The following commonly encountered special cases and variations of this definition have their own terminology:
1. (Strict) Positive homogeneity:[1] $f(rx)=rf(x)$ for all $x\in X$ and all positive real $r>0.$
• When the function $f$ is valued in a vector space or field, then this property is logically equivalent[proof 1] to nonnegative homogeneity, which by definition means:[2] $f(rx)=rf(x)$ for all $x\in X$ and all non-negative real $r\geq 0.$ It is for this reason that positive homogeneity is often also called nonnegative homogeneity. However, for functions valued in the extended real numbers $[-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \},$ which appear in fields like convex analysis, the multiplication $0\cdot f(x)$ will be undefined whenever $f(x)=\pm \infty $ and so these statements are not necessarily always interchangeable.[note 1]
• This property is used in the definition of a sublinear function.[1][2]
• Minkowski functionals are exactly those non-negative extended real-valued functions with this property.
2. Real homogeneity: $f(rx)=rf(x)$ for all $x\in X$ and all real $r.$
• This property is used in the definition of a real linear functional.
3. Homogeneity:[3] $f(sx)=sf(x)$ for all $x\in X$ and all scalars $s\in \mathbb {F} .$
• It is emphasized that this definition depends on the scalar field $\mathbb {F} $ underlying the domain $X.$
• This property is used in the definition of linear functionals and linear maps.[2]
4. Conjugate homogeneity:[4] $f(sx)={\overline {s}}f(x)$ for all $x\in X$ and all scalars $s\in \mathbb {F} .$
• If $\mathbb {F} =\mathbb {C} $ then ${\overline {s}}$ typically denotes the complex conjugate of $s$.
But more generally, as with semilinear maps for example, ${\overline {s}}$ could be the image of $s$ under some distinguished automorphism of $\mathbb {F} .$
• Along with additivity, this property is assumed in the definition of an antilinear map. It is also assumed that one of the two coordinates of a sesquilinear form has this property (such as the inner product of a Hilbert space).
All of the above definitions can be generalized by replacing the condition $f(rx)=rf(x)$ with $f(rx)=|r|f(x),$ in which case that definition is prefixed with the word "absolute" or "absolutely." For example,
1. Absolute homogeneity:[2] $f(sx)=|s|f(x)$ for all $x\in X$ and all scalars $s\in \mathbb {F} .$
• This property is used in the definition of a seminorm and a norm.
If $k$ is a fixed real number then the above definitions can be further generalized by replacing the condition $f(rx)=rf(x)$ with $f(rx)=r^{k}f(x)$ (and similarly, by replacing $f(rx)=|r|f(x)$ with $f(rx)=|r|^{k}f(x)$ for conditions using the absolute value, etc.), in which case the homogeneity is said to be "of degree $k$" (where in particular, all of the above definitions are "of degree $1$"). For instance,
1. Real homogeneity of degree $k$: $f(rx)=r^{k}f(x)$ for all $x\in X$ and all real $r.$
2. Homogeneity of degree $k$: $f(sx)=s^{k}f(x)$ for all $x\in X$ and all scalars $s\in \mathbb {F} .$
3. Absolute real homogeneity of degree $k$: $f(rx)=|r|^{k}f(x)$ for all $x\in X$ and all real $r.$
4. Absolute homogeneity of degree $k$: $f(sx)=|s|^{k}f(x)$ for all $x\in X$ and all scalars $s\in \mathbb {F} .$
A nonzero continuous function that is homogeneous of degree $k$ on $\mathbb {R} ^{n}\backslash \lbrace 0\rbrace $ extends continuously to $\mathbb {R} ^{n}$ if and only if $k>0.$
See also
• Homogeneous space
• Triangle center function – Point in a triangle that can be seen as its middle under some criteria
Notes
1. However, if such an $f$ satisfies $f(rx)=rf(x)$ for all $r>0$ and $x\in X,$ then necessarily $f(0)\in \{\pm \infty ,0\}$ and whenever $f(0),f(x)\in \mathbb {R} $ are both real then $f(rx)=rf(x)$ will hold for all $r\geq 0.$
Proofs
1. Assume that $f$ is strictly positively homogeneous and valued in a vector space or a field. Then $f(0)=f(2\cdot 0)=2f(0)$ so subtracting $f(0)$ from both sides shows that $f(0)=0.$ Writing $r:=0,$ then for any $x\in X,$ $f(rx)=f(0)=0=0f(x)=rf(x),$ which shows that $f$ is nonnegative homogeneous.
References
1. Schechter 1996, pp. 313–314.
2. Kubrusly 2011, p. 200.
3. Kubrusly 2011, p. 55.
4. Kubrusly 2011, p. 310.
Sources
• Blatter, Christian (1979). "20. Mehrdimensionale Differentialrechnung, Aufgaben, 1.". Analysis II (2nd ed.) (in German). Springer Verlag. p. 188. ISBN 3-540-09484-9.
• Kubrusly, Carlos S. (2011). The Elements of Operator Theory (Second ed.). Boston: Birkhäuser Basel. ISBN 978-0-8176-4998-2. OCLC 710154895.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
External links
• "Homogeneous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Eric Weisstein. "Euler's Homogeneous Function Theorem". MathWorld.
Wikipedia
Subset
In mathematics, set A is a subset of a set B if all elements of A are also elements of B; B is then a superset of A. It is possible for A and B to be equal; if they are unequal, then A is a proper subset of B. The relationship of one set being a subset of another is called inclusion (or sometimes containment). A is a subset of B may also be expressed as B includes (or contains) A or A is included (or contained) in B. A k-subset is a subset with k elements.
The subset relation defines a partial order on sets. In fact, the subsets of a given set form a Boolean algebra under the subset relation, in which the join and meet are given by union and intersection, and the subset relation itself is the Boolean inclusion relation.
Definition
If A and B are sets and every element of A is also an element of B, then:
• A is a subset of B, denoted by $A\subseteq B$, or equivalently,
• B is a superset of A, denoted by $B\supseteq A.$
If A is a subset of B, but A is not equal to B (i.e. there exists at least one element of B which is not an element of A), then:
• A is a proper (or strict) subset of B, denoted by $A\subsetneq B$, or equivalently,
• B is a proper (or strict) superset of A, denoted by $B\supsetneq A$.
The empty set, written $\{\}$ or $\varnothing ,$ is a subset of any set X and a proper subset of any set except itself. The inclusion relation $\subseteq $ is a partial order on the set ${\mathcal {P}}(S)$ (the power set of S, that is, the set of all subsets of S[1]) defined by $A\leq B\iff A\subseteq B$. We may also partially order ${\mathcal {P}}(S)$ by reverse set inclusion by defining $A\leq B{\text{ if and only if }}B\subseteq A.$
When quantified, $A\subseteq B$ is represented as $\forall x\left(x\in A\implies x\in B\right).$[2]
We can prove the statement $A\subseteq B$ by applying a proof technique known as the element argument[3]: Let sets A and B be given. To prove that $A\subseteq B,$
1. suppose that a is a particular but arbitrarily chosen element of A,
2. show that a is an element of B.
The validity of this technique can be seen as a consequence of universal generalization: the technique shows $c\in A\implies c\in B$ for an arbitrarily chosen element c. Universal generalization then implies $\forall x\left(x\in A\implies x\in B\right),$ which is equivalent to $A\subseteq B,$ as stated above.
The set of all subsets of $A$ is called its power set, and is denoted by ${\mathcal {P}}(A)$. The set of all $k$-subsets of $A$ is denoted by ${\tbinom {A}{k}}$, in analogy with the notation for binomial coefficients, which count the number of $k$-subsets of an $n$-element set. In set theory, the notation $[A]^{k}$ is also common, especially when $k$ is a transfinite cardinal number.
Properties
• A set A is a subset of B if and only if their intersection is equal to A. Formally: $A\subseteq B{\text{ if and only if }}A\cap B=A.$
• A set A is a subset of B if and only if their union is equal to B. Formally: $A\subseteq B{\text{ if and only if }}A\cup B=B.$
• A finite set A is a subset of B if and only if the cardinality of their intersection is equal to the cardinality of A.
Formally: $A\subseteq B{\text{ if and only if }}|A\cap B|=|A|.$
⊂ and ⊃ symbols
Some authors use the symbols $\subset $ and $\supset $ to indicate subset and superset respectively; that is, with the same meaning as, and instead of, the symbols $\subseteq $ and $\supseteq .$[4] For example, for these authors, it is true of every set A that $A\subset A.$
Other authors prefer to use the symbols $\subset $ and $\supset $ to indicate proper (also called strict) subset and proper superset respectively; that is, with the same meaning as, and instead of, the symbols $\subsetneq $ and $\supsetneq .$[5] This usage makes $\subseteq $ and $\subset $ analogous to the inequality symbols $\leq $ and $<.$ For example, if $x\leq y,$ then x may or may not equal y, but if $x<y,$ then x definitely does not equal y, and is less than y. Similarly, using the convention that $\subset $ is proper subset, if $A\subseteq B,$ then A may or may not equal B, but if $A\subset B,$ then A definitely does not equal B.
Examples of subsets
• The set A = {1, 2} is a proper subset of B = {1, 2, 3}, thus both expressions $A\subseteq B$ and $A\subsetneq B$ are true.
• The set D = {1, 2, 3} is a subset (but not a proper subset) of E = {1, 2, 3}, thus $D\subseteq E$ is true, and $D\subsetneq E$ is not true (false).
• Any set is a subset of itself, but not a proper subset. ($X\subseteq X$ is true, and $X\subsetneq X$ is false for any set X.)
• The set {x: x is a prime number greater than 10} is a proper subset of {x: x is an odd number greater than 10}.
• The set of natural numbers is a proper subset of the set of rational numbers; likewise, the set of points in a line segment is a proper subset of the set of points in a line. These are two examples in which both the subset and the whole set are infinite, and the subset has the same cardinality (the concept that corresponds to size, that is, the number of elements, of a finite set) as the whole; such cases can run counter to one's initial intuition.
• The set of rational numbers is a proper subset of the set of real numbers. In this example, both sets are infinite, but the latter set has a larger cardinality (or power) than the former set.
Another example in an Euler diagram:
• A is a proper subset of B.
• C is a subset but not a proper subset of B.
Other properties of inclusion
Inclusion is the canonical partial order, in the sense that every partially ordered set $(X,\preceq )$ is isomorphic to some collection of sets ordered by inclusion. The ordinal numbers are a simple example: if each ordinal n is identified with the set $[n]$ of all ordinals less than or equal to n, then $a\leq b$ if and only if $[a]\subseteq [b].$
For the power set $\operatorname {\mathcal {P}} (S)$ of a set S, the inclusion partial order is, up to an order isomorphism, the Cartesian product of $k=|S|$ (the cardinality of S) copies of the partial order on $\{0,1\}$ for which $0<1.$ This can be illustrated by enumerating $S=\left\{s_{1},s_{2},\ldots ,s_{k}\right\}$ and associating with each subset $T\subseteq S$ (i.e., each element of $2^{S}$) the k-tuple from $\{0,1\}^{k},$ of which the ith coordinate is 1 if and only if $s_{i}$ is a member of T.
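This correspondence can be made concrete in code. The following Python sketch (an added illustration) encodes each subset of a three-element set as a 0/1 tuple and checks that subset inclusion agrees with the coordinatewise order:

```python
from itertools import product

# Encode each subset T of S = {s1, ..., sk} as the 0/1 tuple whose i-th
# coordinate is 1 exactly when s_i is in T; inclusion of subsets then
# coincides with the coordinatewise order on {0,1}**k.
S = ['a', 'b', 'c']

def decode(bits):
    return frozenset(s for s, b in zip(S, bits) if b)

tuples = list(product((0, 1), repeat=len(S)))   # all 2**3 = 8 encodings
for u in tuples:
    for v in tuples:
        subset_rel = decode(u) <= decode(v)                  # A ⊆ B on sets
        coordinatewise = all(a <= b for a, b in zip(u, v))   # product order
        assert subset_rel == coordinatewise
print(len(tuples))  # 8
```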
See also
• Convex subset – In geometry, set whose intersection with every line is a single line segment
• Inclusion order – Partial order that arises as the subset-inclusion relation on some collection of objects
• Region – Connected open subset of a topological space
• Subset sum problem – Decision problem in computer science
• Subsumptive containment – System of elements that are subordinated to each other
• Total subset – Subset T of a topological vector space X where the linear span of T is a dense subset of X
• Mereology – Study of parts and the wholes they form
References
1. Weisstein, Eric W. "Subset". mathworld.wolfram.com. Retrieved 2020-08-23.
2. Rosen, Kenneth H. (2012). Discrete Mathematics and Its Applications (7th ed.). New York: McGraw-Hill. p. 119. ISBN 978-0-07-338309-5.
3. Epp, Susanna S. (2011). Discrete Mathematics with Applications (Fourth ed.). p. 337. ISBN 978-0-495-39132-6.
4. Rudin, Walter (1987), Real and complex analysis (3rd ed.), New York: McGraw-Hill, p. 6, ISBN 978-0-07-054234-1, MR 0924157
5. Subsets and Proper Subsets (PDF), archived from the original (PDF) on 2013-01-23, retrieved 2012-09-07
Bibliography
• Jech, Thomas (2002). Set Theory. Springer-Verlag. ISBN 3-540-44085-2.
External links
• Media related to Subsets at Wikimedia Commons
• Weisstein, Eric W. "Subset". MathWorld.
Wikipedia
Strict
In mathematical writing, the term strict refers to the property of excluding equality and equivalence[1] and often occurs in the context of inequality and monotonic functions.[2] It is often attached to a technical term to indicate that the exclusive meaning of the term is to be understood. The opposite is non-strict, which is often understood to be the case but can be put explicitly for clarity. In some contexts, the word "proper" can also be used as a mathematical synonym for "strict".
Use
This term is commonly used in the context of inequalities: the phrase "strictly less than" means "less than and not equal to" (likewise, "strictly greater than" means "greater than and not equal to"). More generally, a strict partial order, strict total order, and strict weak order exclude equality and equivalence.
When comparing numbers to zero, the phrases "strictly positive" and "strictly negative" mean "positive and not equal to zero" and "negative and not equal to zero", respectively. In the context of functions, the adverb "strictly" is used to modify the terms "monotonic", "increasing", and "decreasing".
On the other hand, sometimes one wants to specify the inclusive meanings of terms. In the context of comparisons, one can use the phrases "non-negative", "non-positive", "non-increasing", and "non-decreasing" to make it clear that the inclusive sense of the terms is being used.
The use of such terms and phrases helps avoid possible ambiguity and confusion. For instance, when reading the phrase "x is positive", it is not immediately clear whether x = 0 is possible, since some authors might use the term positive loosely to mean that x is not less than zero. Such an ambiguity can be mitigated by writing "x is strictly positive" for x > 0, and "x is non-negative" for x ≥ 0. (A precise term like non-negative is never used with the word negative in the wider sense that includes zero.)
The word "proper" is often used in the same way as "strict". For example, a "proper subset" of a set S is a subset that is not equal to S itself, and a "proper class" is a class which is not also a set.
See also
• Strictly positive measure
• Monotonic function
References
1. "Strict inequality". artofproblemsolving.com. Retrieved 2019-12-13.
2. "Inequality Definition (Illustrated Mathematics Dictionary)". www.mathsisfun.com. Retrieved 2019-12-13.
This article incorporates material from strict on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia
Henselian ring
In mathematics, a Henselian ring (or Hensel ring) is a local ring in which Hensel's lemma holds. They were introduced by Azumaya (1951), who named them after Kurt Hensel. Azumaya originally allowed Henselian rings to be non-commutative, but most authors now restrict them to be commutative.
Some standard references for Hensel rings are (Nagata 1975, Chapter VII), (Raynaud 1970), and (Grothendieck 1967, Chapter 18).
Definitions
In this article rings will be assumed to be commutative, though there is also a theory of non-commutative Henselian rings.
• A local ring R with maximal ideal m is called Henselian if Hensel's lemma holds. This means that if P is a monic polynomial in R[x], then any factorization of its image P in (R/m)[x] into a product of coprime monic polynomials can be lifted to a factorization in R[x].
• A local ring is Henselian if and only if every finite ring extension is a product of local rings.
• A Henselian local ring is called strictly Henselian if its residue field is separably closed.
• By abuse of terminology, a field $K$ with valuation $v$ is said to be Henselian if its valuation ring is Henselian. That is the case if and only if $v$ extends uniquely to every finite extension of $K$ (resp. to every finite separable extension of $K$, resp. to $K^{alg}$, resp. to $K^{sep}$).
• A ring is called Henselian if it is a direct product of a finite number of Henselian local rings.
Properties
• Assume that $(K,v)$ is a Henselian field. Then every algebraic extension of $K$ is Henselian (by the fourth definition above).
• If $(K,v)$ is a Henselian field and $\alpha $ is algebraic over $K$, then for every conjugate $\alpha '$ of $\alpha $ over $K$, $v(\alpha ')=v(\alpha )$. This follows from the fourth definition, and from the fact that for every K-automorphism $\sigma $ of $K^{alg}$, $v\circ \sigma $ is an extension of $v|_{K}$. The converse of this assertion also holds, because for a normal field extension $L/K$, the extensions of $v$ to $L$ are known to be conjugate.[1]
Henselian rings in algebraic geometry
Henselian rings are the local rings with respect to the Nisnevich topology in the sense that if $R$ is a Henselian local ring, and $\{U_{i}\to X\}$ is a Nisnevich covering of $X=Spec(R)$, then one of the $U_{i}\to X$ is an isomorphism. This should be compared to the fact that for any Zariski open covering $\{U_{i}\to X\}$ of the spectrum $X=Spec(R)$ of a local ring $R$, one of the $U_{i}\to X$ is an isomorphism. In fact, this property characterises Henselian rings, resp. local rings. Likewise, strictly Henselian rings are the local rings of geometric points in the étale topology.
Henselization
For any local ring A there is a universal Henselian ring B generated by A, called the Henselization of A, introduced by Nagata (1953), such that any local homomorphism from A to a Henselian ring can be extended uniquely to B. The Henselization of A is unique up to unique isomorphism. The Henselization of A is an algebraic substitute for the completion of A. The Henselization of A has the same completion and residue field as A and is a flat module over A. If A is Noetherian, reduced, normal, regular, or excellent then so is its Henselization.
For example, the Henselization of the ring of polynomials k[x,y,...] localized at the point (0,0,...) is the ring of algebraic formal power series (the formal power series satisfying an algebraic equation). This can be thought of as the "algebraic" part of the completion.
Similarly there is a strictly Henselian ring generated by A, called the strict Henselization of A. The strict Henselization is not quite universal: it is unique, but only up to non-unique isomorphism. More precisely it depends on the choice of a separable algebraic closure of the residue field of A, and automorphisms of this separable algebraic closure correspond to automorphisms of the corresponding strict Henselization. For example, a strict Henselization of the field of p-adic numbers is given by the maximal unramified extension, generated by all roots of unity of order prime to p. It is not "universal" as it has non-trivial automorphisms.
Examples
• Every field is a Henselian local ring. (But not every field with valuation is "Henselian" in the sense of the fourth definition above.)
• Complete Hausdorff local rings, such as the ring of p-adic integers and rings of formal power series over a field, are Henselian.
• The rings of convergent power series over the real or complex numbers are Henselian.
• Rings of algebraic power series over a field are Henselian.
• A local ring that is integral over a Henselian ring is Henselian.
• The Henselization of a local ring is a Henselian local ring.
• Every quotient of a Henselian ring is Henselian.
• A ring A is Henselian if and only if the associated reduced ring Ared is Henselian (this is the quotient of A by the ideal of nilpotent elements).
• If A has only one prime ideal then it is Henselian since Ared is a field.
References
1. A. J. Engler, A. Prestel, Valued fields, Springer monographs of mathematics, 2005, thm. 3.2.15, p. 69.
• Azumaya, Gorô (1951), "On maximally central algebras.", Nagoya Mathematical Journal, 2: 119–150, doi:10.1017/s0027763000010114, ISSN 0027-7630, MR 0040287
• Danilov, V. I. (2001) [1994], "Hensel ring", Encyclopedia of Mathematics, EMS Press
• Grothendieck, Alexandre (1967), "Éléments de géométrie algébrique (rédigés avec la collaboration de Jean Dieudonné) : IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie", Publications Mathématiques de l'IHÉS, 32: 5–361, doi:10.1007/BF02732123
• Kurke, H.; Pfister, G.; Roczen, M. (1975), Henselsche Ringe und algebraische Geometrie, Mathematische Monographien, vol. II, Berlin: VEB Deutscher Verlag der Wissenschaften, MR 0491694
• Nagata, Masayoshi (1953), "On the theory of Henselian rings", Nagoya Mathematical Journal, 5: 45–57, doi:10.1017/s0027763000015439, ISSN 0027-7630, MR 0051821
• Nagata, Masayoshi (1954), "On the theory of Henselian rings. II", Nagoya Mathematical Journal, 7: 1–19, doi:10.1017/s002776300001802x, ISSN 0027-7630, MR 0067865
• Nagata, Masayoshi (1959), "On the theory of Henselian rings. III", Memoirs of the College of Science, University of Kyoto. Series A: Mathematics, 32: 93–101, doi:10.1215/kjm/1250776700, MR 0109835
• Nagata, Masayoshi (1975) [1962], Local rings, Interscience Tracts in Pure and Applied Mathematics, vol. 13 (reprint ed.), New York-London: Interscience Publishers a division of John Wiley & Sons, pp. xiii+234, ISBN 978-0-88275-228-0, MR 0155856
• Raynaud, Michel (1970), Anneaux locaux henséliens, Lecture Notes in Mathematics, vol. 169, Berlin-New York: Springer-Verlag, pp. v+129, doi:10.1007/BFb0069571, ISBN 978-3-540-05283-8, MR 0277519
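To make the lifting in Hensel's lemma concrete in the simplest Henselian example above, the ring of p-adic integers: a simple root of a polynomial modulo p refines, by Newton steps, to a root modulo any power of p. The following Python sketch is an added illustration (the helper name hensel_lift is ad hoc, and pow(a, -1, m) for modular inverses requires Python 3.8+); it lifts a square root of 2 from modulo 7 to modulo 7⁶:

```python
def hensel_lift(f, df, r, p, k):
    # r is a simple root of f modulo p (so df(r) is a unit mod p);
    # refine it step by step to a root modulo p**k via Newton's method.
    pk = p
    for _ in range(k - 1):
        pk *= p
        r = (r - f(r) * pow(df(r), -1, pk)) % pk   # Newton step mod p**(i+1)
    return r

f  = lambda x: x * x - 2
df = lambda x: 2 * x
root = hensel_lift(f, df, 3, 7, 6)   # 3**2 = 9 ≡ 2 (mod 7), so 3 is a root mod 7
assert f(root) % 7**6 == 0
print(root)  # a square root of 2 in Z_7, correct modulo 7**6
```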
Wikipedia
Strictly convex space
In mathematics, a strictly convex space is a normed vector space (X, || ||) for which the closed unit ball is a strictly convex set. Put another way, a strictly convex space is one for which, given any two distinct points x and y on the unit sphere ∂B (i.e. the boundary of the unit ball B of X), the segment joining x and y meets ∂B only at x and y. Strict convexity is somewhere between an inner product space (all inner product spaces being strictly convex) and a general normed space in terms of structure. It also guarantees the uniqueness of a best approximation to an element in X out of a convex subspace Y, provided that such an approximation exists.
If the normed space X is complete and satisfies the slightly stronger property of being uniformly convex (which implies strict convexity), then it is also reflexive by the Milman–Pettis theorem.
Properties
The following properties are equivalent to strict convexity.
• A normed vector space (X, || ||) is strictly convex if and only if x ≠ y and || x || = || y || = 1 together imply that || x + y || < 2.
• A normed vector space (X, || ||) is strictly convex if and only if x ≠ y and || x || = || y || = 1 together imply that || αx + (1 − α)y || < 1 for all 0 < α < 1.
• A normed vector space (X, || ||) is strictly convex if and only if x ≠ 0 and y ≠ 0 and || x + y || = || x || + || y || together imply that x = cy for some constant c > 0.
• A normed vector space (X, || ||) is strictly convex if and only if the modulus of convexity δ for (X, || ||) satisfies δ(2) = 1.
See also
• Uniformly convex space
• Modulus and characteristic of convexity
References
• Goebel, Kazimierz (1970). "Convexity of balls and fixed-point theorems for mappings with nonexpansive square". Compositio Mathematica. 22 (3): 269–274.
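As a concrete illustration of the first property listed above: in the Euclidean plane, distinct unit vectors x ≠ y always give ||x + y|| < 2, whereas the ℓ¹ norm can attain equality, so (R², ||·||₁) is not strictly convex. A small numpy sketch (an added illustration, not part of the article):

```python
import numpy as np

# Property check: in a strictly convex space, distinct unit vectors x != y
# force ||x + y|| < 2.  The Euclidean norm passes (inner product norms are
# strictly convex), while the l1 norm attains ||x + y|| = 2.
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

for name, norm in [("l2", lambda v: np.linalg.norm(v, 2)),
                   ("l1", lambda v: np.linalg.norm(v, 1))]:
    assert abs(norm(x) - 1) < 1e-12 and abs(norm(y) - 1) < 1e-12
    print(name, norm(x + y))   # l2: sqrt(2) < 2;  l1: exactly 2
```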
Wikipedia
Strictly positive measure
In mathematics, strict positivity is a concept in measure theory. Intuitively, a strictly positive measure is one that is "nowhere zero", or that is zero "only on points".
Definition
Let $(X,T)$ be a Hausdorff topological space and let $\Sigma $ be a $\sigma $-algebra on $X$ that contains the topology $T$ (so that every open set is a measurable set, and $\Sigma $ is at least as fine as the Borel $\sigma $-algebra on $X$). Then a measure $\mu $ on $(X,\Sigma )$ is called strictly positive if every non-empty open subset of $X$ has strictly positive measure.
More concisely, $\mu $ is strictly positive if and only if for all $U\in T$ such that $U\neq \varnothing ,\mu (U)>0.$
Examples
• Counting measure on any set $X$ (with any topology) is strictly positive.
• Dirac measure is usually not strictly positive unless the topology $T$ is particularly "coarse" (contains "few" sets). For example, $\delta _{0}$ on the real line $\mathbb {R} $ with its usual Borel topology and $\sigma $-algebra is not strictly positive; however, if $\mathbb {R} $ is equipped with the trivial topology $T=\{\varnothing ,\mathbb {R} \},$ then $\delta _{0}$ is strictly positive. This example illustrates the importance of the topology in determining strict positivity; a small computational sketch of this phenomenon appears below.
• Gaussian measure on Euclidean space $\mathbb {R} ^{n}$ (with its Borel topology and $\sigma $-algebra) is strictly positive.
• Wiener measure on the space of continuous paths in $\mathbb {R} ^{n}$ is a strictly positive measure; Wiener measure is an example of a Gaussian measure on an infinite-dimensional space.
• Lebesgue measure on $\mathbb {R} ^{n}$ (with its Borel topology and $\sigma $-algebra) is strictly positive.
• The trivial measure is never strictly positive, regardless of the space $X$ or the topology used, except when $X$ is empty.
Properties
• If $\mu $ and $\nu $ are two measures on a measurable topological space $(X,\Sigma ),$ with $\mu $ strictly positive and also absolutely continuous with respect to $\nu ,$ then $\nu $ is strictly positive as well. The proof is simple: let $U\subseteq X$ be an arbitrary non-empty open set; since $\mu $ is strictly positive, $\mu (U)>0;$ by absolute continuity, $\nu (U)>0$ as well.
• Hence, strict positivity is an invariant with respect to equivalence of measures.
See also
• Support (measure theory) – given a Borel measure, the set of those points whose neighbourhoods always have positive measure. A measure is strictly positive if and only if its support is the whole space.
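The sketch promised in the Dirac example above (an added illustration, not part of the article): on a two-point space, whether a Dirac measure is strictly positive depends only on which open sets the topology provides.

```python
# Finite illustration: X = {0, 1} with the topology T = {∅, {0}, X}.
# Every non-empty open set contains 0, so the Dirac measure at 0 is
# strictly positive, while the Dirac measure at 1 vanishes on {0}.
X = frozenset({0, 1})
T = [frozenset(), frozenset({0}), X]          # a topology on X

def dirac(point):
    return lambda A: 1 if point in A else 0   # delta_point as a set function

def strictly_positive(mu, topology):
    return all(mu(U) > 0 for U in topology if U)  # check non-empty opens only

print(strictly_positive(dirac(0), T))  # True
print(strictly_positive(dirac(1), T))  # False: the open set {0} has measure 0
```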
Wikipedia
Strictly simple group
In mathematics, in the field of group theory, a group is said to be strictly simple if it has no proper nontrivial ascendant subgroups. That is, $G$ is a strictly simple group if the only ascendant subgroups of $G$ are $\{e\}$ (the trivial subgroup), and $G$ itself (the whole group).
In the finite case, a group is strictly simple if and only if it is simple. However, in the infinite case, strictly simple is a stronger property than simple.
See also
• Serial subgroup
• Absolutely simple group
References
• "Simple group", Encyclopedia of Mathematics, EMS Press, retrieved 1 January 2012
Wikipedia
Strictly singular operator In functional analysis, a branch of mathematics, a strictly singular operator is a bounded linear operator between normed spaces which is not bounded below on any infinite-dimensional subspace. Definitions. Let X and Y be normed linear spaces, and denote by B(X,Y) the space of bounded operators of the form $T:X\to Y$. Let $A\subseteq X$ be any subset. We say that T is bounded below on $A$ whenever there is a constant $c\in (0,\infty )$ such that for all $x\in A$, the inequality $\|Tx\|\geq c\|x\|$ holds. If A=X, we say simply that T is bounded below. Now suppose X and Y are Banach spaces, and let $Id_{X}\in B(X)$ and $Id_{Y}\in B(Y)$ denote the respective identity operators. An operator $T\in B(X,Y)$ is called inessential whenever $Id_{X}-ST$ is a Fredholm operator for every $S\in B(Y,X)$. Equivalently, T is inessential if and only if $Id_{Y}-TS$ is Fredholm for every $S\in B(Y,X)$. Denote by ${\mathcal {E}}(X,Y)$ the set of all inessential operators in $B(X,Y)$. An operator $T\in B(X,Y)$ is called strictly singular whenever it fails to be bounded below on any infinite-dimensional subspace of X. Denote by ${\mathcal {SS}}(X,Y)$ the set of all strictly singular operators in $B(X,Y)$. We say that $T\in B(X,Y)$ is finitely strictly singular whenever for each $\epsilon >0$ there exists $n\in \mathbb {N} $ such that for every subspace E of X satisfying ${\text{dim}}(E)\geq n$, there is $x\in E$ such that $\|Tx\|<\epsilon \|x\|$. Denote by ${\mathcal {FSS}}(X,Y)$ the set of all finitely strictly singular operators in $B(X,Y)$. Let $B_{X}=\{x\in X:\|x\|\leq 1\}$ denote the closed unit ball in X. An operator $T\in B(X,Y)$ is compact whenever $TB_{X}=\{Tx:x\in B_{X}\}$ is a relatively norm-compact subset of Y, and denote by ${\mathcal {K}}(X,Y)$ the set of all such compact operators. Properties. Strictly singular operators can be viewed as a generalization of compact operators, as every compact operator is strictly singular. These two classes share some important properties. For example, if X is a Banach space and T is a strictly singular operator in B(X) then its spectrum $\sigma (T)$ satisfies the following properties: (i) the cardinality of $\sigma (T)$ is at most countable; (ii) $0\in \sigma (T)$ (except possibly in the trivial case where X is finite-dimensional); (iii) zero is the only possible limit point of $\sigma (T)$; and (iv) every nonzero $\lambda \in \sigma (T)$ is an eigenvalue. This same "spectral theorem" consisting of (i)-(iv) is satisfied for inessential operators in B(X). Classes ${\mathcal {K}}$, ${\mathcal {FSS}}$, ${\mathcal {SS}}$, and ${\mathcal {E}}$ all form norm-closed operator ideals. This means, whenever X and Y are Banach spaces, the component spaces ${\mathcal {K}}(X,Y)$, ${\mathcal {FSS}}(X,Y)$, ${\mathcal {SS}}(X,Y)$, and ${\mathcal {E}}(X,Y)$ are each closed subspaces (in the operator norm) of B(X,Y), such that the classes are invariant under composition with arbitrary bounded linear operators. In general, we have ${\mathcal {K}}(X,Y)\subset {\mathcal {FSS}}(X,Y)\subset {\mathcal {SS}}(X,Y)\subset {\mathcal {E}}(X,Y)$, and each of the inclusions may or may not be strict, depending on the choices of X and Y. Examples. Every bounded linear map $T:\ell _{p}\to \ell _{q}$, for $1\leq q,p<\infty $, $p\neq q$, is strictly singular. Here, $\ell _{p}$ and $\ell _{q}$ are sequence spaces. Similarly, every bounded linear map $T:c_{0}\to \ell _{p}$ and $T:\ell _{p}\to c_{0}$, for $1\leq p<\infty $, is strictly singular. 
Here $c_{0}$ is the Banach space of sequences converging to zero. This is a corollary of Pitt's theorem, which states that such T, for q < p, are compact. If $1\leq p<q<\infty $ then the formal identity operator $I_{p,q}\in B(\ell _{p},\ell _{q})$ is finitely strictly singular but not compact. If $1<p<q<\infty $ then there exist "Pelczynski operators" in $B(\ell _{p},\ell _{q})$ which are uniformly bounded below on copies of $\ell _{2}^{n}$, $n\in \mathbb {N} $; such operators are strictly singular but, because of the uniform lower bound on these n-dimensional copies, not finitely strictly singular. In this case we have ${\mathcal {K}}(\ell _{p},\ell _{q})\subsetneq {\mathcal {FSS}}(\ell _{p},\ell _{q})\subsetneq {\mathcal {SS}}(\ell _{p},\ell _{q})$. However, every inessential operator with codomain $\ell _{q}$ is strictly singular, so that ${\mathcal {SS}}(\ell _{p},\ell _{q})={\mathcal {E}}(\ell _{p},\ell _{q})$. On the other hand, if X is any separable Banach space then there exists a bounded below operator $T\in B(X,\ell _{\infty })$ which is inessential but not strictly singular. Thus, in particular, ${\mathcal {K}}(\ell _{p},\ell _{\infty })\subsetneq {\mathcal {FSS}}(\ell _{p},\ell _{\infty })\subsetneq {\mathcal {SS}}(\ell _{p},\ell _{\infty })\subsetneq {\mathcal {E}}(\ell _{p},\ell _{\infty })$ for all $1<p<\infty $. Duality. The compact operators form a symmetric ideal, which means $T\in {\mathcal {K}}(X,Y)$ if and only if $T^{*}\in {\mathcal {K}}(Y^{*},X^{*})$. However, this is not the case for the classes ${\mathcal {FSS}}$, ${\mathcal {SS}}$, or ${\mathcal {E}}$. To establish duality relations, we will introduce additional classes. If Z is a closed subspace of a Banach space Y then there exists a "canonical" surjection $Q_{Z}:Y\to Y/Z$ defined via the natural mapping $y\mapsto y+Z$. An operator $T\in B(X,Y)$ is called strictly cosingular whenever given an infinite-dimensional closed subspace Z of Y, the map $Q_{Z}T$ fails to be surjective. Denote by ${\mathcal {SCS}}(X,Y)$ the subspace of strictly cosingular operators in B(X,Y). Theorem 1. Let X and Y be Banach spaces, and let $T\in B(X,Y)$. If T* is strictly singular (resp. strictly cosingular) then T is strictly cosingular (resp. strictly singular). Note that there are examples of strictly singular operators whose adjoints are neither strictly singular nor strictly cosingular (see Plichko, 2004). Similarly, there are strictly cosingular operators whose adjoints are not strictly singular, e.g. the inclusion map $I:c_{0}\to \ell _{\infty }$. So ${\mathcal {SS}}$ is not in full duality with ${\mathcal {SCS}}$. Theorem 2. Let X and Y be Banach spaces, and let $T\in B(X,Y)$. If T* is inessential then so is T. References Aiena, Pietro, Fredholm and Local Spectral Theory, with Applications to Multipliers (2004), ISBN 1-4020-1830-4. Plichko, Anatolij, "Superstrictly Singular and Superstrictly Cosingular Operators," North-Holland Mathematics Studies 197 (2004), pp. 239–255.
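As a numerical aside (an added sketch, not drawn from the references above): the mechanism behind the finite strict singularity of the formal identity $I_{1,2}:\ell _{1}\to \ell _{2}$ can be glimpsed on the coordinate subspaces, where normalized constant vectors shrink in the $\ell _{2}$ norm. This does not by itself verify finite strict singularity, which quantifies over all subspaces of a given dimension, but it exhibits the vanishing lower bound:

```python
import numpy as np

# On span(e_1, ..., e_n) in l1, the normalized vector x = (1/n, ..., 1/n)
# has ||x||_1 = 1 while its l2 norm is 1/sqrt(n), which tends to 0 as n grows.
for n in (1, 10, 100, 10000):
    x = np.ones(n) / n
    print(n, np.linalg.norm(x, 1), np.linalg.norm(x, 2))
# No single c > 0 can satisfy ||I x||_2 >= c ||x||_1 on all of these subspaces.
```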
String diagram

String diagrams are a formal graphical language for representing morphisms in monoidal categories, or more generally 2-cells in 2-categories. They are a prominent tool in applied category theory. When interpreted in the monoidal category of vector spaces and linear maps with the tensor product, string diagrams are called tensor networks or Penrose graphical notation. This has led to the development of categorical quantum mechanics, where the axioms of quantum theory are expressed in the language of monoidal categories.

History

Günter Hotz gave the first mathematical definition of string diagrams in order to formalise electronic circuits.[1] However, the invention of string diagrams is usually credited to Roger Penrose,[2] with Feynman diagrams also described as a precursor.[3] They were later characterised as the arrows of free monoidal categories in a seminal article by André Joyal and Ross Street.[4] While the diagrams in these first articles were hand-drawn, the advent of typesetting software such as LaTeX and PGF/TikZ made the publication of string diagrams more widespread.[5]

The existential graphs and diagrammatic reasoning of Charles Sanders Peirce are arguably the oldest form of string diagrams; they are interpreted in the monoidal category of finite sets and relations with the Cartesian product.[6] The lines of identity of Peirce's existential graphs can be axiomatised as a Frobenius algebra, and the cuts are unary operators on homsets that axiomatise logical negation. This makes string diagrams a sound and complete two-dimensional deduction system for first-order logic,[7] invented independently of the one-dimensional syntax of Gottlob Frege's Begriffsschrift.

Intuition

String diagrams are made of boxes $f:x\to y$, which represent processes, with a list of wires $x$ coming in at the top and $y$ at the bottom, which represent the input and output systems being processed by the box $f$. Starting from a collection of wires and boxes, called a signature, one may generate the set of all string diagrams by induction:

• each box $f:x\to y$ is a string diagram,
• for each list of wires $x$, the identity ${\text{id}}(x):x\to x$ is a string diagram representing the process which does nothing to its input system; it is drawn as a bunch of parallel wires,
• for each pair of string diagrams $f:x\to y$ and $f':x'\to y'$, their tensor $f\otimes f':xx'\to yy'$ is a string diagram representing the parallel composition of processes; it is drawn as the horizontal concatenation of the two diagrams,
• for each pair of string diagrams $f:x\to y$ and $g:y\to z$, their composition $g\circ f:x\to z$ is a string diagram representing the sequential composition of processes; it is drawn as the vertical concatenation of the two diagrams.

Definition

Algebraic

Let the Kleene star $X^{\star }$ denote the free monoid, i.e. the set of lists with elements in a set $X$. A monoidal signature $\Sigma $ is given by:

• a set $\Sigma _{0}$ of generating objects; the lists of generating objects in $ \Sigma _{0}^{\star }$ are also called types,
• a set $\Sigma _{1}$ of generating arrows, also called boxes,
• a pair of functions ${\text{dom}},{\text{cod}}:\Sigma _{1}\to \Sigma _{0}^{\star }$ which assign a domain and codomain to each box, i.e. the input and output types.

A morphism of monoidal signatures $F:\Sigma \to \Sigma '$ is a pair of functions $F_{0}:\Sigma _{0}\to \Sigma '_{0}$ and $F_{1}:\Sigma _{1}\to \Sigma '_{1}$ which is compatible with the domain and codomain, i.e.
such that ${\text{dom}}\circ F_{1}\ =\ F_{0}\circ {\text{dom}}$ and ${\text{cod}}\circ F_{1}\ =\ F_{0}\circ {\text{cod}}$. Thus we get the category $\mathbf {MonSig} $ of monoidal signatures and their morphisms.

There is a forgetful functor $U:\mathbf {MonCat} \to \mathbf {MonSig} $ which sends a monoidal category to its underlying signature and a monoidal functor to its underlying morphism of signatures, i.e. it forgets the identity, composition and tensor. The free functor $C_{-}:\mathbf {MonSig} \to \mathbf {MonCat} $, i.e. the left adjoint to the forgetful functor, sends a monoidal signature $\Sigma $ to the free monoidal category $C_{\Sigma }$ it generates.

String diagrams (with generators from $\Sigma $) are arrows in the free monoidal category $C_{\Sigma }$.[8] The interpretation in a monoidal category $D$ is defined by a monoidal functor $F:C_{\Sigma }\to D$, which by freeness is uniquely determined by a morphism of monoidal signatures $F:\Sigma \to U(D)$. Intuitively, once the images of the generating objects and arrows are given, the image of every diagram they generate is fixed.

Geometric

A topological graph, also called a one-dimensional cell complex, is a tuple $(\Gamma ,\Gamma _{0},\Gamma _{1})$ of a Hausdorff space $\Gamma $, a closed discrete subset $\Gamma _{0}\subseteq \Gamma $ of nodes and a set of connected components $\Gamma _{1}$ called edges, each homeomorphic to an open interval with boundary in $\Gamma _{0}$ and such that $ \Gamma -\Gamma _{0}=\coprod \Gamma _{1}$.

A plane graph between two real numbers $a,b\in \mathbb {R} $ with $a<b$ is a finite topological graph embedded in $\mathbb {R} \times [a,b]$ such that every point $x\in \Gamma \ \cap \ \mathbb {R} \times \{a,b\}$ is also a node $x\in \Gamma _{0}$ and belongs to the closure of exactly one edge in $\Gamma _{1}$. Such points are called outer nodes; they define the domain and codomain ${\text{dom}}(\Gamma ),{\text{cod}}(\Gamma )\in \Gamma _{1}^{\star }$ of the string diagram, i.e. the lists of edges that are connected to the top and bottom boundary. The other nodes $f\in \Gamma _{0}\ -\ \mathbb {R} \times \{a,b\}$ are called inner nodes.

A plane graph is progressive, also called recumbent, when the vertical projection $e\to [a,b]$ is injective for every edge $e\in \Gamma _{1}$. Intuitively, the edges in a progressive plane graph go from top to bottom without bending backward. In that case, each edge can be given a top-to-bottom orientation with designated nodes as source and target. One can then define the domain and codomain ${\text{dom}}(f),{\text{cod}}(f)\in \Gamma _{1}^{\star }$ of each inner node $f$, given by the lists of edges that have $f$ as target and as source, respectively.

A plane graph is generic when the vertical projection $\Gamma _{0}-\mathbb {R} \times \{a,b\}\to [a,b]$ is injective, i.e. no two inner nodes are at the same height. In that case, one can define a list ${\text{boxes}}(\Gamma )\in \Gamma _{0}^{\star }$ of the inner nodes ordered from top to bottom.

A progressive plane graph is labeled by a monoidal signature $\Sigma $ if it comes equipped with a pair of functions $v_{0}:\Gamma _{1}\to \Sigma _{0}$ from edges to generating objects and $v_{1}:\Gamma _{0}-\mathbb {R} \times \{a,b\}\to \Sigma _{1}$ from inner nodes to generating arrows, in a way compatible with domain and codomain.
A deformation of plane graphs is a continuous map $h:\Gamma \times [0,1]\to \mathbb {R} \times [a,b]$ such that

• the image of $h(-,t)$ defines a plane graph for all $t\in [0,1]$,
• for all $x\in \Gamma _{0}$, if $h(x,t)$ is an inner node for some $t$ then it is inner for all $t\in [0,1]$.

A deformation is progressive (generic, labeled) if $h(-,t)$ is progressive (generic, labeled) for all $t\in [0,1]$. Deformations induce an equivalence relation with $\Gamma \sim \Gamma '$ if and only if there is some $h$ with $h(-,0)=\Gamma $ and $h(-,1)=\Gamma '$. String diagrams are equivalence classes of labeled progressive plane graphs. Indeed, one can define:

• the identity diagram ${\text{id}}(x)$ as a set of parallel edges labeled by some type $x\in \Sigma _{0}^{\star }$,
• the composition of two diagrams as their vertical concatenation with the codomain of the first identified with the domain of the second,
• the tensor of two diagrams as their horizontal concatenation.

Combinatorial

While the geometric definition makes explicit the link between category theory and low-dimensional topology, a combinatorial definition is necessary to formalise string diagrams in computer algebra systems and use them to define computational problems. One such definition is to define string diagrams as equivalence classes of well-typed formulae generated by the signature, identity, composition and tensor. In practice, it is more convenient to encode string diagrams as formulae in generic form, which are in bijection with the labeled generic progressive plane graphs defined above.

Fix a monoidal signature $\Sigma $. A layer is defined as a triple $(x,f,y)\in \Sigma _{0}^{\star }\times \Sigma _{1}\times \Sigma _{0}^{\star }=:L(\Sigma )$ of a type $x$ on the left, a box $f$ in the middle and a type $y$ on the right. Layers have a domain and codomain ${\text{dom}},{\text{cod}}:L(\Sigma )\to \Sigma _{0}^{\star }$ defined in the obvious way. This forms a directed multigraph, also known as a quiver, with the types as vertices and the layers as edges. A string diagram $d$ is encoded as a path in this multigraph, i.e. it is given by:

• a domain ${\text{dom}}(d)\in \Sigma _{0}^{\star }$ as starting point,
• a length ${\text{len}}(d)=n\geq 0$,
• a list of layers ${\text{layers}}(d)=d_{1}\dots d_{n}\in L(\Sigma )$ such that ${\text{dom}}(d_{1})={\text{dom}}(d)$ and ${\text{cod}}(d_{i})={\text{dom}}(d_{i+1})$ for all $i<n$.

In fact, the explicit list of layers is redundant; it is enough to specify the length of the type to the left of each layer, known as the offset. The whiskering $d\otimes z$ of a diagram $d=(x_{1},f_{1},y_{1})\dots (x_{n},f_{n},y_{n})$ by a type $z$ is defined as the concatenation to the right of each layer $d\otimes z=(x_{1},f_{1},y_{1}z)\dots (x_{n},f_{n},y_{n}z)$ and symmetrically for the whiskering $z\otimes d$ on the left. One can then define:

• the identity diagram ${\text{id}}(x)$ with ${\text{len}}({\text{id}}(x))=0$ and ${\text{dom}}({\text{id}}(x))=x$,
• the composition of two diagrams as the concatenation of their lists of layers,
• the tensor of two diagrams as the composition of whiskerings $d\otimes d'=d\otimes {\text{dom}}(d')\ \circ \ {\text{cod}}(d)\otimes d'$.

Note that because the diagram is in generic form (i.e. each layer contains exactly one box) the definition of tensor is necessarily biased: the diagram on the left-hand side comes above the one on the right-hand side. One could have chosen the opposite definition $ d\otimes d'={\text{dom}}(d)\otimes d'\ \circ \ d\otimes {\text{cod}}(d')$.
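The encoding by layers is straightforward to implement. The following Python sketch (the names Diagram, layer_dom, etc. are ours and not the API of any particular library, although DisCoPy implements essentially this representation) stores a diagram as its domain together with a list of layers, and defines composition and tensor exactly as above:

    # Toy signature: each box name maps to its (domain, codomain), types are tuples.
    SIGMA = {"f": (("a",), ("b", "b")), "g": (("b",), ("c",))}

    def layer_dom(layer):
        x, f, y = layer
        return x + SIGMA[f][0] + y

    def layer_cod(layer):
        x, f, y = layer
        return x + SIGMA[f][1] + y

    class Diagram:
        def __init__(self, dom, layers):
            t = dom
            for layer in layers:           # check that consecutive layers compose
                assert layer_dom(layer) == t
                t = layer_cod(layer)
            self.dom, self.layers, self.cod = dom, list(layers), t

        def then(self, other):             # sequential composition: concatenate layers
            assert self.cod == other.dom
            return Diagram(self.dom, self.layers + other.layers)

        def whisker_right(self, z):        # d (x) z: append z to the right of each layer
            return Diagram(self.dom + z, [(x, f, y + z) for (x, f, y) in self.layers])

        def whisker_left(self, z):         # z (x) d: prepend z to the left of each layer
            return Diagram(z + self.dom, [(z + x, f, y) for (x, f, y) in self.layers])

        def tensor(self, other):           # d (x) d' = (d (x) dom d') ; (cod d (x) d')
            return self.whisker_right(other.dom).then(other.whisker_left(self.cod))

    def identity(x):                       # the length-0 diagram on type x
        return Diagram(tuple(x), [])

    def box(f):                            # a single layer with empty types on both sides
        return Diagram(SIGMA[f][0], [((), f, ())])

    # f : a -> b b, followed by id(b) (x) g, giving a diagram a -> b c
    d = box("f").then(identity(("b",)).tensor(box("g")))
    assert d.dom == ("a",) and d.cod == ("b", "c")

Note how the tensor is built from whiskerings, so the left factor's layers come first, i.e. above, matching the bias discussed above.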
Two diagrams are equal (up to the axioms of monoidal categories) whenever they are in the same equivalence class of the congruence relation generated by the interchanger:

$d\otimes {\text{dom}}(d')\ \circ \ {\text{cod}}(d)\otimes d'\quad =\quad {\text{dom}}(d)\otimes d'\ \circ \ d\otimes {\text{cod}}(d')$

That is, if the boxes in two consecutive layers are not connected, then their order can be swapped. Intuitively, if there is no communication between two parallel processes, then the order in which they happen is irrelevant. The word problem for free monoidal categories, i.e. deciding whether two given diagrams are equal, can be solved in polynomial time. The interchanger is a confluent rewriting system on the subset of boundary-connected diagrams, i.e. whenever the plane graphs have no more than one connected component which is not connected to the domain or codomain and the Eckmann–Hilton argument does not apply.[9]

Extension to 2-categories

The idea is to represent structures of dimension d by structures of dimension 2-d, using Poincaré duality. Thus,

• an object is represented by a portion of plane,
• a 1-cell $f:A\to B$ is represented by a vertical segment, called a string, separating the plane in two (the right part corresponding to A and the left one to B),
• a 2-cell $\alpha :f\Rightarrow g:A\to B$ is represented by an intersection of strings (the strings corresponding to f above the link, the strings corresponding to g below the link).

The parallel composition of 2-cells corresponds to the horizontal juxtaposition of diagrams and the sequential composition to the vertical juxtaposition of diagrams.

Duality between commutative diagrams (on the left hand side) and string diagrams (on the right hand side)

A monoidal category is equivalent to a 2-category with a single 0-cell. Intuitively, going from monoidal categories to 2-categories amounts to adding colours to the background of string diagrams.

Examples

The snake equation

Consider an adjunction $(F,G,\eta ,\varepsilon )$ between two categories ${\mathcal {C}}$ and ${\mathcal {D}}$ where $F:{\mathcal {C}}\leftarrow {\mathcal {D}}$ is left adjoint to $G:{\mathcal {C}}\rightarrow {\mathcal {D}}$ and the natural transformations $\eta :I\rightarrow GF$ and $\varepsilon :FG\rightarrow I$ are respectively the unit and the counit. The string diagrams corresponding to these natural transformations are:

String diagram of the unit
String diagram of the counit
String diagram of the identity

The string corresponding to the identity functor is drawn as a dotted line and can be omitted. The definition of an adjunction requires the following equalities:

${\begin{aligned}(\varepsilon F)\circ F(\eta )&=1_{F}\\G(\varepsilon )\circ (\eta G)&=1_{G}\end{aligned}}$

The first one is depicted as

Diagrammatic representation of the equality $(\varepsilon F)\circ F(\eta )=1_{F}$

A monoidal category where every object has a left and a right adjoint is called a rigid category. String diagrams for rigid categories can be defined as non-progressive plane graphs, i.e. the edges can bend backward. In the context of categorical quantum mechanics, this is known as the snake equation. The category of finite-dimensional Hilbert spaces is rigid; this fact underlies the proof of correctness for the quantum teleportation protocol. The unit and counit of the adjunction are an abstraction of the Bell state and the Bell measurement, respectively.
If Alice and Bob share two qubits, Y and Z, in an entangled state, and Alice performs a (post-selected) entangled measurement between Y and another qubit X, then the qubit X is teleported from Alice to Bob: quantum teleportation is an identity morphism.

An illustration of the diagrammatic calculus: the quantum teleportation protocol as modeled in categorical quantum mechanics.

The same equation appears in the definition of pregroup grammars, where it captures the notion of information flow in natural language semantics. This observation has led to the development of the DisCoCat framework and quantum natural language processing.

Hierarchy of graphical languages

Many extensions of string diagrams have been introduced to represent arrows in monoidal categories with extra structure, forming a hierarchy of graphical languages which is classified in Selinger's Survey of graphical languages for monoidal categories.[10]

• Braided monoidal categories with 3-dimensional diagrams, a generalisation of braid groups.
• Symmetric monoidal categories with 4-dimensional diagrams where edges can cross, a generalisation of the symmetric group.
• Ribbon categories with 3-dimensional diagrams where the edges are undirected, a generalisation of knot diagrams.
• Compact closed categories with 4-dimensional diagrams where the edges are undirected, a generalisation of Penrose graphical notation.
• Dagger categories where every diagram has a horizontal reflection.

List of applications

String diagrams have been used to formalise the following objects of study.

• Concurrency theory[11]
• Artificial neural networks[12]
• Game theory[13]
• Bayesian probability[14]
• Consciousness[15]
• Markov kernels[16]
• Signal-flow graphs[17]
• Conjunctive queries[18]
• Bidirectional transformations[19]
• Categorical quantum mechanics
• Quantum circuits, measurement-based quantum computing and quantum error correction, see ZX-calculus
• Natural language processing, see DisCoCat
• Quantum natural language processing

See also

• Proof nets, a generalisation of string diagrams used to denote proofs in linear logic
• Existential graphs, a precursor of string diagrams used to denote formulae in first-order logic
• Penrose graphical notation and Feynman diagrams, two precursors of string diagrams in physics
• Tensor networks, the interpretation of string diagrams in vector spaces, linear maps and tensor product

References

1. Hotz, Günter (1965). "Eine Algebraisierung des Syntheseproblems von Schaltkreisen I.". Elektronische Informationsverarbeitung und Kybernetik. 1 (3): 185–205.
2. Penrose, Roger (1971). "Applications of negative dimensional tensors". Combinatorial Mathematics and Its Applications. 1: 221–244.
3. Baez, J.; Stay, M. (2011), Coecke, Bob (ed.), "Physics, Topology, Logic and Computation: A Rosetta Stone", New Structures for Physics, Berlin, Heidelberg: Springer, vol. 813, pp. 95–172, arXiv:0903.0340, Bibcode:2011LNP...813...95B, doi:10.1007/978-3-642-12821-9_2, ISBN 978-3-642-12821-9, S2CID 115169297, retrieved 2022-11-08.
4. Joyal, André; Street, Ross (1991). "The geometry of tensor calculus, I". Advances in Mathematics. 88 (1): 55–112. doi:10.1016/0001-8708(91)90003-P.
5. "Categories: History of string diagrams (thread, 2017may02-...)". angg.twu.net. Retrieved 2022-11-11.
6. Brady, Geraldine; Trimble, Todd H (2000). "A categorical interpretation of CS Peirce's propositional logic Alpha". Journal of Pure and Applied Algebra. 149 (3): 213–239. doi:10.1016/S0022-4049(98)00179-0.
7. Haydon, Nathan; Sobociński, Paweł (2020). "Compositional diagrammatic first-order logic". International Conference on Theory and Application of Diagrams. Springer: 402–418.
"Compositional diagrammatic first-order logic". International Conference on Theory and Application of Diagrams. Springer: 402–418. 8. Joyal, André; Street, Ross (1988). "Planar diagrams and tensor algebra". Unpublished Manuscript, Available from Ross Street's Website. 9. Vicary, Jamie; Delpeuch, Antonin (2022). "Normalization for planar string diagrams and a quadratic equivalence algorithm". Logical Methods in Computer Science. 18. 10. Selinger, Peter (2010), "A survey of graphical languages for monoidal categories", New structures for physics, Springer, pp. 289–355, retrieved 2022-11-08 11. Abramsky, Samson (1996). "Retracing some paths in process algebra". International Conference on Concurrency Theory. Springer: 1–17. 12. Fong, Brendan; Spivak, David I.; Tuyéras, Rémy (2019-05-01). "Backprop as Functor: A compositional perspective on supervised learning". arXiv:1711.10455 [math.CT]. 13. Ghani, Neil; Hedges, Jules; Winschel, Viktor; Zahn, Philipp (2018). "Compositional Game Theory". Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science. pp. 472–481. doi:10.1145/3209108.3209165. ISBN 9781450355834. S2CID 17887510. 14. Coecke, Bob; Spekkens, Robert W (2012). "Picturing classical and quantum Bayesian inference". Synthese. 186 (3): 651–696. arXiv:1102.2368. doi:10.1007/s11229-011-9917-5. S2CID 3736082. 15. Signorelli, Camilo Miguel; Wang, Quanlong; Coecke, Bob (2021-10-01). "Reasoning about conscious experience with axiomatic and graphical mathematics". Consciousness and Cognition. 95: 103168. doi:10.1016/j.concog.2021.103168. ISSN 1053-8100. PMID 34627099. S2CID 235683270. 16. Fritz, Tobias (August 2020). "A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics". Advances in Mathematics. 370: 107239. arXiv:1908.07021. doi:10.1016/j.aim.2020.107239. S2CID 201103837. 17. Bonchi, Filippo; Sobociński, Pawel; Zanasi, Fabio (September 2014). "A Categorical Semantics of Signal Flow Graphs". CONCUR 2014 - Concurrency Theory - 25th International Conference. Lecture Notes in Computer Science. Rome, Italy. CONCUR 2014 - Concurrency Theory - 25th International Conference: 435–450. doi:10.1007/978-3-662-44584-6_30. ISBN 978-3-662-44583-9. S2CID 18492893. 18. Bonchi, Filippo; Seeber, Jens; Sobocinski, Pawel (2018-04-20). "Graphical Conjunctive Queries". arXiv:1804.07626 [cs.LO]. 19. Riley, Mitchell (2018). "Categories of optics". arXiv:1809.00738 [math.CT]. External links • TheCatsters (2007). String diagrams 1 (streamed video). Youtube. Archived from the original on 2021-12-19. 
• String diagrams at the nLab
• DisCoPy, a Python toolkit for computing with string diagrams
• Media related to String diagram at Wikimedia Commons
String graph

In graph theory, a string graph is an intersection graph of curves in the plane; each curve is called a "string". Given a graph G, G is a string graph if and only if there exists a set of curves, or strings, such that the graph having a vertex for each curve and an edge for each intersecting pair of curves is isomorphic to G.

Background

Seymour Benzer (1959) described a concept similar to string graphs as they applied to genetic structures. In that context, he also posed the specific case of intersecting intervals on a line, namely the now classical family of interval graphs. Later, Sinden (1966) applied the same idea to electrical networks and printed circuits. The mathematical study of string graphs began with the paper Ehrlich, Even & Tarjan (1976) and through a collaboration between Sinden and Ronald Graham, where the characterization of string graphs eventually came to be posed as an open question at the 5th Hungarian Colloquium on Combinatorics in 1976.[1] However, the recognition of string graphs was eventually proven to be NP-complete, implying that no simple characterization is likely to exist.[2]

Related graph classes

Every planar graph is a string graph:[3] one may form a string graph representation of an arbitrary plane-embedded graph by drawing a string for each vertex that loops around the vertex and around the midpoint of each adjacent edge, as shown in the figure. For any edge uv of the graph, the strings for u and v cross each other twice near the midpoint of uv, and there are no other crossings, so the pairs of strings that cross represent exactly the adjacent pairs of vertices of the original planar graph. Alternatively, by the circle packing theorem, any planar graph may be represented as a collection of circles, any two of which cross if and only if the corresponding vertices are adjacent; these circles (with a starting and ending point chosen to turn them into open curves) provide a string graph representation of the given planar graph. Chalopin, Gonçalves & Ochem (2007) proved that every planar graph has a string representation in which each pair of strings has at most one crossing point, unlike the representations described above. Scheinerman's conjecture, now proven, is the even stronger statement that every planar graph may be represented by the intersection graph of straight line segments, a very special case of strings.

If every edge of a given graph G is subdivided, the resulting graph is a string graph if and only if G is planar. In particular, the subdivision of the complete graph K5 shown in the illustration is not a string graph, because K5 is not planar.[3]

Every circle graph, as an intersection graph of line segments (the chords of a circle), is also a string graph. Every chordal graph may be represented as a string graph: chordal graphs are intersection graphs of subtrees of trees, and one may form a string representation of a chordal graph by forming a planar embedding of the corresponding tree and replacing each subtree by a string that traces around the subtree's edges. The complement graph of every comparability graph is also a string graph.[4]
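Intersection graphs of straight line segments, the special case of strings appearing in Scheinerman's conjecture, are easy to compute explicitly. The following Python sketch (written for this article; the function names are ours) builds the intersection graph of a list of segments using the standard orientation test for segment intersection:

    from itertools import combinations

    def orient(p, q, r):
        # sign of the cross product (q - p) x (r - p)
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def on_segment(p, q, r):
        # assuming p, q, r are collinear: does r lie on the segment pq?
        return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
                and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

    def segments_intersect(s, t):
        (p1, p2), (p3, p4) = s, t
        d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
        d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
        if 0 not in (d1, d2, d3, d4) and (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0):
            return True  # proper crossing
        # touching endpoints or collinear overlap
        return ((d1 == 0 and on_segment(p3, p4, p1)) or (d2 == 0 and on_segment(p3, p4, p2))
                or (d3 == 0 and on_segment(p1, p2, p3)) or (d4 == 0 and on_segment(p1, p2, p4)))

    def intersection_graph(segments):
        # one vertex per segment, one edge per intersecting pair of segments
        return {(i, j) for (i, s), (j, t) in combinations(enumerate(segments), 2)
                if segments_intersect(s, t)}

    # three chords: chord 0 crosses chords 1 and 2, which are disjoint from each other
    chords = [((-1, -1), (1, 1)), ((-1, 1), (0, -1)), ((1, 0), (0, 1))]
    assert intersection_graph(chords) == {(0, 1), (0, 2)}

For general curved strings one would represent each string as a polyline and apply the same pairwise test to its segments; the graph structure is unchanged.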
Other results

Ehrlich, Even & Tarjan (1976) showed computing the chromatic number of string graphs to be NP-hard. Kratochvil (1991a) found that string graphs form an induced minor closed class, but not a minor closed class of graphs. Every m-edge string graph can be partitioned into two subsets, each a constant fraction of the size of the whole graph, by the removal of $O(m^{3/4}\log ^{1/2}m)$ vertices. It follows that the biclique-free string graphs, string graphs containing no $K_{t,t}$ subgraph for some constant t, have $O(n)$ edges and, more strongly, have polynomial expansion.[5]

Notes

1. Graham (1976).
2. Kratochvil (1991b) showed string graph recognition to be NP-hard, but was not able to show that it could be solved in NP. After intermediate results by Schaefer & Štefankovič (2001) and Pach & Tóth (2002), Schaefer, Sedgwick & Štefankovič (2003) completed the proof that the problem is NP-complete.
3. Schaefer & Štefankovič (2001) credit this observation to Sinden (1966).
4. Golumbic, Rotem & Urrutia (1983) and Lovász (1983). See also Fox & Pach (2010).
5. Fox & Pach (2010); Dvořák & Norin (2016).

References

• Benzer, S. (1959), "On the topology of the genetic fine structure", Proceedings of the National Academy of Sciences of the United States of America, 45 (11): 1607–1620, Bibcode:1959PNAS...45.1607B, doi:10.1073/pnas.45.11.1607, PMC 222769, PMID 16590553.
• Chalopin, J.; Gonçalves, D.; Ochem, P. (2007), "Planar graphs are in 1-STRING", Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ACM and SIAM, pp. 609–617.
• Dvořák, Zdeněk; Norin, Sergey (2016), "Strongly sublinear separators and polynomial expansion", SIAM Journal on Discrete Mathematics, 30 (2): 1095–1101, arXiv:1504.04821, Bibcode:2015arXiv150404821D, doi:10.1137/15M1017569.
• Ehrlich, G.; Even, S.; Tarjan, R. E. (1976), "Intersection graphs of curves in the plane", Journal of Combinatorial Theory, 21 (1): 8–20, doi:10.1016/0095-8956(76)90022-8.
• Fox, Jacob; Pach, János (2010), "A separator theorem for string graphs and its applications", Combinatorics, Probability and Computing, 19 (3): 371, doi:10.1017/s0963548309990459, S2CID 5705145.
• Golumbic, M.; Rotem, D.; Urrutia, J. (1983), "Comparability graphs and intersection graphs", Discrete Mathematics, 43 (1): 37–46, doi:10.1016/0012-365X(83)90019-5.
• Graham, R. L. (1976), "Problem 1", Open Problems at 5th Hungarian Colloquium on Combinatorics.
• Kratochvil, Jan (1991a), "String Graphs. I. The number of critical nonstring graphs is infinite", Journal of Combinatorial Theory, Series B, 52 (1): 53–66, doi:10.1016/0095-8956(91)90090-7.
• Kratochvil, Jan (1991b), "String Graphs. II. Recognizing string graphs is NP-Hard", Journal of Combinatorial Theory, Series B, 52 (1): 67–78, doi:10.1016/0095-8956(91)90091-W.
• Lovász, L. (1983), "Perfect graphs", Selected Topics in Graph Theory, vol. 2, London: Academic Press, pp. 55–87.
• Pach, János; Tóth, Geza (2002), "Recognizing string graphs is decidable", Discrete & Computational Geometry, 28 (4): 593–606, doi:10.1007/s00454-002-2891-4.
• Schaefer, Marcus; Štefankovič, Daniel (2001), "Decidability of string graphs", Proceedings of the 33rd Annual ACM Symposium on the Theory of Computing (STOC 2001): 241–246.
• Schaefer, Marcus; Sedgwick, Eric; Štefankovič, Daniel (2003), "Recognizing string graphs in NP", Journal of Computer and System Sciences, 67 (2): 365–380, doi:10.1016/S0022-0000(03)00045-X.
• Sinden, F. W. (1966), "Topology of thin film RC-circuits", Bell System Technical Journal, 45 (9): 1639–1662, doi:10.1002/j.1538-7305.1966.tb01713.x.
String group

In topology, a branch of mathematics, a string group is an infinite-dimensional group $\operatorname {String} (n)$ introduced by Stolz (1996) as a $3$-connected cover of a spin group. A string manifold is a manifold with a lifting of its frame bundle to a string group bundle. This means that in addition to being able to define holonomy along paths, one can also define holonomies for surfaces going between strings. There is a short exact sequence of topological groups

$0\rightarrow K(\mathbb {Z} ,2)\rightarrow \operatorname {String} (n)\rightarrow \operatorname {Spin} (n)\rightarrow 0$

where $K(\mathbb {Z} ,2)$ is an Eilenberg–MacLane space and $\operatorname {Spin} (n)$ is a spin group. The string group is an entry in the Whitehead tower (dual to the notion of Postnikov tower) for the orthogonal group:

$\cdots \rightarrow \operatorname {Fivebrane} (n)\to \operatorname {String} (n)\rightarrow \operatorname {Spin} (n)\rightarrow \operatorname {SO} (n)\rightarrow \operatorname {O} (n)$

It is obtained by killing the $\pi _{3}$ homotopy group of $\operatorname {Spin} (n)$, in the same way that $\operatorname {Spin} (n)$ is obtained from $\operatorname {SO} (n)$ by killing $\pi _{1}$. The resulting manifold cannot be any finite-dimensional Lie group, since all finite-dimensional compact Lie groups have a non-vanishing $\pi _{3}$. The fivebrane group follows, by killing $\pi _{7}$.

More generally, the construction of the Postnikov tower via short exact sequences starting with Eilenberg–MacLane spaces can be applied to any Lie group G, giving the string group String(G).

Intuition for the string group

The relevance of the Eilenberg–MacLane space $K(\mathbb {Z} ,2)$ lies in the fact that there are the homotopy equivalences $K(\mathbb {Z} ,1)\simeq U(1)\simeq B\mathbb {Z} $ for the classifying space $B\mathbb {Z} $, and the fact $K(\mathbb {Z} ,2)\simeq BU(1)$. Notice that because the complex spin group is a group extension

$0\to K(\mathbb {Z} ,1)\to \operatorname {Spin} ^{\mathbb {C} }(n)\to \operatorname {Spin} (n)\to 0$

the String group can be thought of as a "higher" complex spin group extension, in the sense of higher group theory, since the space $K(\mathbb {Z} ,2)$ is an example of a higher group. It can be thought of as the topological realization of the groupoid $\mathbf {B} U(1)$ whose object is a single point and whose morphisms are the group $U(1)$. Note that the homotopical degree of $K(\mathbb {Z} ,2)$ is $2$, meaning its homotopy is concentrated in degree $2$, because it comes from the homotopy fiber of the map $\operatorname {String} (n)\to \operatorname {Spin} (n)$ from the Whitehead tower, whose homotopy cokernel is $K(\mathbb {Z} ,3)$. This is because the homotopy fiber lowers the degree by $1$.

Understanding the geometry

The geometry of String bundles requires the understanding of multiple constructions in homotopy theory,[1] but they essentially boil down to understanding what $K(\mathbb {Z} ,2)$-bundles are, and how these higher group extensions behave. Namely, $K(\mathbb {Z} ,2)$-bundles on a space $M$ are represented geometrically as bundle gerbes, since any $K(\mathbb {Z} ,2)$-bundle can be realized as the homotopy fiber of a map giving a homotopy square

${\begin{matrix}P&\to &*\\\downarrow &&\downarrow \\M&\xrightarrow {} &K(\mathbb {Z} ,3)\end{matrix}}$

where $K(\mathbb {Z} ,3)=B(K(\mathbb {Z} ,2))$.
Then, a string bundle $S\to M$ must map to a spin bundle $\mathbb {S} \to M$ which is $K(\mathbb {Z} ,2)$-equivariant, analogously to how spin bundles map equivariantly to the frame bundle.

Fivebrane group and higher groups

The fivebrane group can similarly be understood[2] by killing the $\pi _{7}(\operatorname {Spin} (n))\cong \pi _{7}(\operatorname {O} (n))$ group of the string group $\operatorname {String} (n)$ using the Whitehead tower. It can then be understood again using an exact sequence of higher groups

$0\to K(\mathbb {Z} ,6)\to \operatorname {Fivebrane} (n)\to \operatorname {String} (n)\to 0$

giving a presentation of $\operatorname {Fivebrane} (n)$ in terms of an iterated extension, i.e. an extension of $\operatorname {String} (n)$ by $K(\mathbb {Z} ,6)$. Note that the map on the right is from the Whitehead tower, and the map on the left is the homotopy fiber.

See also

• Gerbe
• N-group (category theory)
• Elliptic cohomology
• String bordism

References

1. Jurco, Branislav (August 2011). "Crossed Module Bundle Gerbes; Classification, String Group and Differential Geometry". International Journal of Geometric Methods in Modern Physics. 08 (5): 1079–1095. arXiv:math/0510078. Bibcode:2011IJGMM..08.1079J. doi:10.1142/S0219887811005555. ISSN 0219-8878. S2CID 1347840.
2. Sati, Hisham; Schreiber, Urs; Stasheff, Jim (November 2009). "Fivebrane Structures". Reviews in Mathematical Physics. 21 (10): 1197–1240. arXiv:0805.0564. Bibcode:2009RvMaP..21.1197S. doi:10.1142/S0129055X09003840. ISSN 0129-055X. S2CID 13307997.

• Henriques, André G.; Douglas, Christopher L.; Hill, Michael A. (2011), "Homological obstructions to string orientations", Int. Math. Res. Notices, 18: 4074–4088, arXiv:0810.2131, Bibcode:2008arXiv0810.2131D.
• Wockel, Christoph; Sachse, Christoph; Nikolaus, Thomas (2013), "A Smooth Model for the String Group", International Mathematics Research Notices, 2013 (16): 3678–3721, arXiv:1104.4288, Bibcode:2011arXiv1104.4288N, doi:10.1093/imrn/rns154.
• Stolz, Stephan (1996), "A conjecture concerning positive Ricci curvature and the Witten genus", Mathematische Annalen, 304 (4): 785–800, doi:10.1007/BF01446319, ISSN 0025-5831, MR 1380455, S2CID 123359573.
• Stolz, Stephan; Teichner, Peter (2004), "What is an elliptic object?" (PDF), Topology, geometry and quantum field theory, London Math. Soc. Lecture Note Ser., vol. 308, Cambridge University Press, pp. 247–343, doi:10.1017/CBO9780511526398.013, ISBN 9780521540490, MR 2079378.

External links

• Baez, J. (2007), Higher Gauge Theory and the String Group
• From Loop Groups to 2-groups - gives a characterization of String(n) as a 2-group
• string group at the nLab
• Whitehead tower at the nLab
• What is an elliptic object?
String topology

String topology, a branch of mathematics, is the study of algebraic structures on the homology of free loop spaces. The field was started by Moira Chas and Dennis Sullivan (1999).

Motivation

While the singular cohomology of a space always has a product structure, this is not true for the singular homology of a space. Nevertheless, it is possible to construct such a structure for an oriented manifold $M$ of dimension $d$. This is the so-called intersection product. Intuitively, one can describe it as follows: given classes $x\in H_{p}(M)$ and $y\in H_{q}(M)$, take their product $x\times y\in H_{p+q}(M\times M)$ and make it transversal to the diagonal $M\hookrightarrow M\times M$. The intersection is then a class in $H_{p+q-d}(M)$, the intersection product of $x$ and $y$. One way to make this construction rigorous is to use stratifolds.

Another case where the homology of a space has a product is the (based) loop space $\Omega X$ of a space $X$. Here the space itself has a product $m\colon \Omega X\times \Omega X\to \Omega X$ by going first through the first loop and then through the second one. There is no analogous product structure for the free loop space $LX$ of all maps from $S^{1}$ to $X$, since the two loops need not have a common point. A substitute for the map $m$ is the map $\gamma \colon {\rm {Map}}(S^{1}\lor S^{1},M)\to LM$ where ${\rm {Map}}(S^{1}\lor S^{1},M)$ is the subspace of $LM\times LM$ where the values of the two loops coincide at 0, and $\gamma $ is defined again by composing the loops.

The Chas–Sullivan product

The idea of the Chas–Sullivan product is to combine the product structures above. Consider two classes $x\in H_{p}(LM)$ and $y\in H_{q}(LM)$. Their product $x\times y$ lies in $H_{p+q}(LM\times LM)$. We need a map

$i^{!}\colon H_{p+q}(LM\times LM)\to H_{p+q-d}({\rm {Map}}(S^{1}\lor S^{1},M)).$

One way to construct this is to use stratifolds (or another geometric definition of homology) to do transversal intersection (after interpreting ${\rm {Map}}(S^{1}\lor S^{1},M)\subset LM\times LM$ as an inclusion of Hilbert manifolds). Another approach starts with the collapse map from $LM\times LM$ to the Thom space of the normal bundle of ${\rm {Map}}(S^{1}\lor S^{1},M)$. Composing the induced map in homology with the Thom isomorphism, we get the map we want. Now we can compose $i^{!}$ with the induced map of $\gamma $ to get a class in $H_{p+q-d}(LM)$, the Chas–Sullivan product of $x$ and $y$ (see e.g. Cohen & Jones (2002)).

Remarks

• As in the case of the intersection product, there are different sign conventions concerning the Chas–Sullivan product. In some conventions it is graded commutative, in some it is not.
• The same construction works if we replace $H$ by another multiplicative homology theory $h$ if $M$ is oriented with respect to $h$.
• Furthermore, we can replace $LM$ by $L^{n}M={\rm {Map}}(S^{n},M)$. By an easy variation of the above construction, we get that $h_{*}({\rm {Map}}(N,M))$ is a module over $h_{*}L^{n}M$ if $N$ is a manifold of dimension $n$.
• The Serre spectral sequence is compatible with the above algebraic structures for both the fiber bundle ${\rm {ev}}\colon LM\to M$ with fiber $\Omega M$ and the fiber bundle $LE\to LB$ for a fiber bundle $E\to B$, which is important for computations (see Cohen, Jones & Yan (2004) and Meier (2011)).
The Batalin–Vilkovisky structure

There is an action $S^{1}\times LM\to LM$ by rotation, which induces a map $H_{*}(S^{1})\otimes H_{*}(LM)\to H_{*}(LM)$. Plugging in the fundamental class $[S^{1}]\in H_{1}(S^{1})$ gives an operator $\Delta \colon H_{*}(LM)\to H_{*+1}(LM)$ of degree 1. One can show that this operator interacts nicely with the Chas–Sullivan product, in the sense that together they form the structure of a Batalin–Vilkovisky algebra on $H_{*}(LM)$. This operator tends to be difficult to compute in general. The defining identities of a Batalin–Vilkovisky algebra were checked in the original paper "by pictures." A less direct, but arguably more conceptual way to do that could be by using an action of a cactus operad on the free loop space $LM$.[1] The cactus operad is weakly equivalent to the framed little disks operad,[2] and its action on a topological space implies a Batalin–Vilkovisky structure on homology.[3]

Field theories

There are several attempts to construct (topological) field theories via string topology. The basic idea is to fix an oriented manifold $M$ and associate to every surface with $p$ incoming and $q$ outgoing boundary components (with $p\geq 1$) an operation

$H_{*}(LM)^{\otimes p}\to H_{*}(LM)^{\otimes q}$

which fulfills the usual axioms for a topological field theory. The Chas–Sullivan product is associated to the pair of pants. It can be shown that these operations are 0 if the genus of the surface is greater than 0 (Tamanoi (2010)).

References

1. Voronov, Alexander (2005). "Notes on universal algebra". Graphs and Patterns in Mathematics and Theoretical Physics (M. Lyubich and L. Takhtajan, eds.). Providence, RI: Amer. Math. Soc. pp. 81–103.
2. Cohen, Ralph L.; Hess, Kathryn; Voronov, Alexander A. (2006). "The cacti operad". String topology and cyclic homology. Basel: Birkhäuser. ISBN 978-3-7643-7388-7.
3. Getzler, Ezra (1994). "Batalin-Vilkovisky algebras and two-dimensional topological field theories". Comm. Math. Phys. 159 (2): 265–285. arXiv:hep-th/9212043. Bibcode:1994CMaPh.159..265G. doi:10.1007/BF02102639. S2CID 14823949.

Sources

• Chas, Moira; Sullivan, Dennis (1999). "String Topology". arXiv:math/9911159v1.
• Cohen, Ralph L.; Jones, John D. S. (2002). "A homotopy theoretic realization of string topology". Mathematische Annalen. 324 (4): 773–798. arXiv:math/0107187. doi:10.1007/s00208-002-0362-0. MR 1942249. S2CID 16916132.
• Cohen, Ralph Louis; Jones, John D. S.; Yan, Jun (2004). "The loop homology algebra of spheres and projective spaces". In Arone, Gregory; Hubbuck, John; Levi, Ran; Weiss, Michael (eds.). Categorical decomposition techniques in algebraic topology: International Conference in Algebraic Topology, Isle of Skye, Scotland, June 2001. Birkhäuser. pp. 77–92.
• Meier, Lennart (2011). "Spectral Sequences in String Topology". Algebraic & Geometric Topology. 11 (5): 2829–2860. arXiv:1001.4906. doi:10.2140/agt.2011.11.2829. MR 2846913. S2CID 58893087.
• Tamanoi, Hirotaka (2010). "Loop coproducts in string topology and triviality of higher genus TQFT operations". Journal of Pure and Applied Algebra. 214 (5): 605–615. arXiv:0706.1276. doi:10.1016/j.jpaa.2009.07.011. MR 2577666. S2CID 2147096.
Strip algebra

Strip Algebra is a set of elements and operators for the description of carbon nanotube structures, considered as a class of polyhedra, and more precisely, of polyhedra with vertices formed by three edges. This restriction is imposed on the polyhedra because carbon nanotubes are formed of sp2 carbon atoms. Strip Algebra was developed initially [1] for the determination of the structure connecting two arbitrary nanotubes, but has also been extended to the connection of three identical nanotubes.[2]

Background

Graphitic systems are molecules and crystals formed of carbon atoms in sp2 hybridization. Thus, the atoms are arranged on a hexagonal grid. Graphite, nanotubes, and fullerenes are examples of graphitic systems. All of them share the property that each atom is bonded to three others (3-valent). The relation between the number of vertices, edges and faces of any finite polyhedron is given by Euler's polyhedron formula:

$e-f-v=2(g-1),\,$

where e, f and v are the number of edges, faces and vertices, respectively, and g is the genus of the polyhedron, i.e., the number of "holes" in the surface. For example, a sphere is a surface of genus 0, while a torus is of genus 1.

Nomenclature

A substrip is identified by a pair of natural numbers measuring the position of the last ring in parentheses, together with the turns induced by the defect ring. The number of edges of the defect can be extracted from these.

$(n,m)[T_{+},T_{-}]$

Elements

A strip is defined as a set of consecutive rings that can be joined with others by sharing a side of its first or last ring. Numerous complex structures can be formed with strips. As said before, a strip has two connections, one at its beginning and one at its end. With strips alone, two such structures can be formed.

Operators

Given the definition of a strip, a set of operations may be defined. These are necessary to find out the combined result of a set of contiguous strips.

• Addition of two strips: (upcoming)
• Turn Operators: (upcoming)
• Inversion of a strip: (upcoming)

Applications

Strip Algebra has been applied to the construction of nanotube heterojunctions, and was first implemented in the CoNTub v1.0 software, which makes it possible to find the precise position of all the carbon rings needed to produce a heterojunction with arbitrary indices and chirality from two nanotubes.

References

1. Melchor, S.; Khokhriakov, N.V.; Savinskii, S.S. (1999). "Geometry of Multi-Tube Carbon Clusters and Electronic Transmission in Nanotube Contacts". Molecular Engineering. 8 (4): 315–344. doi:10.1023/A:1008342925348.
2. Melchor, S.; Martin-Martinez, F.J.; Dobado, J.A. (2011). "CoNTub v2.0 - Algorithms for Constructing C3-Symmetric Models of Three-Nanotube Junctions". J. Chem. Inf. Model. 51: 1492–1505. doi:10.1021/ci200056p.
Strip packing problem

The strip packing problem is a 2-dimensional geometric minimization problem. Given a set of axis-aligned rectangles and a strip of bounded width and infinite height, determine an overlapping-free packing of the rectangles into the strip, minimizing its height. This problem is a cutting and packing problem and is classified as an Open Dimension Problem according to Wäscher et al.[1]

This problem arises in the area of scheduling, where it models jobs that require a contiguous portion of the memory over a given time period. Another example is the area of industrial manufacturing, where rectangular pieces need to be cut out of a sheet of material (e.g., cloth or paper) that has a fixed width but infinite length, and one wants to minimize the wasted material.

This problem was first studied in 1980.[2] It is strongly NP-hard and there exists no polynomial time approximation algorithm with a ratio smaller than $3/2$ unless $P=NP$. However, the best approximation ratio achieved so far (by a polynomial time algorithm by Harren et al.[3]) is $(5/3+\varepsilon )$, leaving open the question of whether there is an algorithm with approximation ratio $3/2$.

Definition

An instance $I=({\mathcal {I}},W)$ of the strip packing problem consists of a strip with width $W=1$ and infinite height, as well as a set ${\mathcal {I}}$ of rectangular items. Each item $i\in {\mathcal {I}}$ has a width $w_{i}\in (0,1]\cap \mathbb {Q} $ and a height $h_{i}\in (0,1]\cap \mathbb {Q} $. A packing of the items is a mapping that maps each lower-left corner of an item $i\in {\mathcal {I}}$ to a position $(x_{i},y_{i})\in ([0,1-w_{i}]\cap \mathbb {Q} )\times \mathbb {Q} _{\geq 0}$ inside the strip. An inner point of a placed item $i\in {\mathcal {I}}$ is a point from the set $\mathrm {inn} (i)=\{(x,y)\in \mathbb {Q} \times \mathbb {Q} |x_{i}<x<x_{i}+w_{i},y_{i}<y<y_{i}+h_{i}\}$. Two (placed) items overlap if they share an inner point. The height of the packing is defined as $\max\{y_{i}+h_{i}|i\in {\mathcal {I}}\}$. The objective is to find an overlapping-free packing of the items inside the strip while minimizing the height of the packing.

This definition is used for all polynomial time algorithms. For pseudo-polynomial time and FPT-algorithms, the definition is slightly changed for the simplification of notation. In this case, all appearing sizes are integral. In particular, the width of the strip is given by an arbitrary integer number larger than 1. Note that these two definitions are equivalent.

Variants

There are several variants of the strip packing problem that have been studied. These variants concern the geometry of the objects, the dimension of the problem, whether it is allowed to rotate the items, and the structure of the packing.[4]

Geometry of the items: In the standard variant of this problem, the set of given items consists of rectangles. In an often considered subcase, all the items have to be squares. This variant was already considered in the first paper about strip packing.[2] Additionally, variants have been studied where the shapes are circular or even irregular. In the latter case, we speak of irregular strip packing.

Dimension: When not mentioned differently, the strip packing problem is a 2-dimensional problem. However, it has also been studied in three or even more dimensions. In this case, the objects are hyperrectangles, and the strip is open-ended in one dimension and bounded in the residual ones.

Rotation: In the classical strip packing problem, it is not allowed to rotate the items.
However, variants have been studied where rotating by 90 degrees or even an arbitrary angle is allowed.

Structure of the packing: In the general strip packing problem, the structure of the packing is irrelevant. However, there are applications that have explicit requirements on the structure of the packing. One of these requirements is to be able to cut the items from the strip by horizontal or vertical edge-to-edge cuts. Packings that allow this kind of cutting are called guillotine packings.

Hardness

The strip packing problem contains the bin packing problem as a special case when all the items have the same height 1. For this reason, it is strongly NP-hard and there can be no polynomial time approximation algorithm which has an approximation ratio smaller than $3/2$ unless $P=NP$. Furthermore, unless $P=NP$, there cannot be a pseudo-polynomial time algorithm that has an approximation ratio smaller than $5/4$,[5] which can be proven by a reduction from the strongly NP-complete 3-partition problem. Note that both lower bounds $3/2$ and $5/4$ also hold for the case that a rotation of the items by 90 degrees is allowed. Additionally, it was proven by Ashok et al.[6] that strip packing is W[1]-hard when parameterized by the height of the optimal packing.

Properties of optimal solutions

There are two trivial lower bounds on optimal solutions. The first is the height of the largest item. Define $h_{\max }(I):=\max\{h(i)|i\in {\mathcal {I}}\}$. Then it holds that $OPT(I)\geq h_{\max }(I)$. Another lower bound is given by the total area of the items. Define $\mathrm {AREA} ({\mathcal {I}}):=\sum _{i\in {\mathcal {I}}}h(i)w(i)$; then it holds that $OPT(I)\geq \mathrm {AREA} ({\mathcal {I}})/W$.

The following two lower bounds take notice of the fact that certain items cannot be placed next to each other in the strip, and can be computed in ${\mathcal {O}}(n\log(n))$.[7]

For the first lower bound, assume that the items are sorted by non-increasing height. Define $k:=\max\{i:\sum _{j=1}^{i}w(j)\leq W\}$. For each $l>k$, define $i(l)\leq k$ as the first index such that $w(l)+\sum _{j=1}^{i(l)}w(j)>W$. Then it holds that $OPT(I)\geq \max\{h(l)+h(i(l))|l>k\wedge w(l)+\sum _{j=1}^{i(l)}w(j)>W\}$.[7]

For the second lower bound, partition the set of items into three sets. Let $\alpha \in [1,W/2]\cap \mathbb {N} $ and define ${\mathcal {I}}_{1}(\alpha ):=\{i\in {\mathcal {I}}|w(i)>W-\alpha \}$, ${\mathcal {I}}_{2}(\alpha ):=\{i\in {\mathcal {I}}|W-\alpha \geq w(i)>W/2\}$, and ${\mathcal {I}}_{3}(\alpha ):=\{i\in {\mathcal {I}}|W/2\geq w(i)>\alpha \}$. Then it holds that

$OPT(I)\geq \max _{\alpha \in [1,W/2]\cap \mathbb {N} }{\Bigg \{}\sum _{i\in {\mathcal {I}}_{1}(\alpha )\cup {\mathcal {I}}_{2}(\alpha )}h(i)+\left({\frac {\sum _{i\in {\mathcal {I}}_{3}(\alpha )}h(i)w(i)-\sum _{i\in {\mathcal {I}}_{2}(\alpha )}(W-w(i))h(i)}{W}}\right)_{+}{\Bigg \}}$,[7]

where $(x)_{+}:=\max\{x,0\}$ for each $x\in \mathbb {R} $.

On the other hand, Steinberg[8] has shown that the height of an optimal solution can be upper bounded by

$OPT(I)\leq 2\max\{h_{\max }(I),\mathrm {AREA} ({\mathcal {I}})/W\}.$

More precisely, he showed that given a $W\geq w_{\max }({\mathcal {I}})$ and a $H\geq h_{\max }(I)$, the items ${\mathcal {I}}$ can be placed inside a box with width $W$ and height $H$ if

$WH\geq 2\mathrm {AREA} ({\mathcal {I}})+(2w_{\max }({\mathcal {I}})-W)_{+}(2h_{\max }(I)-H)_{+}$,

where $(x)_{+}:=\max\{x,0\}$.
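These bounds are straightforward to compute. The following Python sketch (the function names are ours) evaluates the two trivial lower bounds and Steinberg's sufficient condition for fitting the items into a $W\times H$ box:

    def trivial_lower_bound(items, W):
        # items: list of (width, height) pairs
        h_max = max(h for (w, h) in items)
        area = sum(w * h for (w, h) in items)
        return max(h_max, area / W)

    def steinberg_fits(items, W, H):
        # Steinberg: the items fit into a W x H box if
        # W*H >= 2*AREA + (2*w_max - W)_+ * (2*h_max - H)_+
        w_max = max(w for (w, h) in items)
        h_max = max(h for (w, h) in items)
        if w_max > W or h_max > H:
            return False
        area = sum(w * h for (w, h) in items)
        plus = lambda x: max(x, 0.0)
        return W * H >= 2 * area + plus(2 * w_max - W) * plus(2 * h_max - H)

    items = [(0.5, 0.4), (0.3, 0.7), (0.6, 0.2)]
    print(trivial_lower_bound(items, 1.0))   # max(0.7, 0.53) = 0.7
    print(steinberg_fits(items, 1.0, 1.4))   # True, since 1.4 >= 2*0.53 + 0

Note the gap between the two: for this instance the optimum lies somewhere between 0.7 and 1.4, which is exactly the factor-2 relation stated above.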
Polynomial time approximation algorithms

Since this problem is NP-hard, approximation algorithms have been studied for this problem. Most of the heuristic approaches have an approximation ratio between $2$ and $3$. Finding an algorithm with a ratio below $2$ seems hard, and the complexity of the corresponding algorithms increases with respect to both their running time and their descriptions. The smallest approximation ratio achieved so far is $(5/3+\varepsilon )$.

Overview of polynomial time approximations:

• 1980, Bottom-Up Left-Justified (BL): $3OPT(I)$ (Baker et al.[2])
• 1980, Next-Fit Decreasing-Height (NFDH): $2OPT(I)+h_{\max }(I)\leq 3OPT(I)$ (Coffman et al.[9])
• 1980, First-Fit Decreasing-Height (FFDH): $1.7OPT(I)+h_{\max }(I)\leq 2.7OPT(I)$ (Coffman et al.[9])
• 1980, Split-Fit (SF): $1.5OPT(I)+2h_{\max }(I)$ (Coffman et al.[9])
• 1980: $2OPT(I)+h_{\max }(I)/2\leq 2.5OPT(I)$ (Sleator[10])
• 1981, Split Algorithm (SP): $3OPT(I)$ (Golan[11])
• 1981, Mixed Algorithm: $(4/3)OPT(I)+7{\frac {1}{18}}h_{\max }(I)$ (Golan[11])
• 1981, Up-Down (UD): $(5/4)OPT(I)+6{\frac {7}{8}}h_{\max }(I)$ (Baker et al.[12])
• 1994, Reverse-Fit: $2OPT(I)$ (Schiermeyer[13])
• 1997: $2OPT(I)$ (Steinberg[8])
• 2000: $(1+\varepsilon )OPT(I)+{\mathcal {O}}(1/\varepsilon ^{2})h_{\max }(I)$ (Kenyon, Rémila[14])
• 2009: $1.9396OPT(I)$ (Harren, van Stee[15])
• 2009: $(1+\varepsilon )OPT(I)+h_{\max }(I)$ (Jansen, Solis-Oba[16])
• 2011: $(1+\varepsilon )OPT(I)+{\mathcal {O}}(\log(1/\varepsilon )/\varepsilon )h_{\max }(I)$ (Bougeret et al.[17])
• 2012: $(1+\varepsilon )OPT(I)+{\mathcal {O}}(\log(1/\varepsilon )/\varepsilon )h_{\max }(I)$ (Sviridenko[18])
• 2014: $(5/3+\varepsilon )OPT(I)$ (Harren et al.[3])

Bottom-up left-justified (BL)

This algorithm was first described by Baker et al.[2] It works as follows: Let $L$ be a sequence of rectangular items. The algorithm iterates the sequence in the given order. For each considered item $r\in L$, it searches for the bottom-most position to place it and then shifts it as far to the left as possible. Hence, it places $r$ at the bottom-most, left-most possible coordinate $(x,y)$ in the strip.

This algorithm has the following properties:

• The approximation ratio of this algorithm cannot be bounded by a constant. More precisely, for each $M>0$ there exists a list $L$ of rectangular items ordered by increasing width such that $BL(L)/OPT(L)>M$, where $BL(L)$ is the height of the packing created by the BL algorithm and $OPT(L)$ is the height of the optimal solution for $L$.[2]
• If the items are ordered by decreasing widths, then $BL(L)/OPT(L)\leq 3$.[2]
• If the items are all squares and are ordered by decreasing widths, then $BL(L)/OPT(L)\leq 2$.[2]
• For any $\delta >0$, there exists a list $L$ of rectangles ordered by decreasing widths such that $BL(L)/OPT(L)>3-\delta $.[2]
• For any $\delta >0$, there exists a list $L$ of squares ordered by decreasing widths such that $BL(L)/OPT(L)>2-\delta $.[2]
• For each $\varepsilon \in (0,1]$, there exists an instance containing only squares where each order of the squares $L$ has a ratio of $BL(L)/OPT(L)>{\frac {12}{11+\varepsilon }}$, i.e., there exist instances where BL does not find the optimum even when iterating all possible orders of the items.[2]
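The description above leaves the search for the bottom-most, left-most position abstract. A common discretisation, sketched below in Python (a sketch under our own names, not Baker et al.'s original formulation), restricts the candidate x-coordinates to 0 and the right edges of already placed items; for each candidate x, the lowest feasible y is the highest top edge among placed items overlapping that x-range:

    def bottom_left(items, W):
        placed = []                                    # placed rectangles as (x, y, w, h)
        for (w, h) in items:
            xs = sorted({0.0} | {px + pw for (px, _, pw, _) in placed})
            candidates = []
            for x in xs:
                if x + w > W:
                    continue                           # item would stick out on the right
                y = max([py + ph for (px, py, pw, ph) in placed
                         if px < x + w and x < px + pw], default=0.0)
                candidates.append((y, x))
            y, x = min(candidates)                     # bottom-most, then left-most
            placed.append((x, y, w, h))
        height = max((py + ph for (_, py, _, ph) in placed), default=0.0)
        return placed, height

    # sorting by decreasing width gives the 3-approximation guarantee stated above
    items = sorted([(0.5, 0.4), (0.3, 0.7), (0.6, 0.2)], key=lambda wh: -wh[0])
    print(bottom_left(items, 1.0))

The packing produced is always overlap-free, since every item is placed at or above the tops of all items it overlaps horizontally; the discretisation of candidate positions is what keeps the search polynomial.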
Next-fit decreasing-height (NFDH)

This algorithm was first described by Coffman et al.[9] in 1980 and works as follows: Let ${\mathcal {I}}$ be the given set of rectangular items. First, the algorithm sorts the items by order of nonincreasing height. Then, starting at position $(0,0)$, the algorithm places the items next to each other in the strip until the next item would overlap the right border of the strip. At this point, the algorithm defines a new level at the top of the tallest item in the current level and places the items next to each other in this new level.

This algorithm has the following properties:

• The running time can be bounded by ${\mathcal {O}}(|{\mathcal {I}}|\log(|{\mathcal {I}}|))$, and if the items are already sorted even by ${\mathcal {O}}(|{\mathcal {I}}|)$.
• For every set of items ${\mathcal {I}}$, it produces a packing of height $NFDH({\mathcal {I}})\leq 2OPT({\mathcal {I}})+h_{\max }\leq 3OPT({\mathcal {I}})$, where $h_{\max }$ is the largest height of an item in ${\mathcal {I}}$.[9]
• For every $\varepsilon >0$ there exists a set of rectangles ${\mathcal {I}}$ such that $NFDH({\mathcal {I}})>(2-\varepsilon )OPT({\mathcal {I}}).$[9]
• The packing generated is a guillotine packing. This means the items can be obtained through a sequence of horizontal or vertical edge-to-edge cuts.

First-fit decreasing-height (FFDH)

This algorithm, first described by Coffman et al.[9] in 1980, works similarly to the NFDH algorithm. However, when placing the next item, the algorithm scans the levels from bottom to top and places the item in the first level on which it will fit. A new level is only opened if the item does not fit in any previous ones.

This algorithm has the following properties:

• The running time can be bounded by ${\mathcal {O}}(|{\mathcal {I}}|^{2})$, since there are at most $|{\mathcal {I}}|$ levels.
• For every set of items ${\mathcal {I}}$ it produces a packing of height $FFDH({\mathcal {I}})\leq 1.7OPT({\mathcal {I}})+h_{\max }\leq 2.7OPT({\mathcal {I}})$, where $h_{\max }$ is the largest height of an item in ${\mathcal {I}}$.[9]
• Let $m\geq 2$. For any set of items ${\mathcal {I}}$ and strip with width $W$ such that $w(i)\leq W/m$ for each $i\in {\mathcal {I}}$, it holds that $FFDH({\mathcal {I}})\leq \left(1+1/m\right)OPT({\mathcal {I}})+h_{\max }$. Furthermore, for each $\varepsilon >0$, there exists such a set of items ${\mathcal {I}}$ with $FFDH({\mathcal {I}})>\left(1+1/m-\varepsilon \right)OPT({\mathcal {I}})$.[9]
• If all the items in ${\mathcal {I}}$ are squares, it holds that $FFDH({\mathcal {I}})\leq (3/2)OPT({\mathcal {I}})+h_{\max }$. Furthermore, for each $\varepsilon >0$, there exists a set of squares ${\mathcal {I}}$ such that $FFDH({\mathcal {I}})>\left(3/2-\varepsilon \right)OPT({\mathcal {I}})$.[9]
• The packing generated is a guillotine packing. This means the items can be obtained through a sequence of horizontal or vertical edge-to-edge cuts.
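Both shelf heuristics can be implemented by keeping, for every shelf, its base height and the width already used. The following Python sketch (helper names are ours, written for this article) implements FFDH; restricting the scan to the most recently opened shelf gives NFDH:

    def ffdh(items, W):
        shelves = []                                   # each shelf: (base y, shelf height, used width)
        items = sorted(items, key=lambda wh: -wh[1])   # nonincreasing height
        placed, top = [], 0.0
        for (w, h) in items:
            for k, (y, sh, used) in enumerate(shelves):
                if used + w <= W:                      # first shelf where the item fits
                    placed.append((used, y, w, h))     # items on a shelf are no taller
                    shelves[k] = (y, sh, used + w)     # than its first (tallest) item
                    break
            else:                                      # open a new shelf at the current top
                placed.append((0.0, top, w, h))
                shelves.append((top, h, w))
                top += h
        return placed, top                             # top = sum of all shelf heights

    print(ffdh([(0.5, 0.4), (0.3, 0.7), (0.6, 0.2), (0.4, 0.5)], 1.0))

Because the items are sorted by nonincreasing height, every item placed on an existing shelf fits under the shelf's first item, so the packing is valid and, as noted above, guillotine-cuttable.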
The split-fit algorithm (SF)

This algorithm was first described by Coffman et al.[9] For a given set of items ${\mathcal {I}}$ and strip with width $W$, it works as follows:
1. Determine $m\in \mathbb {N} $, the largest integer such that the given rectangles have width $W/m$ or less.
2. Divide ${\mathcal {I}}$ into two sets ${\mathcal {I}}_{wide}$ and ${\mathcal {I}}_{narrow}$, such that ${\mathcal {I}}_{wide}$ contains all the items $i\in {\mathcal {I}}$ with a width $w(i)>W/(m+1)$ while ${\mathcal {I}}_{narrow}$ contains all the items with $w(i)\leq W/(m+1)$.
3. Order ${\mathcal {I}}_{wide}$ and ${\mathcal {I}}_{narrow}$ by nonincreasing height.
4. Pack the items in ${\mathcal {I}}_{wide}$ with the FFDH algorithm.
5. Reorder the levels/shelves constructed by FFDH such that all the shelves with a total width larger than $W(m+1)/(m+2)$ are below the narrower ones.
6. This leaves a rectangular area $R$ of width $W/(m+2)$, next to the narrower levels/shelves, that contains no item.
7. Use the FFDH algorithm to pack the items in ${\mathcal {I}}_{narrow}$, using the area $R$ as well.
This algorithm has the following properties:
• For every set of items ${\mathcal {I}}$ and the corresponding $m$, it holds that $SF({\mathcal {I}})\leq (m+2)/(m+1)OPT({\mathcal {I}})+2h_{\max }$.[9] Note that for $m=1$, it holds that $SF({\mathcal {I}})\leq (3/2)OPT({\mathcal {I}})+2h_{\max }$.
• For each $\varepsilon >0$, there is a set of items ${\mathcal {I}}$ such that $SF({\mathcal {I}})>\left((m+2)/(m+1)-\varepsilon \right)OPT({\mathcal {I}})$.[9]

Sleator's algorithm

For a given set of items ${\mathcal {I}}$ and strip with width $W$, it works as follows:
1. Find all the items with a width larger than $W/2$ and stack them at the bottom of the strip (in any order). Call the total height of these items $h_{0}$. All the other items will be placed above $h_{0}$.
2. Sort all the remaining items in nonincreasing order of height. The items will be placed in this order.
3. Consider the horizontal line at $h_{0}$ as a shelf. The algorithm places the items on this shelf in nonincreasing order of height until no item is left or the next one does not fit.
4. Draw a vertical line at $W/2$, which cuts the strip into two equal halves.
5. Let $h_{l}$ be the highest point covered by any item in the left half and $h_{r}$ the corresponding point in the right half. Draw two horizontal line segments of length $W/2$ at $h_{l}$ and $h_{r}$ across the left and the right half of the strip. These two lines build new shelves on which the algorithm will place the items, as in step 3. Choose the half with the lower shelf and place the items on this shelf until no other item fits. Repeat this step until no item is left.
This algorithm has the following properties:
• The running time can be bounded by ${\mathcal {O}}(|{\mathcal {I}}|\log(|{\mathcal {I}}|))$, and if the items are already sorted even by ${\mathcal {O}}(|{\mathcal {I}}|)$.
• For every set of items ${\mathcal {I}}$, it produces a packing of height $A({\mathcal {I}})\leq 2OPT({\mathcal {I}})+h_{\max }/2\leq 2.5OPT({\mathcal {I}})$, where $h_{\max }$ is the largest height of an item in ${\mathcal {I}}$.[10]
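Sleator's procedure is also easy to prototype. The following sketch is our own reading of the five steps above (the data layout, tie-breaking, and the example are invented assumptions, not Sleator's original implementation):

def sleator(items, W):
    """Sketch of Sleator's strip-packing algorithm for (w, h) items.
    Returns the placements (x, y, w, h) and the packing height."""
    wide = [it for it in items if it[0] > W / 2]
    rest = sorted((it for it in items if it[0] <= W / 2),
                  key=lambda it: -it[1])          # nonincreasing height
    placed, y = [], 0.0
    for (w, h) in wide:                           # step 1: stack wide items
        placed.append((0.0, y, w, h))
        y += h
    h0, i, x = y, 0, 0.0
    while i < len(rest) and x + rest[i][0] <= W:  # step 3: shelf at h0
        w, h = rest[i]
        placed.append((x, h0, w, h))
        x += w
        i += 1

    def top(x0, x1):                              # highest point over [x0, x1]
        ts = [py + ph for (px, py, pw, ph) in placed
              if px < x1 and px + pw > x0]
        return max(ts, default=h0)

    while i < len(rest):                          # step 5: fill the lower half
        hl, hr = top(0, W / 2), top(W / 2, W)
        x0, shelf = (0.0, hl) if hl <= hr else (W / 2, hr)
        x = x0
        while i < len(rest) and x + rest[i][0] <= x0 + W / 2:
            w, h = rest[i]
            placed.append((x, shelf, w, h))
            x += w
            i += 1
    return placed, max((py + ph for (_, py, _, ph) in placed), default=0.0)

# Invented example in a strip of width 4.
print(sleator([(3, 2), (2, 2), (2, 1), (1, 3)], W=4))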
The split algorithm (SP)

This algorithm is an extension of Sleator's approach and was first described by Golan.[11] It places the items in nonincreasing order of width. The intuitive idea is to split the strip into sub-strips while placing some items. Whenever possible, the algorithm places the current item $i$ side by side with an already placed item $j$. In this case, it splits the corresponding sub-strip into two pieces: one containing the first item $j$ and the other containing the current item $i$. If this is not possible, it places $i$ on top of an already placed item and does not split the sub-strip. This algorithm creates a set S of sub-strips. For each sub-strip s ∈ S we know its lower left corner s.xposition and s.yposition, its width s.width, the horizontal lines parallel to the upper and lower border of the item placed last inside this sub-strip s.upper and s.lower, as well as the width of that item s.itemWidth.

function Split Algorithm (SP) is
    input: items I, width of the strip W
    output: a packing of the items
    Sort I in nonincreasing order of widths;
    Define empty list S of sub-strips;
    Define a new sub-strip s with s.xposition = 0, s.yposition = 0, s.width = W, s.lower = 0, s.upper = 0, s.itemWidth = W;
    Add s to S;
    while I not empty do
        i := I.pop();  // removes the widest item from I
        Define new list S_2 containing all the sub-strips s ∈ S with s.width - s.itemWidth ≥ i.width;  // S_2 contains all sub-strips where i fits next to the already placed item
        if S_2 is empty then
            // place the item on top of another one
            Find the sub-strip s in S with smallest s.upper;  // i.e., the least filled sub-strip
            Place i at position (s.xposition, s.upper);
            Update s: s.lower := s.upper; s.upper := s.upper + i.height; s.itemWidth := i.width;
        else
            // place the item next to another one at the same level and split the corresponding sub-strip at this position
            Find s ∈ S_2 with the smallest s.lower;
            Place i at position (s.xposition + s.itemWidth, s.lower);
            Remove s from S;
            Define two new sub-strips s1 and s2 with
                s1.xposition = s.xposition, s1.yposition = s.upper, s1.width = s.itemWidth, s1.lower = s.upper, s1.upper = s.upper, s1.itemWidth = s.itemWidth;
                s2.xposition = s.xposition + s.itemWidth, s2.yposition = s.lower, s2.width = s.width - s.itemWidth, s2.lower = s.lower, s2.upper = s.lower + i.height, s2.itemWidth = i.width;
            S.add(s1, s2);
    return
end function

This algorithm has the following properties:
• The running time can be bounded by ${\mathcal {O}}(|{\mathcal {I}}|^{2})$ since the number of sub-strips is bounded by $|{\mathcal {I}}|$.
• For any set of items ${\mathcal {I}}$ it holds that $SP({\mathcal {I}})\leq 2OPT({\mathcal {I}})+h_{\max }\leq 3OPT({\mathcal {I}})$.[11]
• For any $\varepsilon >0$, there exists a set of items ${\mathcal {I}}$ such that $SP({\mathcal {I}})>(3-\varepsilon )OPT({\mathcal {I}})$.[11]
• For any $\varepsilon >0$ and $C>0$, there exists a set of items ${\mathcal {I}}$ such that $SP({\mathcal {I}})>(2-\varepsilon )OPT({\mathcal {I}})+C$.[11]

Reverse-fit (RF)

This algorithm was first described by Schiermeyer.[13] The description of this algorithm needs some additional notation. For a placed item $i\in {\mathcal {I}}$, its lower left corner is denoted by $(a_{i},c_{i})$ and its upper right corner by $(b_{i},d_{i})$. Given a set of items ${\mathcal {I}}$ and a strip of width $W$, it works as follows:
1. Stack all the rectangles of width greater than $W/2$ on top of each other (in any order) at the bottom of the strip. Denote by $H_{0}$ the height of this stack. All other items will be packed above $H_{0}$.
2. Sort the remaining items in order of nonincreasing height and consider the items in this order in the following steps. Let $h_{\max }$ be the height of the tallest of these remaining items.
3. Place the items one by one, left-aligned, on a shelf defined by $H_{0}$ until no other item fits on this shelf or there is no item left. Call this shelf the first level.
4. Let $h_{1}$ be the height of the tallest unpacked item. Define a new shelf at $H_{0}+h_{\max }+h_{1}$. The algorithm will fill this shelf from right to left, aligning the items to the right, such that the items touch this shelf with their top. Call this shelf the second reverse-level.
5. Place the items into the two shelves by First-Fit, i.e., placing the items in the first level where they fit and in the second one otherwise.
Proceed until there are no items left, or the total width of the items in the second shelf is at least $W/2$.
6. Shift the second reverse-level down until an item from it touches an item from the first level. Define $H_{1}$ as the new vertical position of the shifted shelf. Let $f$ and $s$ be the rightmost pair of touching items with $f$ placed on the first level and $s$ on the second reverse-level. Define $x_{r}:=\max(b_{f},b_{s})$.
7. If $x_{r}<W/2$ then $s$ is the last rectangle placed in the second reverse-level. Shift all the other items from this level further down (all by the same amount) until the first one touches an item from the first level. Again, the algorithm determines the rightmost pair of touching items $f'$ and $s'$. Define $h_{2}$ as the amount by which the shelf was shifted down.
  1. If $h_{2}\leq h(s)$ then shift $s$ to the left until it touches another item or the border of the strip. Define the third level at the top of $s'$.
  2. If $h_{2}>h(s)$ then define the third level at the top of $s'$ and place $s$ left-aligned in this third level, such that it touches an item from the first level on its left.
8. Continue packing the items using the First-Fit heuristic. Each following level (starting at level three) is defined by a horizontal line through the top of the tallest item on the previous level. Note that the first item placed in the next level might not touch the border of the strip with its left side, but an item from the first level or the item $s$.
This algorithm has the following properties:
• The running time can be bounded by ${\mathcal {O}}(|{\mathcal {I}}|^{2})$, since there are at most $|{\mathcal {I}}|$ levels.
• For every set of items ${\mathcal {I}}$, it produces a packing of height $RF({\mathcal {I}})\leq 2OPT({\mathcal {I}})$.[13]

Steinberg's algorithm (ST)

Steinberg's algorithm is recursive. Given a set of rectangular items ${\mathcal {I}}$ and a rectangular target region with width $W$ and height $H$, it proposes four reduction rules that each place some of the items and leave a smaller rectangular region with the same properties as before with regard to the residual items. Consider the following notations: Given a set of items ${\mathcal {I}}$, we denote by $h_{\max }({\mathcal {I}})$ the tallest item height in ${\mathcal {I}}$, by $w_{\max }({\mathcal {I}})$ the largest item width appearing in ${\mathcal {I}}$, and by $\mathrm {AREA} ({\mathcal {I}}):=\sum _{i\in {\mathcal {I}}}w(i)h(i)$ the total area of these items. Steinberg shows that if $h_{\max }({\mathcal {I}})\leq H$, $w_{\max }({\mathcal {I}})\leq W$, and $\mathrm {AREA} ({\mathcal {I}})\leq W\cdot H-(2h_{\max }({\mathcal {I}})-H)_{+}(2w_{\max }({\mathcal {I}})-W)_{+}$, where $(a)_{+}:=\max\{0,a\}$, then all the items can be placed inside the target region of size $W\times H$. Each reduction rule produces a smaller target area and a subset of items that have to be placed. When the condition from above holds before the procedure starts, then the created subproblem has this property as well.

Procedure 1: It can be applied if $w_{\max }({\mathcal {I}})\geq W/2$.
1. Find all the items $i\in {\mathcal {I}}$ with width $w(i)\geq W/2$ and remove them from ${\mathcal {I}}$.
2. Sort them by nonincreasing width and place them left-aligned at the bottom of the target region. Let $h_{0}$ be their total height.
3. Find all the items $i\in {\mathcal {I}}$ with height $h(i)>H-h_{0}$. Remove them from ${\mathcal {I}}$ and place them in a new set ${\mathcal {I}}_{H}$.
4. If ${\mathcal {I}}_{H}$ is empty, define the new target region as the area above $h_{0}$, i.e., it has height $H-h_{0}$ and width $W$. Solve the problem consisting of this new target region and the reduced set of items with one of the procedures.
5. If ${\mathcal {I}}_{H}$ is not empty, sort it by nonincreasing height and place the items right-aligned one by one in the upper right corner of the target area. Let $w_{0}$ be the total width of these items. Define a new target area with width $W-w_{0}$ and height $H-h_{0}$ in the upper left corner. Solve the problem consisting of this new target region and the reduced set of items with one of the procedures.

Procedure 2: It can be applied if the following conditions hold: $w_{\max }({\mathcal {I}})\leq W/2$, $h_{\max }({\mathcal {I}})\leq H/2$, and there exist two different items $i,i'\in {\mathcal {I}}$ with $w(i)\geq W/4$, $w(i')\geq W/4$, $h(i)\geq H/4$, $h(i')\geq H/4$ and $2(\mathrm {AREA} ({\mathcal {I}})-w(i)h(i)-w(i')h(i'))\leq (W-\max\{w(i),w(i')\})H$.
1. Find $i$ and $i'$ and remove them from ${\mathcal {I}}$.
2. Place the wider one in the lower-left corner of the target area and the narrower one left-aligned on top of the first.
3. Define a new target area on the right of these two items, such that it has width $W-\max\{w(i),w(i')\}$ and height $H$.
4. Place the residual items in ${\mathcal {I}}$ into the new target area using one of the procedures.

Procedure 3: It can be applied if the following conditions hold: $w_{\max }({\mathcal {I}})\leq W/2$, $h_{\max }({\mathcal {I}})\leq H/2$, $|{\mathcal {I}}|>1$, and, when sorting the items by decreasing width, there exists an index $m$ such that, when defining ${\mathcal {I'}}$ as the first $m$ items, it holds that $\mathrm {AREA} ({\mathcal {I}})-WH/4\leq \mathrm {AREA} ({\mathcal {I'}})\leq 3WH/8$ as well as $w(i_{m+1})\leq W/4$.
1. Set $W_{1}:=\max\{W/2,2\mathrm {AREA} ({\mathcal {I'}})/H\}$.
2. Define two new rectangular target areas, one at the lower-left corner of the original one with height $H$ and width $W_{1}$, and the other right of it with height $H$ and width $W-W_{1}$.
3. Use one of the procedures to place the items in ${\mathcal {I'}}$ into the first new target area and the items in ${\mathcal {I}}\setminus {\mathcal {I'}}$ into the second one.
Note that procedures 1 to 3 have a symmetric version obtained by swapping the height and the width of the items and the target region.

Procedure 4: It can be applied if the following conditions hold: $w_{\max }({\mathcal {I}})\leq W/2$, $h_{\max }({\mathcal {I}})\leq H/2$, and there exists an item $i\in {\mathcal {I}}$ such that $w(i)h(i)\geq \mathrm {AREA} ({\mathcal {I}})-WH/4$.
1. Place the item $i$ in the lower-left corner of the target area and remove it from ${\mathcal {I}}$.
2. Define a new target area right of this item such that it has width $W-w(i)$ and height $H$, and place the residual items inside this area using one of the procedures.

This algorithm has the following properties:
• The running time can be bounded by ${\mathcal {O}}(|{\mathcal {I}}|\log(|{\mathcal {I}}|)^{2}/\log(\log(|{\mathcal {I}}|)))$.[8]
• For every set of items ${\mathcal {I}}$, it produces a packing of height $ST({\mathcal {I}})\leq 2OPT({\mathcal {I}})$.[8]
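Steinberg's sufficient condition can be checked directly. The snippet below is a minimal sketch of just that check (the function name and the example values are our own; the recursive placement procedures themselves are not implemented here):

def steinberg_fits(items, W, H):
    """Steinberg's sufficient condition: every item fits individually and
    AREA(I) <= W*H - (2*hmax - H)_+ * (2*wmax - W)_+ ."""
    wmax = max(w for (w, h) in items)
    hmax = max(h for (w, h) in items)
    area = sum(w * h for (w, h) in items)
    pos = lambda a: max(0, a)              # (a)_+ := max{0, a}
    return (wmax <= W and hmax <= H and
            area <= W * H - pos(2 * hmax - H) * pos(2 * wmax - W))

# Invented example: total area 11 <= 16, and wmax <= W/2 kills one factor.
print(steinberg_fits([(2, 2), (2, 2), (1, 3)], W=4, H=4))   # True

If the check succeeds, the four procedures above recursively produce an actual packing. Roughly, taking $H=2OPT$ makes the condition hold (then $2h_{\max }-H\leq 0$ and $\mathrm {AREA} ({\mathcal {I}})\leq W\cdot OPT\leq W\cdot H$), which is the source of the $2OPT$ guarantee.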
Pseudo-polynomial time approximation algorithms

To improve upon the lower bound of $3/2$ for polynomial-time algorithms, pseudo-polynomial time algorithms for the strip packing problem have been considered. When considering this type of algorithm, all the sizes of the items and the strip are given as integers. Furthermore, the width of the strip $W$ is allowed to appear polynomially in the running time. Note that this is no longer considered a polynomial running time since, in the given instance, the width of the strip needs an encoding size of only $\log(W)$. The pseudo-polynomial time algorithms that have been developed mostly use the same approach. It is shown that each optimal solution can be simplified and transformed into one that has one of a constant number of structures. The algorithm then iterates over all these structures and places the items inside using linear and dynamic programming. The best ratio accomplished so far is $(5/4+\varepsilon )OPT(I)$,[19] while there cannot be a pseudo-polynomial time algorithm with a ratio better than $5/4$ unless $P=NP$.[5]

Overview of pseudo-polynomial time approximations
• 2010: $(3/2+\varepsilon )$, Jansen, Thöle[20]
• 2016: $(7/5+\varepsilon )$, Nadiradze, Wiese[21]
• 2016: $(4/3+\varepsilon )$, Gálvez, Grandoni, Ingala, Khan[22] (also for 90 degree rotations)
• 2017: $(4/3+\varepsilon )$, Jansen, Rau[23]
• 2019: $(5/4+\varepsilon )$, Jansen, Rau[19] (also for 90 degree rotations and contiguous moldable jobs)

Online algorithms

In the online variant of strip packing, the items arrive over time. When an item arrives, it has to be placed immediately, before the next item is known. There are two types of online algorithms that have been considered. In the first variant, it is not allowed to alter the packing once an item is placed. In the second, items may be repacked when another item arrives. This variant is called the migration model. The quality of an online algorithm is measured by the (absolute) competitive ratio $\mathrm {sup} _{I}A(I)/OPT(I)$, where $A(I)$ corresponds to the solution generated by the online algorithm and $OPT(I)$ corresponds to the size of the optimal solution. In addition to the absolute competitive ratio, the asymptotic competitive ratio of online algorithms has been studied. For instances $I$ with $h_{\max }(I)\leq 1$ it is defined as $\limsup _{OPT(I)\rightarrow \infty }A(I)/OPT(I)$. Note that all the instances can be scaled such that $h_{\max }(I)\leq 1$.

Overview of online algorithms without migration
• 1983: competitive ratio 6.99, asymptotic competitive ratio $\approx 1.7$, Baker and Schwarz[24]
• 1997: asymptotic competitive ratio $1.69+\varepsilon $, Csirik and Woeginger[25]
• 2007: competitive ratio 6.6623, Hurink and Paulus[26]
• 2009: competitive ratio 6.6623, Ye, Han, and Zhang[27]
• 2007: asymptotic competitive ratio $1.58889$, Han et al.[28] + Seiden[29]
The framework of Han et al.[28] is applicable in the online setting if the online bin packing algorithm belongs to the class Super Harmonic. Thus, Seiden's online bin packing algorithm Harmonic++[29] implies an algorithm for online strip packing with asymptotic ratio 1.58889.

Overview of lower bounds for online algorithms without migration
• 1982: competitive ratio at least $2$, Brown, Baker, and Katseff[30]
• 2006: competitive ratio at least 2.25, Johannes[31] (also holds for the parallel task scheduling problem)
• 2007: competitive ratio at least 2.43, Hurink and Paulus[32] (also holds for the parallel task scheduling problem)
• 2009: competitive ratio at least 2.457, Kern and Paulus[33]
• 2012: asymptotic competitive ratio at least $1.5404$, Balogh and Békési[34] (lower bound due to the underlying bin packing problem)
• 2016: competitive ratio at least 2.618, Yu, Mao, and Xiao[35]

References

1. Wäscher, Gerhard; Haußner, Heike; Schumann, Holger (16 December 2007). "An improved typology of cutting and packing problems". European Journal of Operational Research.
183 (3): 1109–1130. doi:10.1016/j.ejor.2005.12.047. ISSN 0377-2217.
2. Baker, Brenda S.; Coffman Jr., Edward G.; Rivest, Ronald L. (1980). "Orthogonal Packings in Two Dimensions". SIAM J. Comput. 9 (4): 846–855. doi:10.1137/0209064.
3. Harren, Rolf; Jansen, Klaus; Prädel, Lars; van Stee, Rob (February 2014). "A (5/3 + epsilon)-approximation for strip packing". Computational Geometry. 47 (2): 248–267. doi:10.1016/j.comgeo.2013.08.008.
4. Neuenfeldt Junior, Alvaro Luiz. "The Two-Dimensional Rectangular Strip Packing Problem" (PDF). 10820228.
5. Henning, Sören; Jansen, Klaus; Rau, Malin; Schmarje, Lars (2019). "Complexity and Inapproximability Results for Parallel Task Scheduling and Strip Packing". Theory of Computing Systems. 64: 120–140. arXiv:1705.04587. doi:10.1007/s00224-019-09910-6. S2CID 67168004.
6. Ashok, Pradeesha; Kolay, Sudeshna; Meesum, S.M.; Saurabh, Saket (January 2017). "Parameterized complexity of Strip Packing and Minimum Volume Packing". Theoretical Computer Science. 661: 56–64. doi:10.1016/j.tcs.2016.11.034.
7. Martello, Silvano; Monaci, Michele; Vigo, Daniele (1 August 2003). "An Exact Approach to the Strip-Packing Problem". INFORMS Journal on Computing. 15 (3): 310–319. doi:10.1287/ijoc.15.3.310.16082. ISSN 1091-9856.
8. Steinberg, A. (March 1997). "A Strip-Packing Algorithm with Absolute Performance Bound 2". SIAM Journal on Computing. 26 (2): 401–409. doi:10.1137/S0097539793255801.
9. Coffman Jr., Edward G.; Garey, M. R.; Johnson, David S.; Tarjan, Robert Endre (1980). "Performance Bounds for Level-Oriented Two-Dimensional Packing Algorithms". SIAM J. Comput. 9 (4): 808–826. doi:10.1137/0209062.
10. Sleator, Daniel Dominic (1980). "A 2.5 Times Optimal Algorithm for Packing in Two Dimensions". Inf. Process. Lett. 10: 37–40. doi:10.1016/0020-0190(80)90121-0.
11. Golan, Igal (August 1981). "Performance Bounds for Orthogonal Oriented Two-Dimensional Packing Algorithms". SIAM Journal on Computing. 10 (3): 571–582. doi:10.1137/0210042.
12. Baker, Brenda S; Brown, Donna J; Katseff, Howard P (December 1981). "A 5/4 algorithm for two-dimensional packing". Journal of Algorithms. 2 (4): 348–368. doi:10.1016/0196-6774(81)90034-1.
13. Schiermeyer, Ingo (1994). "Reverse-Fit: A 2-optimal algorithm for packing rectangles". Algorithms — ESA '94. Lecture Notes in Computer Science. Vol. 855. Springer Berlin Heidelberg. pp. 290–299. doi:10.1007/bfb0049416. ISBN 978-3-540-58434-6.
14. Kenyon, Claire; Rémila, Eric (November 2000). "A Near-Optimal Solution to a Two-Dimensional Cutting Stock Problem". Mathematics of Operations Research. 25 (4): 645–656. doi:10.1287/moor.25.4.645.12118. S2CID 5361969.
15. Harren, Rolf; van Stee, Rob (2009). "Improved Absolute Approximation Ratios for Two-Dimensional Packing Problems". Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 12th International Workshop, APPROX 2009, and 13th International Workshop, RANDOM 2009, Berkeley, CA, USA, August 21–23, 2009. Proceedings. Lecture Notes in Computer Science. 5687: 177–189. Bibcode:2009LNCS.5687..177H. doi:10.1007/978-3-642-03685-9_14. ISBN 978-3-642-03684-2.
16. Jansen, Klaus; Solis-Oba, Roberto (August 2009). "Rectangle packing with one-dimensional resource augmentation". Discrete Optimization. 6 (3): 310–323. doi:10.1016/j.disopt.2009.04.001.
17. Bougeret, Marin; Dutot, Pierre-Francois; Jansen, Klaus; Robenek, Christina; Trystram, Denis (5 April 2012). "Approximation Algorithms for Multiple Strip Packing and Scheduling Parallel Jobs in Platforms".
Discrete Mathematics, Algorithms and Applications. 03 (4): 553–586. doi:10.1142/S1793830911001413.
18. Sviridenko, Maxim (January 2012). "A note on the Kenyon–Remila strip-packing algorithm". Information Processing Letters. 112 (1–2): 10–12. doi:10.1016/j.ipl.2011.10.003.
19. Jansen, Klaus; Rau, Malin (2019). "Closing the Gap for Pseudo-Polynomial Strip Packing". 27th Annual European Symposium on Algorithms (ESA 2019). Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. 144: 62:1–62:14. doi:10.4230/LIPIcs.ESA.2019.62. S2CID 24303167.
20. Jansen, Klaus; Thöle, Ralf (January 2010). "Approximation Algorithms for Scheduling Parallel Jobs". SIAM Journal on Computing. 39 (8): 3571–3615. doi:10.1137/080736491.
21. Nadiradze, Giorgi; Wiese, Andreas (21 December 2015). "On approximating strip packing with a better ratio than 3/2". Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics: 1491–1510. doi:10.1137/1.9781611974331.ch102. ISBN 978-1-61197-433-1.
22. Gálvez, Waldo; Grandoni, Fabrizio; Ingala, Salvatore; Khan, Arindam (2016). "Improved Pseudo-Polynomial-Time Approximation for Strip Packing". 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. 65: 9:1–9:14. doi:10.4230/LIPIcs.FSTTCS.2016.9. S2CID 3205478.
23. Jansen, Klaus; Rau, Malin (29–31 March 2017). "Improved Approximation for Two Dimensional Strip Packing with Polynomial Bounded Width". WALCOM: Algorithms and Computation, 11th International Conference and Workshops, WALCOM 2017, Hsinchu, Taiwan. Lecture Notes in Computer Science. 10167: 409–420. arXiv:1610.04430. doi:10.1007/978-3-319-53925-6_32. ISBN 978-3-319-53924-9. S2CID 15768136.
24. Baker, Brenda S.; Schwarz, Jerald S. (1 August 1983). "Shelf Algorithms for Two-Dimensional Packing Problems". SIAM Journal on Computing. 12 (3): 508–525. doi:10.1137/0212033. ISSN 0097-5397.
25. Csirik, János; Woeginger, Gerhard J. (28 August 1997). "Shelf algorithms for on-line strip packing". Information Processing Letters. 63 (4): 171–175. doi:10.1016/S0020-0190(97)00120-8. ISSN 0020-0190.
26. Hurink, Johann L.; Paulus, Jacob Jan (2007). "Online Algorithm for Parallel Job Scheduling and Strip Packing". WAOA 2007 - Approximation and Online Algorithms. Lecture Notes in Computer Science. Springer Berlin Heidelberg. 4927: 67–74. doi:10.1007/978-3-540-77918-6_6. ISBN 978-3-540-77917-9.
27. Ye, Deshi; Han, Xin; Zhang, Guochuan (1 May 2009). "A note on online strip packing". Journal of Combinatorial Optimization. 17 (4): 417–423. doi:10.1007/s10878-007-9125-x. ISSN 1573-2886. S2CID 37635252.
28. Han, Xin; Iwama, Kazuo; Ye, Deshi; Zhang, Guochuan (2007). "Strip Packing vs. Bin Packing". Algorithmic Aspects in Information and Management. Lecture Notes in Computer Science. Springer Berlin Heidelberg. 4508: 358–367. arXiv:cs/0607046. doi:10.1007/978-3-540-72870-2_34. ISBN 978-3-540-72868-9. S2CID 580.
29. Seiden, Steven S. (2001). "On the Online Bin Packing Problem". Automata, Languages and Programming. Lecture Notes in Computer Science. Springer Berlin Heidelberg. 2076: 237–248. doi:10.1007/3-540-48224-5_20. ISBN 978-3-540-42287-7.
30. Brown, Donna J.; Baker, Brenda S.; Katseff, Howard P. (1 November 1982). "Lower bounds for on-line two-dimensional packing algorithms". Acta Informatica. 18 (2): 207–225. doi:10.1007/BF00264439. hdl:2142/74223. ISSN 1432-0525. S2CID 21170278.
31. Johannes, Berit (1 October 2006).
"Scheduling parallel jobs to minimize the makespan" (PDF). Journal of Scheduling. 9 (5): 433–452. doi:10.1007/s10951-006-8497-6. hdl:20.500.11850/36804. ISSN 1099-1425. S2CID 18819458. 32. Hurink, J. L.; Paulus, J. J. (1 January 2008). "Online scheduling of parallel jobs on two machines is 2-competitive". Operations Research Letters. 36 (1): 51–56. doi:10.1016/j.orl.2007.06.001. ISSN 0167-6377. S2CID 15561044. 33. Kern, Walter; Paulus, Jacob Jan (2009). "A note on the lower bound for online strip packing". Operations Research Letters. 34. Balogh, János; Békési, József; Galambos, Gábor (6 July 2012). "New lower bounds for certain classes of bin packing algorithms". Theoretical Computer Science. 440–441: 1–13. doi:10.1016/j.tcs.2012.04.017. ISSN 0304-3975. 35. Yu, Guosong; Mao, Yanling; Xiao, Jiaoliao (1 May 2016). "A new lower bound for online strip packing". European Journal of Operational Research. 250 (3): 754–759. doi:10.1016/j.ejor.2015.10.012. ISSN 0377-2217.
Wikipedia
SYZ conjecture

The SYZ conjecture is an attempt to understand the mirror symmetry conjecture, an issue in theoretical physics and mathematics. The original conjecture was proposed in a paper by Strominger, Yau, and Zaslow, entitled "Mirror Symmetry is T-duality".[1] Along with the homological mirror symmetry conjecture, it is one of the most explored tools applied to understand mirror symmetry in mathematical terms. While homological mirror symmetry is based on homological algebra, the SYZ conjecture is a geometrical realization of mirror symmetry.

Formulation

In string theory, mirror symmetry relates type IIA and type IIB theories. It predicts that the effective field theory of type IIA and type IIB should be the same if the two theories are compactified on mirror pair manifolds. The SYZ conjecture uses this fact to realize mirror symmetry. It starts from considering BPS states of type IIA theories compactified on X, especially 0-branes that have moduli space X. It is known that all of the BPS states of type IIB theories compactified on Y are 3-branes. Therefore, mirror symmetry will map 0-branes of type IIA theories into a subset of 3-branes of type IIB theories. By considering supersymmetric conditions, it has been shown that these 3-branes should be special Lagrangian submanifolds.[2][3] On the other hand, T-duality does the same transformation in this case, thus "mirror symmetry is T-duality".

Mathematical statement

The initial proposal of the SYZ conjecture by Strominger, Yau, and Zaslow was not given as a precise mathematical statement.[1] One part of the mathematical resolution of the SYZ conjecture is to, in some sense, correctly formulate the statement of the conjecture itself.
There is no agreed-upon precise statement of the conjecture within the mathematical literature, but there is a general statement that is expected to be close to the correct formulation of the conjecture, which is presented here.[4][5] This statement emphasizes the topological picture of mirror symmetry, but does not precisely characterise the relationship between the complex and symplectic structures of the mirror pairs, or make reference to the associated Riemannian metrics involved.

SYZ Conjecture: Every 6-dimensional Calabi–Yau manifold $X$ has a mirror 6-dimensional Calabi–Yau manifold ${\hat {X}}$ such that there are continuous surjections $f:X\to B$, ${\hat {f}}:{\hat {X}}\to B$ to a compact topological manifold $B$ of dimension 3, such that
1. There exists a dense open subset $B_{\text{reg}}\subset B$ on which the maps $f,{\hat {f}}$ are fibrations by nonsingular special Lagrangian 3-tori. Furthermore, for every point $b\in B_{\text{reg}}$, the torus fibres $f^{-1}(b)$ and ${\hat {f}}^{-1}(b)$ should be dual to each other in some sense, analogous to the duality of Abelian varieties.
2. For each $b\in B\backslash B_{\text{reg}}$, the fibres $f^{-1}(b)$ and ${\hat {f}}^{-1}(b)$ should be singular 3-dimensional special Lagrangian submanifolds of $X$ and ${\hat {X}}$ respectively.

The situation in which $B_{\text{reg}}=B$, so that there is no singular locus, is called the semi-flat limit of the SYZ conjecture, and is often used as a model situation to describe torus fibrations. The SYZ conjecture can be shown to hold in some simple cases of semi-flat limits, for example given by Abelian varieties and K3 surfaces which are fibred by elliptic curves.

It is expected that the correct formulation of the SYZ conjecture will differ somewhat from the statement above. For example, the possible behaviour of the singular set $B\backslash B_{\text{reg}}$ is not well understood, and this set could be quite large in comparison to $B$. Mirror symmetry is also often phrased in terms of degenerating families of Calabi–Yau manifolds instead of for a single Calabi–Yau manifold, and one might expect the SYZ conjecture to be reformulated more precisely in this language.[4]

Relation to homological mirror symmetry conjecture

The SYZ mirror symmetry conjecture is one possible refinement of the original mirror symmetry conjecture relating Hodge numbers of mirror Calabi–Yau manifolds. The other is Kontsevich's homological mirror symmetry conjecture (HMS conjecture). These two conjectures encode the predictions of mirror symmetry in different ways: homological mirror symmetry in an algebraic way, and the SYZ conjecture in a geometric way.[6] There should be a relationship between these three interpretations of mirror symmetry, but it is not yet known whether they should be equivalent or whether one proposal is stronger than another. Progress has been made toward showing, under certain assumptions, that homological mirror symmetry implies Hodge-theoretic mirror symmetry.[7] Nevertheless, in simple settings there are clear ways of relating the SYZ and HMS conjectures. The key feature of HMS is that the conjecture relates objects (either submanifolds or sheaves) on mirror geometric spaces, so the required input to try to understand or prove the HMS conjecture includes a mirror pair of geometric spaces. The SYZ conjecture predicts how these mirror pairs should arise, and so whenever an SYZ mirror pair is found, it is a good candidate on which to try and prove the HMS conjecture.
To relate the SYZ and HMS conjectures, it is convenient to work in the semi-flat limit. The important geometric feature of a pair of Lagrangian torus fibrations $X,{\hat {X}}\to B$ which encodes mirror symmetry is the duality of the torus fibres of the fibrations. Given a Lagrangian torus $T\subset X$, the dual torus is given by the Jacobian variety of $T$, denoted ${\hat {T}}=\mathrm {Jac} (T)$. This is again a torus of the same dimension, and the duality is encoded in the fact that $\mathrm {Jac} (\mathrm {Jac} (T))=T$, so $T$ and ${\hat {T}}$ are indeed dual under this construction. The Jacobian variety ${\hat {T}}$ has the important interpretation as the moduli space of line bundles on $T$. This duality, and the interpretation of the dual torus as a moduli space of sheaves on the original torus, is what allows one to interchange the data of submanifolds and subsheaves. There are two simple examples of this phenomenon:
• If $p\in X$ is a point which lies inside some fibre $p\in T\subset X$ of the special Lagrangian torus fibration, then since $T=\mathrm {Jac} ({\hat {T}})$, the point $p$ corresponds to a line bundle supported on ${\hat {T}}\subset {\hat {X}}$. If one chooses a Lagrangian section $s:B\to X$ such that $s(B)=L$ is a Lagrangian submanifold of $X$, then precisely because $s$ chooses one point in each torus fibre of the SYZ fibration, this Lagrangian section is mirror dual to a choice of line bundle structure supported on each torus fibre of the mirror manifold ${\hat {X}}$, and consequently to a line bundle on the total space of ${\hat {X}}$, the simplest example of a coherent sheaf appearing in the derived category of the mirror manifold. If the mirror torus fibrations are not in the semi-flat limit, then special care must be taken when crossing over the singular set of the base $B$.
• Another example of a Lagrangian submanifold is the torus fibre itself: if the entire torus is taken as the Lagrangian $T\subset X$, with the added data of a flat unitary line bundle over it, as is often necessary in homological mirror symmetry, then in the dual torus ${\hat {T}}\subset {\hat {X}}$ this corresponds to a single point which represents that line bundle over the torus. If one takes the skyscraper sheaf supported on that point in the dual torus, then we see that torus fibres of the SYZ fibration get sent to skyscraper sheaves supported on points in the mirror torus fibre.
These two examples produce the most extreme kinds of coherent sheaves: locally free sheaves (of rank 1) and torsion sheaves supported on points. By more careful construction, one can build up more complicated examples of coherent sheaves, analogous to building a coherent sheaf using the torsion filtration. As a simple example, a Lagrangian multisection (a union of k Lagrangian sections) should be mirror dual to a rank k vector bundle on the mirror manifold, but one must take care to account for instanton corrections by counting holomorphic discs which are bounded by the multisection, in the sense of Gromov–Witten theory. In this way enumerative geometry becomes important for understanding how mirror symmetry interchanges dual objects.
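In the semi-flat limit the fibrewise duality above can be written out explicitly. The following display is a standard linear-algebra illustration in our own notation (it is not quoted from the SYZ paper): a flat torus fibre and its dual arise from a lattice and its dual lattice. Here $V$ is an $n$-dimensional real vector space and $\Lambda \subset V$ a full-rank lattice.

$T=V/\Lambda ,\qquad {\hat {T}}=V^{*}/\Lambda ^{*},\qquad \Lambda ^{*}=\{\xi \in V^{*}\mid \xi (\Lambda )\subseteq \mathbb {Z} \},$
${\hat {T}}\cong \mathrm {Hom} (\pi _{1}(T),U(1))\cong \mathrm {Jac} (T),\qquad \mathrm {Jac} (\mathrm {Jac} (T))=T.$

Points of ${\hat {T}}$ are thus exactly the flat $U(1)$-local systems on $T$, matching the moduli-space description of $\mathrm {Jac} (T)$ used above.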
By combining the geometry of mirror fibrations in the SYZ conjecture with a detailed understanding of enumerative invariants and the structure of the singular set of the base $B$, it is possible to use the geometry of the fibration to build the equivalence of categories from the Lagrangian submanifolds of $X$ to the coherent sheaves of ${\hat {X}}$, the map $\mathrm {Fuk} (X)\to \mathrm {D} ^{b}\mathrm {Coh} ({\hat {X}})$. By repeating this same discussion in reverse, using the duality of the torus fibrations, one can similarly understand coherent sheaves on $X$ in terms of Lagrangian submanifolds of ${\hat {X}}$, and so hope to arrive at a complete understanding of how the HMS conjecture relates to the SYZ conjecture.

References
1. Strominger, Andrew; Yau, Shing-Tung; Zaslow, Eric (1996), "Mirror symmetry is T-duality", Nuclear Physics B, 479 (1–2): 243–259, arXiv:hep-th/9606040, Bibcode:1996NuPhB.479..243S, doi:10.1016/0550-3213(96)00434-8, S2CID 14586676.
2. Becker, Katrin; Becker, Melanie; Strominger, Andrew (1995), "Fivebranes, membranes and non-perturbative string theory", Nuclear Physics B, 456 (1–2): 130–152, arXiv:hep-th/9507158, Bibcode:1995NuPhB.456..130B, doi:10.1016/0550-3213(95)00487-1, S2CID 14043557.
3. Harvey, Reese; Lawson, H. Blaine Jr. (1982), "Calibrated geometries", Acta Mathematica, 148 (1): 47–157, doi:10.1007/BF02392726.
4. Gross, Mark; Huybrechts, Daniel; Joyce, Dominic (2012). Calabi-Yau Manifolds and Related Geometries: Lectures at a Summer School in Nordfjordeid, Norway, June 2001. Springer Science & Business Media.
5. Gross, Mark (2012). "Mirror symmetry and the Strominger-Yau-Zaslow conjecture". Current Developments in Mathematics. 2012 (1): 133–191.
6. Bejleri, Dori (July 2016). "The SYZ conjecture via homological mirror symmetry". In Superschool on Derived Categories and D-branes. Springer, Cham. pp. 163–182.
7. Ganatra, Sheel; Perutz, Timothy; Sheridan, Nick (2015). "Mirror symmetry: from categories to curve counts". arXiv:1510.03839.
Wikipedia
Stromquist moving-knives procedure

The Stromquist moving-knives procedure is a procedure for envy-free cake-cutting among three players. It is named after Walter Stromquist who presented it in 1980.[1] This procedure was the first envy-free moving-knife procedure devised for three players. It requires four knives but only two cuts, so each player receives a single connected piece. There is no natural generalization to more than three players which divides the cake without extra cuts. The resulting partition is not necessarily efficient.[2]: 120–121

Procedure

A referee moves a sword from left to right over the cake, hypothetically dividing it into a small left piece and a large right piece. Each player moves a knife over the right piece, always keeping it parallel to the sword. The players must move their knives in a continuous manner, without making any "jumps".[3] When any player shouts "cut", the cake is cut by the sword and by whichever of the players' knives happens to be the central one of the three (that is, the second in order from the sword). Then the cake is divided in the following way:
• The piece to the left of the sword, which we denote Left, is given to the player who first shouted "cut". We call this player the "shouter" and the other two players the "quieters".
• The piece between the sword and the central knife, which we denote Middle, is given to the remaining player whose knife is closest to the sword.
• The remaining piece, Right, is given to the third player.

Strategy

Each player can act in a way that guarantees that—according to their own measure—no other player receives more than them:
• Always hold your knife such that it divides the part to the right of the sword into two pieces that are equal in your eyes (hence, your knife initially divides the entire cake into two equal parts and then moves rightwards as the sword moves rightwards).
• Shout "cut" when Left becomes equal to the piece you are about to receive if you remain quiet (i.e., if your knife is leftmost, shout "cut" if Left=Middle; if your knife is rightmost, shout if Left=Right; if your knife is central, shout "cut" if Left=Middle=Right).

Analysis

We now prove that any player using the above strategy receives an envy-free share. First, consider the two quieters. Each of them receives a piece that contains their own knife, so they do not envy each other. Additionally, because they remained quiet, the piece they receive is larger in their eyes than Left, so they also don't envy the shouter. The shouter receives Left, which is equal to the piece they could receive by remaining silent and larger than the third piece; hence the shouter does not envy any of the quieters. Following this strategy, each person gets the largest or one of the largest pieces by their own valuation, and therefore the division is envy-free. The same analysis shows that the division is envy-free even in the somewhat degenerate case when there are two shouters, and the leftmost piece is given to any of them.
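Because the procedure is described by continuous motions, it can be approximated numerically. The sketch below is entirely our own illustration (the density functions, grid resolution, step count, and tie-breaking are invented assumptions, and a discrete sweep of the sword stands in for the continuous motion); it follows the strategies above for three players whose valuations are given by probability densities on [0, 1].

def make_value(f, grid=4000):
    """Return value(a, b): the measure of [a, b] for density f,
    via a trapezoid-rule cumulative integral on a fixed grid."""
    cum = [0.0]
    for i in range(grid):
        cum.append(cum[-1] + (f(i / grid) + f((i + 1) / grid)) / (2 * grid))
    def value(a, b):
        def C(x):
            x = min(max(x, 0.0), 1.0)
            i = min(int(x * grid), grid - 1)
            frac = x * grid - i
            return cum[i] * (1 - frac) + cum[i + 1] * frac
        return C(b) - C(a)
    return value

def knife(value, t):
    """Knife position bisecting the part right of the sword at t."""
    lo, hi = t, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if value(t, mid) < value(mid, 1.0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def stromquist(densities, steps=1000):
    values = [make_value(f) for f in densities]
    for s in range(steps + 1):
        t = s / steps                        # sword position
        knives = [knife(v, t) for v in values]
        c = sorted(knives)[1]                # central knife = second cut
        for j, v in enumerate(values):
            # Piece j would get by staying quiet: Middle unless j's
            # knife is rightmost, in which case Right.
            quiet_piece = (t, c) if knives[j] <= c else (c, 1.0)
            if v(0.0, t) >= v(*quiet_piece) - 1e-9:   # shout condition
                others = sorted((p for p in range(3) if p != j),
                                key=lambda p: knives[p])
                return {j: (0.0, t),          # shouter gets Left
                        others[0]: (t, c),    # closer knife gets Middle
                        others[1]: (c, 1.0)}  # other quieter gets Right
    return None

# Invented example: uniform, increasing, and decreasing densities.
print(stromquist([lambda x: 1.0, lambda x: 2 * x, lambda x: 2 - 2 * x]))

Up to the discretization error, each returned interval is worth at least as much to its owner as either other interval, matching the envy-freeness argument above.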
"How to Cut a Cake Fairly". The American Mathematical Monthly. 87 (8): 640–644. doi:10.2307/2320951. JSTOR 2320951. 2. Brams, Steven J.; Taylor, Alan D. (1996). Fair division: from cake-cutting to dispute resolution. Cambridge University Press. ISBN 0-521-55644-9. 3. The importance of this continuity is explained here: "Stromquist's 3 knives procedure". Math Overflow. Retrieved 14 September 2014. 4. Robertson, Jack; Webb, William (1998). Cake-Cutting Algorithms: Be Fair If You Can. Natick, Massachusetts: A. K. Peters. ISBN 978-1-56881-076-8. LCCN 97041258. OL 2730675W.
Wikipedia
Stromquist–Woodall theorem

The Stromquist–Woodall theorem is a theorem in fair division and measure theory. Informally, it says that, for any cake, for any n people with different tastes, and for any fraction w, there exists a subset of the cake that all people value at exactly a fraction w of the total cake value, and it can be cut using at most $2n-2$ cuts.[1]

The theorem is about a circular 1-dimensional cake (a "pie"). Formally, it can be described as the interval [0,1] in which the two endpoints are identified. There are n continuous measures over the cake: $V_{1},\ldots ,V_{n}$; each measure represents the valuations of a different person over subsets of the cake. The theorem says that, for every weight $w\in [0,1]$, there is a subset $C_{w}$, which all people value at exactly $w$:
$\forall i=1,\ldots ,n:\,\,\,\,\,V_{i}(C_{w})=w$,
where $C_{w}$ is a union of at most $n-1$ intervals. This means that $2n-2$ cuts are sufficient for cutting the subset $C_{w}$. If the cake is not circular (that is, the endpoints are not identified), then $C_{w}$ may be the union of up to $n$ intervals, in case one interval is adjacent to 0 and another interval is adjacent to 1.

Proof sketch

Let $W\subseteq [0,1]$ be the subset of all weights for which the theorem is true. Then:
1. $1\in W$. Proof: take $C_{1}:=C$ (recall that the value measures are normalized such that all partners value the entire cake as 1).
2. If $w\in W$, then also $1-w\in W$. Proof: take $C_{1-w}:=C\smallsetminus C_{w}$. If $C_{w}$ is a union of $n-1$ intervals in a circle, then $C_{1-w}$ is also a union of $n-1$ intervals.
3. $W$ is a closed set. This is easy to prove, since the space of unions of $n-1$ intervals is a compact set under a suitable topology.
4. If $w\in W$, then also $w/2\in W$. This is the most interesting part of the proof; see below.
From 1–4, it follows that $W=[0,1]$. In other words, the theorem is valid for every possible weight.

Proof sketch for part 4
• Assume that $C_{w}$ is a union of $n-1$ intervals and that all $n$ partners value it as exactly $w$.
• Define the following function on the cake, $f:C\to \mathbb {R} ^{n}$: $f(t)=(t,t^{2},\ldots ,t^{n})\,\,\,\,\,\,t\in [0,1]$
• Define the following measures on $\mathbb {R} ^{n}$: $U_{i}(Y)=V_{i}(f^{-1}(Y)\cap C_{w})\,\,\,\,\,\,\,\,\,Y\subseteq \mathbb {R} ^{n}$
• Note that $f^{-1}(\mathbb {R} ^{n})=C$. Hence, for every partner $i$: $U_{i}(\mathbb {R} ^{n})=w$.
• Hence, by the Stone–Tukey theorem, there is a hyperplane that cuts $\mathbb {R} ^{n}$ into two half-spaces, $H,H'$, such that: $\forall i=1,\ldots ,n:\,\,\,\,\,U_{i}(H)=U_{i}(H')=w/2$
• Define $M=f^{-1}(H)\cap C_{w}$ and $M'=f^{-1}(H')\cap C_{w}$. Then, by the definition of the $U_{i}$: $\forall i=1,\ldots ,n:\,\,\,\,\,V_{i}(M)=V_{i}(M')=w/2$
• The set $C_{w}$ has $n-1$ connected components (intervals). Hence, its image $f(C_{w})$ also has $n-1$ connected components (1-dimensional curves in $\mathbb {R} ^{n}$).
• The hyperplane that forms the boundary between $H$ and $H'$ intersects $f(C_{w})$ in at most $n$ points. Hence, the total number of connected components (curves) in $H\cap f(C_{w})$ and $H'\cap f(C_{w})$ is at most $2n-1$. Hence, one of these must have at most $n-1$ components.
• Suppose it is $H$ that has at most $n-1$ components (curves). Then $M$ has at most $n-1$ components (intervals).
• Hence, we can take $C_{w/2}=M$. This proves that $w/2\in W$.
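For $n=2$ the theorem already has content: a single arc (two cuts) suffices on which both measures agree. The following Python sketch is our own numeric illustration (the densities, grid sizes, and search scheme are invented assumptions, not part of the theorem's proof): it scans start points, picks the endpoint where the first measure gives exactly $w$, and reports the arc where the second measure agrees best.

def make_cum(f, grid=4000):
    """Cumulative integral on [0, 1] of a density f (trapezoid rule)."""
    c = [0.0]
    for i in range(grid):
        c.append(c[-1] + (f(i / grid) + f((i + 1) / grid)) / (2 * grid))
    def cum(x):
        i = min(int(x * grid), grid - 1)
        frac = x * grid - i
        return c[i] * (1 - frac) + c[i + 1] * frac
    return cum

def find_arc(f1, f2, w, starts=2000):
    """Search for a circular arc [a, b] with V1 = V2 = w (both measures
    normalized to 1; the circle is [0, 1] with endpoints identified)."""
    c1, c2 = make_cum(f1), make_cum(f2)
    def val(cum, a, b):                      # arc value, wrapping at 1
        return cum(b) - cum(a) if a <= b else cum(1.0) - cum(a) + cum(b)
    best, err = None, float("inf")
    for s in range(starts):
        a = s / starts
        lo, hi = a, a + 1.0                  # choose b so measure 1 equals w
        for _ in range(40):
            mid = (lo + hi) / 2
            if val(c1, a, mid % 1.0) < w:
                lo = mid
            else:
                hi = mid
        b = ((lo + hi) / 2) % 1.0
        e = abs(val(c2, a, b) - w)           # how far measure 2 is from w
        if e < err:
            best, err = (a, b), e
    return best, err

# Invented example: uniform versus linearly increasing valuations.
print(find_arc(lambda x: 1.0, lambda x: 2 * x, w=0.4))

The theorem guarantees that the reported error tends to 0 as the search is refined; for general $n$, one instead searches over unions of at most $n-1$ arcs.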
Tightness proof

Stromquist and Woodall prove that the number $n-1$ is tight if the weight $w$ is either irrational, or rational with a reduced fraction $r/s$ such that $s\geq n$.

Proof sketch for $w=1/n$
• Choose $(n-1)(n+1)$ equally-spaced points along the circle; call them $P_{1},\ldots ,P_{(n-1)(n+1)}$.
• Define $n-1$ measures in the following way. Measure $i$ is concentrated in small neighbourhoods of the following $(n+1)$ points: $P_{i},P_{i+(n-1)},\ldots ,P_{i+n(n-1)}$. So, near each point $P_{i+k(n-1)}$, there is a fraction $1/(n+1)$ of measure $i$.
• Define the $n$-th measure as proportional to the length measure.
• Every subset whose consensus value is $1/n$ must touch at least two points for each of the first $n-1$ measures (since the value near each single point is $1/(n+1)$, which is slightly less than the required $1/n$). Hence, it must touch at least $2(n-1)$ points.
• On the other hand, every subset whose consensus value is $1/n$ must have total length $1/n$ (because of the $n$-th measure). The length of each "gap" between adjacent points is $1/{\big (}(n+1)(n-1){\big )}$; since $n$ gaps would already have total length $n/(n^{2}-1)>1/n$, the subset can contain at most $n-1$ gaps.
• The consensus subset must touch $2(n-1)$ points but contain at most $n-1$ gaps; hence it must contain at least $n-1$ intervals.

See also
• Fair cake-cutting
• Fair pie-cutting
• Exact division
• Stone–Tukey theorem

References
1. Stromquist, Walter; Woodall, D.R. (1985). "Sets on which several measures agree". Journal of Mathematical Analysis and Applications. 108: 241–248. doi:10.1016/0022-247x(85)90021-6.
Wikipedia
Frobenius pseudoprime

In number theory, a Frobenius pseudoprime is a pseudoprime, whose definition was inspired by the quadratic Frobenius test described by Jon Grantham in a 1998 preprint and published in 2000.[1][2] Frobenius pseudoprimes can be defined with respect to polynomials of degree at least 2, but they have been most extensively studied in the case of quadratic polynomials.[3][4]

Frobenius pseudoprimes w.r.t. quadratic polynomials

The definition of Frobenius pseudoprimes with respect to a monic quadratic polynomial $x^{2}-Px+Q$, where the discriminant $D=P^{2}-4Q$ is not a square, can be expressed in terms of Lucas sequences $U_{n}(P,Q)$ and $V_{n}(P,Q)$ as follows. A composite number n is a Frobenius $(P,Q)$ pseudoprime if and only if
$(1)\qquad \gcd(n,2QD)=1,$
$(2)\qquad U_{n-\delta }(P,Q)\equiv 0{\pmod {n}},$ and
$(3)\qquad V_{n-\delta }(P,Q)\equiv 2Q^{(1-\delta )/2}{\pmod {n}},$
where $\delta =\left({\tfrac {D}{n}}\right)$ is the Jacobi symbol. When condition (2) is satisfied, condition (3) becomes equivalent to
$(3')\qquad V_{n}(P,Q)\equiv P{\pmod {n}}.$
Therefore, a Frobenius $(P,Q)$ pseudoprime n can be equivalently defined by conditions (1-2) and (3), or by conditions (1-2) and (3′). Since conditions (2) and (3) hold for all primes which satisfy the simple condition (1), they can be used as a probable prime test. (If condition (1) fails, either the greatest common divisor is strictly between 1 and n, in which case it is a non-trivial factor and n is composite, or the GCD equals n, in which case one should try different parameters P and Q such that 2QD is not a multiple of n.)

Relations to other pseudoprimes

Every Frobenius $(P,Q)$ pseudoprime is also
• a Lucas pseudoprime with parameters $(P,Q)$, since it is defined by conditions (1) and (2);[2][3][5]
• a Dickson pseudoprime with parameters $(P,Q)$, since it is defined by conditions (1) and (3');[5]
• a Fermat pseudoprime base $|Q|$ when $|Q|>1$.
The converse of none of these statements is true, making the Frobenius $(P,Q)$ pseudoprimes a proper subset of each of the sets of Lucas pseudoprimes and Dickson pseudoprimes with parameters $(P,Q)$, and of Fermat pseudoprimes base $|Q|$ when $|Q|>1$. Furthermore, it follows that for the same parameters $(P,Q)$, a composite number is a Frobenius pseudoprime if and only if it is both a Lucas and a Dickson pseudoprime. In other words, for every fixed pair of parameters $(P,Q)$, the set of Frobenius pseudoprimes equals the intersection of the sets of Lucas and Dickson pseudoprimes. While each Frobenius $(P,Q)$ pseudoprime is a Lucas pseudoprime, it is not necessarily a strong Lucas pseudoprime. For example, 6721 is the first Frobenius pseudoprime for $(P,Q)=(1,-1)$ that is not a strong Lucas pseudoprime.

Every Frobenius pseudoprime with respect to $x^{3}-x-1$ is also a restricted Perrin pseudoprime. Analogous statements hold for other cubic polynomials of the form $x^{3}-rx^{2}+sx-1$.[2]

Examples

Frobenius pseudoprimes with respect to the Fibonacci polynomial $x^{2}-x-1$ are determined in terms of the Fibonacci numbers $F_{n}=U_{n}(1,-1)$ and Lucas numbers $L_{n}=V_{n}(1,-1)$. Such Frobenius pseudoprimes form the sequence: 4181, 5777, 6721, 10877, 13201, 15251, 34561, 51841, 64079, 64681, 67861, 68251, 75077, 90061, 96049, 97921, 100127, 113573, 118441, 146611, 161027, 162133, 163081, 186961, 197209, 219781, 231703, 252601, 254321, 257761, 268801, 272611, 283361, 302101, 303101, 330929, 399001, 430127, 433621, 438751, 489601, ... (sequence A212424 in the OEIS).
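These values are easy to check computationally with the standard doubling identities for Lucas sequences, $U_{2m}=U_{m}V_{m}$, $V_{2m}=V_{m}^{2}-2Q^{m}$, $U_{m+1}=(PU_{m}+V_{m})/2$ and $V_{m+1}=(DU_{m}+PV_{m})/2$. The Python sketch below is our own illustration of conditions (1)–(3) for odd $n>2$ (the function names and structure are invented; this is not Grantham's implementation):

from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def lucas_uv(P, Q, k, n):
    """(U_k mod n, V_k mod n, Q^k mod n) by binary doubling; n odd."""
    D = P * P - 4 * Q
    inv2 = pow(2, -1, n)                     # needs Python 3.8+
    U, V, q = 0, 2, 1                        # U_0, V_0, Q^0
    for bit in bin(k)[2:]:
        U, V, q = (U * V) % n, (V * V - 2 * q) % n, (q * q) % n
        if bit == "1":                       # index 2m -> 2m + 1
            U, V = ((P * U + V) * inv2) % n, ((D * U + P * V) * inv2) % n
            q = (q * Q) % n
    return U, V, q

def is_frobenius_pp(n, P, Q):
    """Conditions (1)-(3): primes always pass; composites that pass
    are Frobenius (P,Q) pseudoprimes."""
    D = P * P - 4 * Q
    if gcd(n, 2 * Q * D) != 1:               # condition (1)
        return False
    d = jacobi(D, n)
    U, V, _ = lucas_uv(P, Q, n - d, n)
    return U == 0 and V == 2 * pow(Q, (1 - d) // 2, n) % n

# 4181 = 37 * 113 is composite, yet passes for (P, Q) = (1, -1):
print(is_frobenius_pp(4181, 1, -1))          # True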
While 323 is the first Lucas pseudoprime with respect to the Fibonacci polynomial $x^{2}-x-1$, the first Frobenius pseudoprime with respect to the same polynomial is 4181. (Grantham stated it as 5777,[2] but multiple authors have noted that this is incorrect: 5777 is instead the first pseudoprime with $\left({\tfrac {5}{n}}\right)=-1$ for this polynomial.[3])

Another case: Frobenius pseudoprimes with respect to the quadratic polynomial $x^{2}-3x-1$ can be determined using the Lucas $(3,-1)$ sequence and are: 119, 649, 1189, 4187, 12871, 14041, 16109, 23479, 24769, 28421, 31631, 34997, 38503, 41441, 48577, 50545, 56279, 58081, 59081, 61447, 75077, 91187, 95761, 96139, 116821, 127937, 146329, 148943, 150281, 157693, 170039, 180517, 188501, 207761, 208349, 244649, 281017, 311579, 316409, 349441, 350173, 363091, 371399, 397927, 423721, 440833, 459191, 473801, 479119, 493697, ... (sequence A327655 in the OEIS). In this case, the first Frobenius pseudoprime with respect to the quadratic polynomial $x^{2}-3x-1$ is 119, which is also the first Lucas pseudoprime with respect to the same polynomial. Moreover, $\left({\tfrac {13}{119}}\right)=-1$.

The quadratic polynomial $x^{2}-3x-5$, i.e. $(P,Q)=(3,-5)$, has sparser pseudoprimes compared to many other simple quadratics. Using the same process as above, we get the sequence: 13333, 44801, 486157, 1615681, 3125281, 4219129, 9006401, 12589081, 13404751, 15576571, 16719781, …. Notice there are only 3 such pseudoprimes below 500000, while there are many Frobenius (1, −1) and (3, −1) pseudoprimes below 500000. Every entry in this sequence is a Fermat pseudoprime to base 5 as well as a Lucas (3, −5) pseudoprime, but the converse is not true: 642001 is both a psp-5 and a Lucas (3, −5) pseudoprime, but is not a Frobenius (3, −5) pseudoprime. (Note that a Lucas pseudoprime for a (P, Q) pair need not be a Fermat pseudoprime for base |Q|; e.g., 14209 is a Lucas (1, −3) pseudoprime, but not a Fermat pseudoprime for base 3.)

Strong Frobenius pseudoprimes

Strong Frobenius pseudoprimes are also defined.[2] Details on implementation for quadratic polynomials can be found in Crandall and Pomerance.[3] By imposing the restrictions that $\delta =-1$ and $Q\neq \pm 1$, the authors of [6] show how to choose $P$ and $Q$ such that there are only five odd, composite numbers less than $10^{15}$ for which (3) holds, that is, for which $V_{n+1}\equiv 2Q{\pmod {n}}$.

Pseudoprimality tests

The conditions defining a Frobenius pseudoprime can be used for testing a given number n for probable primality. Often such tests do not rely on fixed parameters $(P,Q)$, but rather select them in a certain way depending on the input number n in order to decrease the proportion of false positives, i.e., composite numbers that pass the test. Sometimes such composite numbers are commonly called Frobenius pseudoprimes, although they may correspond to different parameters. Using parameter selection ideas first laid out in Baillie and Wagstaff (1980)[7] as part of the Baillie–PSW primality test and used by Grantham in his quadratic Frobenius test,[8] one can create even better quadratic tests. In particular, it was shown that choosing parameters from quadratic non-residues modulo n (based on the Jacobi symbol) makes far stronger tests, and is one reason for the success of the Baillie–PSW primality test. For instance, for the parameters (P, 2), where P is the first odd integer that satisfies $\left({\tfrac {D}{n}}\right)=-1$, there are no pseudoprimes below $2^{64}$.
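As an illustration of this kind of parameter selection, here is a small sketch in the same style (again our own construction, reusing jacobi and is_frobenius_pp from the sketch above; a production-quality test would be more careful), applying the (P, 2) rule just quoted:

from math import gcd, isqrt

def frobenius_test_p2(n):
    """Probable-prime test with parameters (P, 2): pick the first odd P
    with (D/n) = -1 for D = P*P - 8, then check the Frobenius
    conditions.  Assumes odd n > 2; perfect squares are rejected first,
    since no suitable P exists for them."""
    if n % 2 == 0 or isqrt(n) ** 2 == n:
        return False
    P = 1
    while True:
        D = P * P - 8
        j = jacobi(D, n)
        if j == -1:
            break
        if j == 0:
            g = gcd(D % n, n)
            if 1 < g < n:
                return False                 # found a nontrivial factor
        P += 2
    return is_frobenius_pp(n, P, 2)

# Prints the odd primes below 60 (no composite this small passes):
print([n for n in range(3, 60, 2) if frobenius_test_p2(n)])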
Yet another test is proposed by Khashin.[9] For a given non-square number n, it first computes a parameter c as the smallest odd prime having Jacobi symbol $\left({\tfrac {c}{n}}\right)=-1$, and then verifies the congruence
$(1+{\sqrt {c}})^{n}\equiv (1-{\sqrt {c}}){\pmod {n}}$.
While all primes n pass this test, a composite n passes it if and only if n is a Frobenius pseudoprime for $(P,Q)=(2,1-c)$. Similar to the above example, Khashin notes that no pseudoprime has been found for his test. He further shows that any that exist under $2^{60}$ must have a factor less than 19 or have $c>128$.

Properties

The computational cost of the Frobenius pseudoprimality test with respect to quadratic polynomials is roughly three times the cost of a strong pseudoprimality test (e.g., a single round of the Miller–Rabin primality test), 1.5 times that of a Lucas pseudoprimality test, and slightly more than a Baillie–PSW primality test.

Note that the quadratic Frobenius test is stronger than the Lucas test. For example, 1763 is a Lucas pseudoprime to (P, Q) = (3, −1) since $U_{1764}(3,-1)\equiv 0{\pmod {1763}}$ ($U_{n}(3,-1)$ is given in OEIS: A006190), and it also passes the Jacobi step since $\left({\tfrac {13}{1763}}\right)=-1$, but it fails the Frobenius test with respect to $x^{2}-3x-1$. This property can be clearly seen when the algorithm is formulated as shown in Crandall and Pomerance Algorithm 3.6.9[3] or as shown by Loebenberger,[4] as the algorithm does a Lucas test followed by an additional check for the Frobenius condition.

While the quadratic Frobenius test does not have formal error bounds beyond those of the Lucas test, it can be used as the basis for methods with much smaller error bounds. Note that these have more steps, additional requirements, and non-negligible additional computation beyond what is described on this page. It is important to note that the error bounds for these methods do not apply to the standard or strong Frobenius tests with fixed values of (P,Q) described on this page. Based on this idea of pseudoprimes, algorithms with strong worst-case error bounds can be built. Grantham's quadratic Frobenius test (QFT),[8] which combines a quadratic Frobenius test with other conditions, has a bound of ${\tfrac {1}{7710}}$. Müller in 2001 proposed the MQFT test with bounds of essentially ${\tfrac {1}{131040^{t}}}$.[10] Damgård and Frandsen in 2003 proposed the EQFT with a bound of essentially ${\tfrac {256}{{331776}^{t}}}$.[11] Seysen in 2005 proposed the SQFT test with a bound of ${\tfrac {1}{{4096}^{t}}}$ and a SQFT3 test with a bound of ${\tfrac {16}{336442^{t}}}$.[12] Given the same computational effort, these offer better worst-case bounds than the commonly used Miller–Rabin primality test.

See also
• Pseudoprime
• Lucas pseudoprime
• Ferdinand Georg Frobenius
• Quadratic Frobenius test

References
1. Grantham, Jon (1998). Frobenius pseudoprimes (Report). preprint.
2. Grantham, Jon (2001). "Frobenius pseudoprimes". Mathematics of Computation. 70 (234): 873–891. arXiv:1903.06820. Bibcode:2001MaCom..70..873G. doi:10.1090/S0025-5718-00-01197-2.
3. Crandall, Richard; Pomerance, Carl (2005). Prime numbers: A computational perspective (2nd ed.). Springer-Verlag. ISBN 978-0-387-25282-7.
4. Loebenberger, Daniel (2008). "A Simple Derivation for the Frobenius Pseudoprime Test" (PDF). IACR Cryptology ePrint Archive. 2008.
5. Rotkiewicz, Andrzej (2003). "Lucas and Frobenius pseudoprimes" (PDF). Annales Mathematicae Silesianae. Wydawnictwo Uniwersytetu Śląskiego. 17: 17–39.
6. Robert Baillie; Andrew Fiori; Samuel S. Wagstaff, Jr. (July 2021).
"Strengthening the Baillie-PSW Primality Test". Mathematics of Computation. 90 (330): 1931–1955. arXiv:2006.14425. doi:10.1090/mcom/3616. ISSN 0025-5718. S2CID 220055722. 7. Baillie, Robert; Wagstaff, Samuel S. Jr. (October 1980). "Lucas Pseudoprimes" (PDF). Mathematics of Computation. 35 (152): 1391–1417. doi:10.1090/S0025-5718-1980-0583518-6. MR 0583518. 8. Grantham, Jon (1998). "A Probable Prime Test With High Confidence". Journal of Number Theory. 72 (1): 32–47. arXiv:1903.06823. CiteSeerX 10.1.1.56.8827. doi:10.1006/jnth.1998.2247. S2CID 119640473. 9. Khashin, Sergey (July 2013). "Counterexamples for Frobenius primality test". arXiv:1307.7920 [math.NT]. 10. Müller, Siguna (2001). "A Probable Prime Test with Very High Confidence for N Equiv 1 Mod 4". Proceedings of the 7th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology. ASIACRYPT. pp. 87–106. doi:10.1007/3-540-45682-1_6. ISBN 3-540-42987-5. 11. Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg (October 2006). "An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate" (PDF). Journal of Cryptology. 19 (4): 489–520. doi:10.1007/s00145-006-0332-x. S2CID 34417193. 12. Seysen, Martin. A Simplified Quadratic Frobenius Primality Test, 2005. External links • Weisstein, Eric W. "Frobenius Pseudoprime". MathWorld. • Weisstein, Eric W. "Strong Frobenius Pseudoprime". MathWorld. • Jacobsen, Dana Pseudoprime Statistics, Tables, and Data (data for Frobenius (1,-1) and (3,-5) pseudoprimes) Classes of natural numbers Powers and related numbers • Achilles • Power of 2 • Power of 3 • Power of 10 • Square • Cube • Fourth power • Fifth power • Sixth power • Seventh power • Eighth power • Perfect power • Powerful • Prime power Of the form a × 2b ± 1 • Cullen • Double Mersenne • Fermat • Mersenne • Proth • Thabit • Woodall Other polynomial numbers • Hilbert • Idoneal • Leyland • Loeschian • Lucky numbers of Euler Recursively defined numbers • Fibonacci • Jacobsthal • Leonardo • Lucas • Padovan • Pell • Perrin Possessing a specific set of other numbers • Amenable • Congruent • Knödel • Riesel • Sierpiński Expressible via specific sums • Nonhypotenuse • Polite • Practical • Primary pseudoperfect • Ulam • Wolstenholme Figurate numbers 2-dimensional centered • Centered triangular • Centered square • Centered pentagonal • Centered hexagonal • Centered heptagonal • Centered octagonal • Centered nonagonal • Centered decagonal • Star non-centered • Triangular • Square • Square triangular • Pentagonal • Hexagonal • Heptagonal • Octagonal • Nonagonal • Decagonal • Dodecagonal 3-dimensional centered • Centered tetrahedral • Centered cube • Centered octahedral • Centered dodecahedral • Centered icosahedral non-centered • Tetrahedral • Cubic • Octahedral • Dodecahedral • Icosahedral • Stella octangula pyramidal • Square pyramidal 4-dimensional non-centered • Pentatope • Squared triangular • Tesseractic Combinatorial numbers • Bell • Cake • Catalan • Dedekind • Delannoy • Euler • Eulerian • Fuss–Catalan • Lah • Lazy caterer's sequence • Lobb • Motzkin • Narayana • Ordered Bell • Schröder • Schröder–Hipparchus • Stirling first • Stirling second • Telephone number • Wedderburn–Etherington Primes • Wieferich • Wall–Sun–Sun • Wolstenholme prime • Wilson Pseudoprimes • Carmichael number • Catalan pseudoprime • Elliptic pseudoprime • Euler pseudoprime • Euler–Jacobi pseudoprime • Fermat pseudoprime • Frobenius pseudoprime • Lucas pseudoprime • Lucas–Carmichael number • 
Strong Nash equilibrium In game theory a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members.[1] While the Nash concept of stability defines equilibrium only in terms of unilateral deviations, strong Nash equilibrium allows for deviations by every conceivable coalition.[2] This equilibrium concept is particularly useful in areas such as the study of voting systems, in which there are typically many more players than possible outcomes, and so plain Nash equilibria are far too abundant. Strong Nash equilibrium: a solution concept in game theory. Relationship: subset of evolutionarily stable strategy (if the strong Nash equilibrium is not also weak). Used for: all non-cooperative games of more than 2 players. The strong Nash concept is criticized as too "strong" in that the environment allows for unlimited private communication. In fact, a strong Nash equilibrium has to be Pareto-efficient. As a result of these requirements, a strong Nash equilibrium rarely exists in games interesting enough to deserve study. Nevertheless, it is possible for there to be multiple strong Nash equilibria. For instance, in approval voting, there is always a strong Nash equilibrium for any Condorcet winner that exists, but this is only unique (apart from inconsequential changes) when there is a majority Condorcet winner. A relatively weaker yet refined Nash stability concept is called coalition-proof Nash equilibrium (CPNE),[2] in which the equilibria are immune only to multilateral deviations that are self-enforcing. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE.[3] Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions smaller than a specified size k. CPNE is related to the theory of the core. Confusingly, the concept of a strong Nash equilibrium is unrelated to that of a weak Nash equilibrium. That is, a Nash equilibrium can be both strong and weak, either, or neither. References 1. R. Aumann (1959), Acceptable points in general cooperative n-person games in "Contributions to the Theory of Games IV", Princeton Univ. Press, Princeton, N.J. 2. B. D. Bernheim; B. Peleg; M. D. Whinston (1987), "Coalition-Proof Equilibria I. Concepts", Journal of Economic Theory, 42: 1–12, doi:10.1016/0022-0531(87)90099-8. 3. D. Moreno; J. Wooders (1996), "Coalition-Proof Equilibrium", Games and Economic Behavior, 17: 80–112, doi:10.1006/game.1996.0095, hdl:10016/4408.
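As an illustration of the coalition condition in the definition above (a brute-force sketch, not an algorithm from the cited references), the following Python code finds the strong Nash equilibria of a finite two-player game: with two players the only coalitions are the two singletons and the grand coalition, and a deviation is taken to benefit a coalition when it strictly improves the payoff of every member.

```python
import itertools

def strong_nash_equilibria(payoff1, payoff2):
    """Brute-force the strong Nash equilibria of a bimatrix game.
    payoff1[i][j], payoff2[i][j] are the payoffs when the row player
    plays i and the column player plays j."""
    rows = range(len(payoff1))
    cols = range(len(payoff1[0]))
    result = []
    for i, j in itertools.product(rows, cols):
        # singleton coalitions: the ordinary Nash conditions
        if any(payoff1[i2][j] > payoff1[i][j] for i2 in rows):
            continue
        if any(payoff2[i][j2] > payoff2[i][j] for j2 in cols):
            continue
        # grand coalition: no joint deviation helping both players
        if any(payoff1[i2][j2] > payoff1[i][j] and
               payoff2[i2][j2] > payoff2[i][j]
               for i2, j2 in itertools.product(rows, cols)):
            continue
        result.append((i, j))
    return result

# Prisoner's dilemma: (Defect, Defect) is Nash but not strong Nash,
# since jointly switching to (Cooperate, Cooperate) helps both players;
# this reflects the Pareto-efficiency requirement noted above.
p1 = [[3, 0],   # row strategies: 0 = Cooperate, 1 = Defect
      [5, 1]]
p2 = [[3, 5],
      [0, 1]]
print(strong_nash_equilibria(p1, p2))   # [] -- no strong Nash equilibrium
```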
Strong cardinal In set theory, a strong cardinal is a type of large cardinal. It is a weakening of the notion of a supercompact cardinal. Formal definition If λ is any ordinal, κ is λ-strong means that κ is a cardinal number and there exists an elementary embedding j from the universe V into a transitive inner model M with critical point κ and $V_{\lambda }\subseteq M.$ That is, M agrees with V on an initial segment. Then κ is strong means that it is λ-strong for all ordinals λ. Relationship with other large cardinals By definition, strong cardinals lie below supercompact cardinals and above measurable cardinals in the consistency strength hierarchy. κ is κ-strong if and only if it is measurable. If κ is strong or λ-strong for λ ≥ κ+2, then the ultrafilter U witnessing that κ is measurable will be in Vκ+2 and thus in M. So for any α < κ, we have that there exists an ultrafilter U in j(Vκ) − j(Vα), remembering that j(α) = α. Using the elementary embedding backwards, we get that there is an ultrafilter in Vκ − Vα. So there are arbitrarily large measurable cardinals below κ; since κ is regular, κ is a limit of κ-many measurable cardinals. Strong cardinals also lie below superstrong cardinals and Woodin cardinals in consistency strength. However, the least strong cardinal is larger than the least superstrong cardinal. Every strong cardinal is strongly unfoldable and therefore totally indescribable. References • Kanamori, Akihiro (2003). The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.). Springer. ISBN 3-540-00384-3.
Strong coloring In graph theory, a strong coloring, with respect to a partition of the vertices into (disjoint) subsets of equal sizes, is a (proper) vertex coloring in which every color appears exactly once in every part. A graph is strongly k-colorable if, for each partition of the vertices into sets of size k, it admits a strong coloring. When the order of the graph G is not divisible by k, we add just enough isolated vertices to G to make the order of the new graph G′ divisible by k. In that case, a strong coloring of G′ minus the previously added isolated vertices is considered a strong coloring of G.[1] The strong chromatic number sχ(G) of a graph G is the least k such that G is strongly k-colorable. A graph is strongly k-chromatic if it has strong chromatic number k. Some properties of sχ(G): 1. sχ(G) > Δ(G). 2. sχ(G) ≤ 3 Δ(G) − 1.[2] 3. Asymptotically, sχ(G) ≤ 11 Δ(G) / 4 + o(Δ(G)).[3] Here, Δ(G) is the maximum degree. The strong chromatic number was independently introduced by Alon (1988)[4][5] and Fellows (1990).[6] Related problems Given a graph and a partition of the vertices, an independent transversal is a set U of non-adjacent vertices such that each part contains exactly one vertex of U. A strong coloring is equivalent to a partition of the vertices into disjoint independent transversals (each independent transversal is a single "color"). This is in contrast to graph coloring, which is a partition of the vertices of a graph into a given number of independent sets, without the requirement that these independent sets be transversals. To illustrate the difference between these concepts, consider a faculty with several departments, where the dean wants to construct a committee of faculty members. But some faculty members are in conflict and will not sit in the same committee. If the "conflict" relations are represented by the edges of a graph, then: • An independent set is a committee with no conflict. • An independent transversal is a committee with no conflict, with exactly one member from each department. • A graph coloring is a partitioning of the faculty members into committees with no conflict. • A strong coloring is a partitioning of the faculty members into committees with no conflict and with exactly one member from each department. Thus this problem is sometimes called the happy dean problem.[7] References 1. Jensen, Tommy R. (1995). Graph coloring problems. Toft, Bjarne. New York: Wiley. ISBN 0-471-02865-7. OCLC 30353850. 2. Haxell, P. E. (2004-11-01). "On the Strong Chromatic Number". Combinatorics, Probability and Computing. 13 (6): 857–865. doi:10.1017/S0963548304006157. ISSN 0963-5483. S2CID 6387358. 3. Haxell, P. E. (2008). "An improved bound for the strong chromatic number". Journal of Graph Theory. 58 (2): 148–158. doi:10.1002/jgt.20300. ISSN 1097-0118. S2CID 20457776. 4. Alon, N. (1988-10-01). "The linear arboricity of graphs". Israel Journal of Mathematics. 62 (3): 311–325. doi:10.1007/BF02783300. ISSN 0021-2172. 5. Alon, Noga (1992). "The strong chromatic number of a graph". Random Structures & Algorithms. 3 (1): 1–7. doi:10.1002/rsa.3240030102. 6. Fellows, Michael R. (1990-05-01). "Transversals of Vertex Partitions in Graphs". SIAM Journal on Discrete Mathematics. 3 (2): 206–215. doi:10.1137/0403018. ISSN 0895-4801. 7. Haxell, P. (2011-11-01). "On Forming Committees". The American Mathematical Monthly. 118 (9): 777–788. doi:10.4169/amer.math.monthly.118.09.777. ISSN 0002-9890. S2CID 27202372.
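The definition translates directly into a verifier. The sketch below (illustrative Python, not from the cited sources) checks that a given coloring is a strong coloring with respect to a partition into parts of size k: the coloring must be proper, use exactly k colors, and give every part each color exactly once.

```python
def is_strong_coloring(edges, partition, coloring):
    """Check a strong coloring.
    edges:     iterable of vertex pairs (u, v)
    partition: list of parts, each a list of k vertices
    coloring:  dict mapping each vertex to a color"""
    # proper coloring: endpoints of every edge get distinct colors
    if any(coloring[u] == coloring[v] for u, v in edges):
        return False
    k = len(partition[0])
    # exactly k colors in use overall
    if len(set(coloring.values())) != k:
        return False
    # every part has size k and receives k distinct colors
    return all(len(part) == k and len({coloring[v] for v in part}) == k
               for part in partition)

# The 6-cycle C6 with parts {0,2,4} and {1,3,5}: the coloring below is
# proper and gives each part all three colors, so it is strong.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
partition = [[0, 2, 4], [1, 3, 5]]
coloring = {0: "a", 1: "c", 2: "b", 3: "a", 4: "c", 5: "b"}
print(is_strong_coloring(edges, partition, coloring))  # True
```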
Strong connectivity augmentation Strong connectivity augmentation is a computational problem in the mathematical study of graph algorithms, in which the input is a directed graph and the goal of the problem is to add a small number of edges, or a set of edges with small total weight, so that the added edges make the graph into a strongly connected graph. The strong connectivity augmentation problem was formulated by Kapali Eswaran and Robert Tarjan (1976). They showed that a weighted version of the problem is NP-complete, but the unweighted problem can be solved in linear time.[1] Subsequent research has considered the approximation ratio and parameterized complexity of the weighted problem.[2][3] Unweighted version In the unweighted strong connectivity augmentation problem, the input is a directed graph and the goal is to add as few edges as possible to it to make the result into a strongly connected graph. The algorithm for the unweighted case by Eswaran and Tarjan considers the condensation of the given directed graph, a directed acyclic graph that has one vertex per strongly connected component of the given graph. Letting $s$ denote the number of source vertices in the condensation (strongly connected components with at least one outgoing edge but no incoming edges), $t$ denote the number of sink vertices in the condensation (strongly connected components with incoming but no outgoing edges), and $q$ denote the number of isolated vertices in the condensation (strongly connected components with neither incoming nor outgoing edges), they observe that the number of edges to be added is necessarily at least $\max(s+q,t+q)$. This follows because $s+q$ edges need to be added to provide an incoming edge for each source or isolated vertex, and symmetrically at least $t+q$ edges need to be added to provide an outgoing edge for each sink or isolated vertex. Their algorithm for the problem finds a set of exactly $\max(s+q,t+q)$ edges to add to the graph to make it strongly connected.[1] Their algorithm uses a depth-first search on the condensation to find a collection of pairs of sources and sinks, with the following properties:[1] • The source of each pair can reach the sink of the pair by a path in the given graph. • Every source that is not in one of the pairs can reach a sink in one of the pairs. • Every sink that is not in one of the pairs can be reached from a source in one of the pairs. A minor error in the part of their algorithm that finds the pairs of sources and sinks was later found and corrected.[4] Once these pairs have been found, one can obtain a strong connectivity augmentation by adding three sets of edges:[1] • The first set of edges connects the pairs and the isolated vertices of the condensation into a single cycle, consisting of one edge per pair or isolated vertex. • The second set of edges each connect one of the remaining sinks to one of the remaining sources (chosen arbitrarily). This links both the source and the sink to the cycle of pairs and isolated vertices at a cost of one edge per source-sink pair. • Once the previous two sets of edges have either exhausted all sources or exhausted all sinks, the third set of edges links each remaining source or sink to this cycle by adding one more edge per source or sink. 
The total number of edges in these three sets is $\max(s+q,t+q)$.[1] Weighted and parameterized version The weighted version of the problem, in which each edge that might be added has a given weight and the goal is to choose a set of added edges of minimum weight that makes the given graph strongly connected, is NP-complete.[1] An approximation algorithm with approximation ratio 2 was provided by Frederickson & Ja'Ja' (1981).[2] A parameterized and weighted version of the problem, in which one must add at most $k$ edges of minimum total weight to make the given graph strongly connected, is fixed-parameter tractable.[3] Bipartite version and grid bracing application If a square grid is made of rigid rods (the edges of the grid) connected to each other by flexible joints at the vertices of the grid, then the overall structure can bend in many ways rather than remaining square. The grid bracing problem asks how to stabilize such a structure by adding additional cross bracing within some of its squares. This problem can be modeled using graph theory, by making a bipartite graph with a vertex for each row or column of squares in the grid, and an edge between two of these vertices when a square in a given row and column is cross-braced. If the cross-bracing within each square makes that square completely rigid, then this graph is undirected, and represents a rigid structure if and only if it is a connected graph.[5] However, if squares are only partially braced (for instance by connecting two opposite corners by a string or wire that prevents expansive motion but does not prevent contractive motion), then the graph is directed, and represents a rigid structure if and only if it is a strongly connected graph.[6] An associated strong connectivity augmentation problem asks how to add more partial bracing to a grid that already has partial bracing in some of its squares. The existing partial bracing can be represented as a directed graph, and the additional partial bracing to be added should form a strong connectivity augmentation of that graph. In order to be able to translate a solution to the strong connectivity augmentation problem back to a solution of the original bracing problem, an extra restriction is required: each added edge must respect the bipartition of the original graph, and only connect row vertices with column vertices rather than attempting to connect rows to rows or columns to columns. This restricted version of the strong connectivity augmentation problem can again be solved in linear time.[7] References 1. Eswaran, Kapali P.; Tarjan, R. Endre (1976), "Augmentation problems", SIAM Journal on Computing, 5 (4): 653–665, doi:10.1137/0205044, MR 0449011 2. Frederickson, Greg N.; Ja'Ja', Joseph (1981), "Approximation algorithms for several graph augmentation problems", SIAM Journal on Computing, 10 (2): 270–283, doi:10.1137/0210019, MR 0615218 3. Klinkby, Kristine Vitting; Misra, Pranabendu; Saurabh, Saket (January 2021), "Strong connectivity augmentation is FPT", Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), Society for Industrial and Applied Mathematics, pp. 219–234, doi:10.1137/1.9781611976465.15 4. Raghavan, S. (2005), "A note on Eswaran and Tarjan's algorithm for the strong connectivity augmentation problem", in Golden, Bruce; Raghavan, S.; Wasil, Edward (eds.), The Next Wave in Computing, Optimization, and Decision Technologies, Operations Research/Computer Science Interfaces Series, vol. 29, Springer, pp. 19–26, doi:10.1007/0-387-23529-9_2 5. Graver, Jack E.
(2001), "2.6 The solution to the grid problem", Counting on Frameworks: Mathematics to Aid the Design of Rigid Structures, The Dolciani Mathematical Expositions, vol. 25, Washington, DC: Mathematical Association of America, pp. 50–55, ISBN 0-88385-331-0, MR 1843781 6. Baglivo, Jenny A.; Graver, Jack E. (1983), "3.10 Bracing structures", Incidence and Symmetry in Design and Architecture, Cambridge Urban and Architectural Studies, Cambridge University Press, pp. 76–88, ISBN 9780521297844 7. Gabow, Harold N.; Jordán, Tibor (2000), "How to make a square grid framework with cables rigid", SIAM Journal on Computing, 30 (2): 649–680, doi:10.1137/S0097539798347189, MR 1769375
Strong dual space In functional analysis and related areas of mathematics, the strong dual space of a topological vector space (TVS) $X$ is the continuous dual space $X^{\prime }$ of $X$ equipped with the strong (dual) topology or the topology of uniform convergence on bounded subsets of $X,$ where this topology is denoted by $b\left(X^{\prime },X\right)$ or $\beta \left(X^{\prime },X\right).$ The coarsest polar topology is called the weak topology. The strong dual space plays such an important role in modern functional analysis that the continuous dual space is usually assumed to have the strong dual topology unless indicated otherwise. To emphasize that the continuous dual space, $X^{\prime },$ has the strong dual topology, $X_{b}^{\prime }$ or $X_{\beta }^{\prime }$ may be written. Strong dual topology Throughout, all vector spaces will be assumed to be over the field $\mathbb {F} $ of either the real numbers $\mathbb {R} $ or complex numbers $\mathbb {C} .$ Definition from a dual system Main article: Dual system Let $(X,Y,\langle \cdot ,\cdot \rangle )$ be a dual pair of vector spaces over the field $\mathbb {F} $ of real numbers $\mathbb {R} $ or complex numbers $\mathbb {C} .$ For any $B\subseteq X$ and any $y\in Y,$ define $|y|_{B}=\sup _{x\in B}|\langle x,y\rangle |.$ Neither $X$ nor $Y$ has a topology, so we say that a subset $B\subseteq X$ is bounded by a subset $C\subseteq Y$ if $|y|_{B}<\infty $ for all $y\in C.$ So a subset $B\subseteq X$ is called bounded if and only if $\sup _{x\in B}|\langle x,y\rangle |<\infty \quad {\text{ for all }}y\in Y.$ This is equivalent to the usual notion of bounded subsets when $X$ is given the weak topology induced by $Y,$ which is a Hausdorff locally convex topology. Let ${\mathcal {B}}$ denote the family of all subsets $B\subseteq X$ bounded by elements of $Y$; that is, ${\mathcal {B}}$ is the set of all subsets $B\subseteq X$ such that for every $y\in Y,$ $|y|_{B}=\sup _{x\in B}|\langle x,y\rangle |<\infty .$ Then the strong topology $\beta (Y,X,\langle \cdot ,\cdot \rangle )$ on $Y,$ also denoted by $b(Y,X,\langle \cdot ,\cdot \rangle )$ or simply $\beta (Y,X)$ or $b(Y,X)$ if the pairing $\langle \cdot ,\cdot \rangle $ is understood, is defined as the locally convex topology on $Y$ generated by the seminorms of the form $|y|_{B}=\sup _{x\in B}|\langle x,y\rangle |,\qquad y\in Y,\qquad B\in {\mathcal {B}}.$ The definition of the strong dual topology now proceeds as in the case of a TVS. Note that if $X$ is a TVS whose continuous dual space separates points on $X,$ then $X$ is part of a canonical dual system $\left(X,X^{\prime },\langle \cdot ,\cdot \rangle \right)$ where $\left\langle x,x^{\prime }\right\rangle :=x^{\prime }(x).$ In the special case when $X$ is a locally convex space, the strong topology on the (continuous) dual space $X^{\prime }$ (that is, on the space of all continuous linear functionals $f:X\to \mathbb {F} $) is defined as the strong topology $\beta \left(X^{\prime },X\right),$ and it coincides with the topology of uniform convergence on bounded sets in $X,$ i.e.
with the topology on $X^{\prime }$ generated by the seminorms of the form $|f|_{B}=\sup _{x\in B}|f(x)|,\qquad {\text{ where }}f\in X^{\prime },$ where $B$ runs over the family of all bounded sets in $X.$ The space $X^{\prime }$ with this topology is called the strong dual space of the space $X$ and is denoted by $X_{\beta }^{\prime }.$ Definition on a TVS Suppose that $X$ is a topological vector space (TVS) over the field $\mathbb {F} .$ Let ${\mathcal {B}}$ be any fundamental system of bounded sets of $X$; that is, ${\mathcal {B}}$ is a family of bounded subsets of $X$ such that every bounded subset of $X$ is a subset of some $B\in {\mathcal {B}}$; the set of all bounded subsets of $X$ forms a fundamental system of bounded sets of $X.$ A basis of closed neighborhoods of the origin in $X^{\prime }$ is given by the polars $B^{\circ }:=\left\{x^{\prime }\in X^{\prime }:\sup _{x\in B}\left|x^{\prime }(x)\right|\leq 1\right\}$ as $B$ ranges over ${\mathcal {B}}.$ This is a locally convex topology that is given by the set of seminorms on $X^{\prime }$: $\left|x^{\prime }\right|_{B}:=\sup _{x\in B}\left|x^{\prime }(x)\right|$ as $B$ ranges over ${\mathcal {B}}.$ If $X$ is normable then so is $X_{b}^{\prime }$ and $X_{b}^{\prime }$ will in fact be a Banach space. If $X$ is a normed space with norm $\|\cdot \|$ then $X^{\prime }$ has a canonical norm (the operator norm) given by $\left\|x^{\prime }\right\|:=\sup _{\|x\|\leq 1}\left|x^{\prime }(x)\right|$; the topology that this norm induces on $X^{\prime }$ is identical to the strong dual topology. Bidual See also: Banach space § Bidual, Reflexive space, and Semi-reflexive space The bidual or second dual of a TVS $X,$ often denoted by $X^{\prime \prime },$ is the strong dual of the strong dual of $X$: $X^{\prime \prime }\,:=\,\left(X_{b}^{\prime }\right)^{\prime }$ where $X_{b}^{\prime }$ denotes $X^{\prime }$ endowed with the strong dual topology $b\left(X^{\prime },X\right).$ Unless indicated otherwise, the vector space $X^{\prime \prime }$ is usually assumed to be endowed with the strong dual topology induced on it by $X_{b}^{\prime },$ in which case it is called the strong bidual of $X$; that is, $X^{\prime \prime }\,:=\,\left(X_{b}^{\prime }\right)_{b}^{\prime }$ where the vector space $X^{\prime \prime }$ is endowed with the strong dual topology $b\left(X^{\prime \prime },X_{b}^{\prime }\right).$ Properties Let $X$ be a locally convex TVS. • A convex balanced weakly compact subset of $X^{\prime }$ is bounded in $X_{b}^{\prime }.$[1] • Every weakly bounded subset of $X^{\prime }$ is strongly bounded.[2] • If $X$ is a barreled space then $X$'s topology is identical to the strong dual topology $b\left(X,X^{\prime }\right)$ and to the Mackey topology on $X.$ • If $X$ is a metrizable locally convex space, then the strong dual of $X$ is a bornological space if and only if it is an infrabarreled space, if and only if it is a barreled space.[3] • If $X$ is Hausdorff locally convex TVS then $\left(X,b\left(X,X^{\prime }\right)\right)$ is metrizable if and only if there exists a countable set ${\mathcal {B}}$ of bounded subsets of $X$ such that every bounded subset of $X$ is contained in some element of ${\mathcal {B}}.$[4] • If $X$ is locally convex, then this topology is finer than all other ${\mathcal {G}}$-topologies on $X^{\prime }$ when considering only ${\mathcal {G}}$'s whose sets are subsets of $X.$ • If $X$ is a bornological space (e.g. metrizable or LF-space) then $X_{b(X^{\prime },X)}^{\prime }$ is complete.
If $X$ is a barrelled space, then its topology coincides with the strong topology $\beta \left(X,X^{\prime }\right)$ on $X$ and with the Mackey topology on $X$ generated by the pairing $\left(X,X^{\prime }\right).$ Examples If $X$ is a normed vector space, then its (continuous) dual space $X^{\prime }$ with the strong topology coincides with the Banach dual space $X^{\prime }$; that is, with the space $X^{\prime }$ with the topology induced by the operator norm. Conversely, the strong topology $\beta \left(X,X^{\prime }\right)$ on $X$ is identical to the topology induced by the norm on $X.$ See also • Dual topology • Dual system • List of topologies – List of concrete topologies and topological spaces • Polar topology – Dual space topology of uniform convergence on some sub-collection of bounded subsets • Reflexive space – Locally convex topological vector space • Semi-reflexive space • Strong topology • Topologies on spaces of linear maps References 1. Schaefer & Wolff 1999, p. 141. 2. Schaefer & Wolff 1999, p. 142. 3. Schaefer & Wolff 1999, p. 153. 4. Narici & Beckenstein 2011, pp. 225–273. Bibliography • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. • Wong (1979). Schwartz spaces, nuclear spaces, and tensor products. Berlin New York: Springer-Verlag. ISBN 3-540-09513-6. OCLC 5126158.
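As a standard concrete instance of the normed case described in the Examples section (a textbook example, not taken from this article's references): for the sequence space $c_{0}$ of null sequences with the supremum norm, the strong dual and strong bidual are the classical sequence-space duals.

```latex
\[
  X = c_0 , \qquad
  X'_b \,\cong\, \ell^1 , \qquad
  X'' = \bigl(X'_b\bigr)'_b \,\cong\, \ell^\infty ,
\]
% under the pairing \langle x, y \rangle = \sum_n x_n y_n ; since the
% canonical embedding c_0 \to \ell^\infty is not surjective, c_0 is
% not reflexive.
```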
Strong duality Strong duality is a condition in mathematical optimization in which the primal optimal objective and the dual optimal objective are equal. This is as opposed to weak duality (the primal problem has optimal value smaller than or equal to the dual problem, in other words the duality gap is greater than or equal to zero). Characterizations Strong duality holds if and only if the duality gap is equal to 0. Sufficient conditions Sufficient conditions comprise: • $F=F^{**}$ where $F$ is the perturbation function relating the primal and dual problems and $F^{**}$ is the biconjugate of $F$ (follows by construction of the duality gap) • $F$ is convex and lower semi-continuous (equivalent to the first point by the Fenchel–Moreau theorem) • the primal problem is a linear optimization problem • Slater's condition for a convex optimization problem[1][2] See also • Convex optimization References 1. Borwein, Jonathan; Lewis, Adrian (2006). Convex Analysis and Nonlinear Optimization: Theory and Examples (2 ed.). Springer. ISBN 978-0-387-29570-1. 2. Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 3, 2011.
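Because linearity of the primal is one of the sufficient conditions listed above, a zero duality gap can be exhibited numerically. The Python sketch below (assuming scipy is available; the LP data is made up for illustration) solves a small primal LP and its hand-derived dual and prints equal optimal values.

```python
import numpy as np
from scipy.optimize import linprog

# Primal:  min c.x   subject to  A x >= b,  x >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 4.0])

# linprog takes "<=" constraints, so negate A x >= b.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# Dual:  max b.y  subject to  A^T y <= c,  y >= 0  (minimize -b.y)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print(primal.fun)    # 7.0 -- primal optimum, attained at x = (2, 1)
print(-dual.fun)     # 7.0 -- dual optimum at y = (1, 1): zero duality gap
```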
Strong measure zero set In mathematical analysis, a strong measure zero set[1] is a subset A of the real line with the following property: for every sequence (εn) of positive reals there exists a sequence (In) of intervals such that |In| < εn for all n and A is contained in the union of the In. (Here |In| denotes the length of the interval In.) Every countable set is a strong measure zero set, and so is every union of countably many strong measure zero sets. Every strong measure zero set has Lebesgue measure 0. The Cantor set is an example of an uncountable set of Lebesgue measure 0 which is not of strong measure zero.[2] Borel's conjecture[1] states that every strong measure zero set is countable. It is now known that this statement is independent of ZFC (the Zermelo–Fraenkel axioms of set theory, which is the standard axiom system assumed in mathematics). This means that Borel's conjecture can neither be proven nor disproven in ZFC (assuming ZFC is consistent). Sierpiński proved in 1928 that the continuum hypothesis (which is now also known to be independent of ZFC) implies the existence of uncountable strong measure zero sets.[3] In 1976 Laver used a method of forcing to construct a model of ZFC in which Borel's conjecture holds.[4] These two results together establish the independence of Borel's conjecture. The following characterization of strong measure zero sets was proved in 1973: A set A ⊆ R has strong measure zero if and only if A + M ≠ R for every meagre set M ⊆ R.[5] This result establishes a connection to the notion of strongly meagre set, defined as follows: A set M ⊆ R is strongly meagre if and only if A + M ≠ R for every set A ⊆ R of Lebesgue measure zero. The dual Borel conjecture states that every strongly meagre set is countable. This statement is also independent of ZFC.[6] References 1. Borel, Émile (1919). "Sur la classification des ensembles de mesure nulle" (PDF). Bull. Soc. Math. France. 47: 97–125. doi:10.24033/bsmf.996. 2. Jech, Thomas (2003). Set Theory: The Third Millennium Edition, Revised and Expanded. Springer Monographs in Mathematics (3rd ed.). Springer. p. 539. ISBN 978-3540440857. 3. Sierpiński, W. (1928). "Sur un ensemble non denombrable, dont toute image continue est de mesure nulle" (PDF). Fundamenta Mathematicae (in French). 11 (1): 302–4. doi:10.4064/fm-11-1-302-303. 4. Laver, Richard (1976). "On the consistency of Borel's conjecture". Acta Math. 137 (1): 151–169. doi:10.1007/BF02392416. 5. Galvin, F.; Mycielski, J.; Solovay, R.M. (1973). "Strong measure zero sets". Notices of the American Mathematical Society. 26. 6. Carlson, Timothy J. (1993). "Strong measure zero and strongly meager sets". Proc. Amer. Math. Soc. 118 (2): 577–586. doi:10.1090/s0002-9939-1993-1139474-6. JSTOR 2160341.
Strong partition cardinal In Zermelo–Fraenkel set theory without the axiom of choice, a strong partition cardinal is an uncountable well-ordered cardinal $k$ such that every partition of the set $[k]^{k}$ of size-$k$ subsets of $k$ into fewer than $k$ pieces has a homogeneous set of size $k$. The existence of strong partition cardinals contradicts the axiom of choice. The Axiom of determinacy implies that ℵ1 is a strong partition cardinal. References • Henle, James M.; Kleinberg, Eugene M.; Watro, Ronald J. (1984), "On the ultrafilters and ultrapowers of strong partition cardinals", Journal of Symbolic Logic, 49 (4): 1268–1272, doi:10.2307/2274277, JSTOR 2274277, S2CID 45989875 • Apter, Arthur W.; Henle, James M.; Jackson, Stephen C. (1999), "The calculus of partition sequences, changing cofinalities, and a question of Woodin", Transactions of the American Mathematical Society, 352 (3): 969–1003, doi:10.1090/S0002-9947-99-02554-4, JSTOR 118097, MR 1695015.
Strong perfect graph theorem In graph theory, the strong perfect graph theorem is a forbidden graph characterization of the perfect graphs as being exactly the graphs that have neither odd holes (odd-length induced cycles of length at least 5) nor odd antiholes (complements of odd holes). It was conjectured by Claude Berge in 1961. A proof by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas was announced in 2002[1] and published by them in 2006. The proof of the strong perfect graph theorem won for its authors a $10,000 prize offered by Gérard Cornuéjols of Carnegie Mellon University[2] and the 2009 Fulkerson Prize.[3] Statement A perfect graph is a graph in which, for every induced subgraph, the size of the maximum clique equals the minimum number of colors in a coloring of the graph; perfect graphs include many well-known graph classes including the bipartite graphs, chordal graphs, and comparability graphs. In his 1961 and 1963 works defining for the first time this class of graphs, Claude Berge observed that it is impossible for a perfect graph to contain an odd hole, an induced subgraph in the form of an odd-length cycle graph of length five or more, because odd holes have clique number two and chromatic number three. Similarly, he observed that perfect graphs cannot contain odd antiholes, induced subgraphs complementary to odd holes: an odd antihole with 2k + 1 vertices has clique number k and chromatic number k + 1, which is again impossible for perfect graphs. The graphs having neither odd holes nor odd antiholes became known as the Berge graphs. Berge conjectured that every Berge graph is perfect, or equivalently that the perfect graphs and the Berge graphs define the same class of graphs. This became known as the strong perfect graph conjecture, until its proof in 2002, when it was renamed the strong perfect graph theorem. Relation to the weak perfect graph theorem Another conjecture of Berge, proved in 1972 by László Lovász, is that the complement of every perfect graph is also perfect. This became known as the perfect graph theorem, or (to distinguish it from the strong perfect graph conjecture/theorem) the weak perfect graph theorem. Because Berge's forbidden graph characterization is self-complementary, the weak perfect graph theorem follows immediately from the strong perfect graph theorem. Proof ideas The proof of the strong perfect graph theorem by Chudnovsky et al. follows an outline conjectured in 2001 by Conforti, Cornuéjols, Robertson, Seymour, and Thomas, according to which every Berge graph either forms one of five types of basic building block (special classes of perfect graphs) or it has one of four different types of structural decomposition into simpler graphs. A minimally imperfect Berge graph cannot have any of these decompositions, from which it follows that no counterexample to the theorem can exist.[4] This idea was based on previous conjectured structural decompositions of similar type that would have implied the strong perfect graph conjecture but turned out to be false.[5] The five basic classes of perfect graphs that form the base case of this structural decomposition are the bipartite graphs, line graphs of bipartite graphs, complementary graphs of bipartite graphs, complements of line graphs of bipartite graphs, and double split graphs. It is easy to see that bipartite graphs are perfect: in any nontrivial induced subgraph, the clique number and chromatic number are both two and therefore both equal. 
The perfection of complements of bipartite graphs, and of complements of line graphs of bipartite graphs, are both equivalent to Kőnig's theorem relating the sizes of maximum matchings, maximum independent sets, and minimum vertex covers in bipartite graphs. The perfection of line graphs of bipartite graphs can be stated equivalently as the fact that bipartite graphs have chromatic index equal to their maximum degree, proven by Kőnig (1916). Thus, all four of these basic classes are perfect. The double split graphs are a relative of the split graphs that can also be shown to be perfect.[6] The four types of decompositions considered in this proof are 2-joins, complements of 2-joins, balanced skew partitions, and homogeneous pairs. A 2-join is a partition of the vertices of a graph into two subsets, with the property that the edges spanning the cut between these two subsets form two vertex-disjoint complete bipartite graphs. When a graph has a 2-join, it may be decomposed into induced subgraphs called "blocks", by replacing one of the two subsets of vertices by a shortest path within that subset that connects one of the two complete bipartite graphs to the other; when no such path exists, the block is formed instead by replacing one of the two subsets of vertices by two vertices, one for each complete bipartite subgraph. A graph with a 2-join is perfect if and only if its two blocks are both perfect. Therefore, if a minimally imperfect graph has a 2-join, it must equal one of its blocks, from which it follows that it must be an odd cycle and not Berge. For the same reason, a minimally imperfect graph whose complement has a 2-join cannot be Berge.[7] A skew partition is a partition of a graph's vertices into two subsets, one of which induces a disconnected subgraph and the other of which has a disconnected complement; Chvátal (1985) had conjectured that no minimal counterexample to the strong perfect graph conjecture could have a skew partition. Chudnovsky et al. introduced some technical constraints on skew partitions, and were able to show that Chvátal's conjecture is true for the resulting "balanced skew partitions". The full conjecture is a corollary of the strong perfect graph theorem.[8] A homogeneous pair is related to a modular decomposition of a graph. It is a partition of the graph into three subsets V1, V2, and V3 such that V1 and V2 together contain at least three vertices, V3 contains at least two vertices, and for each vertex v in V3 and each i in {1,2} either v is adjacent to all vertices in Vi or to none of them. It is not possible for a minimally imperfect graph to have a homogeneous pair.[9] Subsequent to the proof of the strong perfect graph conjecture, Chudnovsky (2006) simplified it by showing that homogeneous pairs could be eliminated from the set of decompositions used in the proof. The proof that every Berge graph falls into one of the five basic classes or has one of the four types of decomposition follows a case analysis, according to whether certain configurations exist within the graph: a "stretcher", a subgraph that can be decomposed into three induced paths subject to certain additional constraints, the complement of a stretcher, and a "proper wheel", a configuration related to a wheel graph, consisting of an induced cycle together with a hub vertex adjacent to at least three cycle vertices and obeying several additional constraints.
For each possible choice of whether a stretcher or its complement or a proper wheel exists within the given Berge graph, the graph can be shown to be in one of the basic classes or to be decomposable.[10] This case analysis completes the proof. Notes 1. Mackenzie (2002); Cornuéjols (2002). 2. Mackenzie (2002). 3. "2009 Fulkerson Prizes" (PDF), Notices of the American Mathematical Society: 1475–1476, December 2011. 4. Cornuéjols (2002), Conjecture 5.1. 5. Reed (1986); Hougardy (1991); Rusu (1997); Roussel, Rusu & Thuillier (2009), section 4.6 "The first conjectures". 6. Roussel, Rusu & Thuillier (2009), Definition 4.39. 7. Cornuéjols & Cunningham (1985); Cornuéjols (2002), Theorem 3.2 and Corollary 3.3. 8. Seymour (2006); Roussel, Rusu & Thuillier (2009), section 4.7 "The skew partition"; Cornuéjols (2002), Theorems 4.1 and 4.2. 9. Chvátal & Sbihi (1987); Cornuéjols (2002), Theorem 4.10. 10. Cornuéjols (2002), Theorems 5.4, 5.5, and 5.6; Roussel, Rusu & Thuillier (2009), Theorem 4.42. References • Berge, Claude (1961), "Färbung von Graphen, deren sämtliche bzw. deren ungerade Kreise starr sind", Wiss. Z. Martin-Luther-Univ. Halle-Wittenberg Math.-Natur. Reihe, 10: 114. • Berge, Claude (1963), "Perfect graphs", Six Papers on Graph Theory, Calcutta: Indian Statistical Institute, pp. 1–21. • Chudnovsky, Maria (2006), "Berge trigraphs", Journal of Graph Theory, 53 (1): 1–55, doi:10.1002/jgt.20165, MR 2245543. • Chudnovsky, Maria; Robertson, Neil; Seymour, Paul; Thomas, Robin (2006), "The strong perfect graph theorem", Annals of Mathematics, 164 (1): 51–229, arXiv:math/0212070, doi:10.4007/annals.2006.164.51, MR 2233847. • Chudnovsky, Maria; Robertson, Neil; Seymour, Paul; Thomas, Robin (2003), "Progress on perfect graphs", Mathematical Programming, Series B., 97 (1–2): 405–422, CiteSeerX 10.1.1.137.3013, doi:10.1007/s10107-003-0449-8, MR 2004404. • Chvátal, Václav (1985), "Star-cutsets and perfect graphs", Journal of Combinatorial Theory, Series B, 39 (3): 189–199, doi:10.1016/0095-8956(85)90049-8, MR 0815391. • Chvátal, Václav; Sbihi, Najiba (1987), "Bull-free Berge graphs are perfect", Graphs and Combinatorics, 3 (2): 127–139, doi:10.1007/BF01788536, MR 0932129. • Cornuéjols, Gérard (2002), "The strong perfect graph conjecture", Proceedings of the International Congress of Mathematicians, Vol. III (Beijing, 2002) (PDF), Beijing: Higher Ed. Press, pp. 547–559, MR 1957560. • Cornuéjols, G.; Cunningham, W. H. (1985), "Compositions for perfect graphs", Discrete Mathematics, 55 (3): 245–254, doi:10.1016/S0012-365X(85)80001-7, MR 0802663. • Hougardy, S. (1991), Counterexamples to three conjectures concerning perfect graphs, Technical Report RR870-M, Grenoble, France: Laboratoire Artemis-IMAG, Universitá Joseph Fourier. As cited by Roussel, Rusu & Thuillier (2009). • Kőnig, Dénes (1916), "Gráfok és alkalmazásuk a determinánsok és a halmazok elméletére", Matematikai és Természettudományi Értesítő, 34: 104–119. • Lovász, László (1972a), "Normal hypergraphs and the perfect graph conjecture", Discrete Mathematics, 2 (3): 253–267, doi:10.1016/0012-365X(72)90006-4. • Lovász, László (1972b), "A characterization of perfect graphs", Journal of Combinatorial Theory, Series B, 13 (2): 95–98, doi:10.1016/0095-8956(72)90045-7. • Mackenzie, Dana (July 5, 2002), "Mathematics: Graph theory uncovers the roots of perfection", Science, 297 (5578): 38, doi:10.1126/science.297.5578.38, PMID 12098683. • Reed, B. A. (1986), A semi-strong perfect graph theorem, Ph.D. 
thesis, Montréal, Québec, Canada: Department of Computer Science, McGill University. As cited by Roussel, Rusu & Thuillier (2009). • Roussel, F.; Rusu, I.; Thuillier, H. (2009), "The strong perfect graph conjecture: 40 years of attempts, and its resolution", Discrete Mathematics, 309 (20): 6092–6113, doi:10.1016/j.disc.2009.05.024, MR 2552645. • Rusu, Irena (1997), "Building counterexamples", Discrete Mathematics, 171 (1–3): 213–227, doi:10.1016/S0012-365X(96)00081-7, MR 1454452. • Seymour, Paul (2006), "How the proof of the strong perfect graph conjecture was found" (PDF), Gazette des Mathématiciens (109): 69–83, MR 2245898. External links • The Strong Perfect Graph Theorem, Václav Chvátal • Weisstein, Eric W. "Strong Perfect Graph Theorem". MathWorld.
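Although the proof itself is far from algorithmic, the Berge condition in the statement of the theorem can be checked directly on small graphs by exhaustive search. The Python sketch below (illustrative only, with exponential running time) looks for an odd hole as an induced subgraph, on an odd number of at least five vertices, that is connected and 2-regular, in the graph and in its complement.

```python
from itertools import combinations

def _has_odd_hole(adj, n):
    """True if some induced odd cycle of length >= 5 exists.
    An induced subgraph is a chordless cycle exactly when it is
    connected and every vertex has degree 2 within it."""
    for k in range(5, n + 1, 2):
        for subset in combinations(range(n), k):
            s = set(subset)
            if any(len(adj[v] & s) != 2 for v in subset):
                continue
            reached, frontier = {subset[0]}, [subset[0]]
            while frontier:                 # traverse inside the subset
                v = frontier.pop()
                for w in adj[v] & (s - reached):
                    reached.add(w)
                    frontier.append(w)
            if len(reached) == k:
                return True
    return False

def is_berge(edges, n):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    co = [set(range(n)) - adj[v] - {v} for v in range(n)]  # complement
    return not _has_odd_hole(adj, n) and not _has_odd_hole(co, n)

# C5 is itself an odd hole, hence not Berge (and not perfect);
# C6 is bipartite, hence Berge (and, by the theorem, perfect).
c5 = [(i, (i + 1) % 5) for i in range(5)]
c6 = [(i, (i + 1) % 6) for i in range(6)]
print(is_berge(c5, 5))   # False
print(is_berge(c6, 6))   # True
```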
Strong positional game A strong positional game (also called a Maker-Maker game) is a kind of positional game.[1]: 9–12  Like most positional games, it is described by its set of positions ($X$) and its family of winning-sets (${\mathcal {F}}$, a family of subsets of $X$). It is played by two players, called First and Second, who alternately take previously-untaken positions. In a strong positional game, the winner is the first player who holds all the elements of a winning-set. If all positions are taken and no player wins, then it is a draw. Classic Tic-tac-toe is an example of a strong positional game. First player advantage In a strong positional game, Second cannot have a winning strategy. This can be proved by a strategy-stealing argument: if Second had a winning strategy, then First could have stolen it and won too, but this is impossible since there is only one winner.[1]: 9  Therefore, for every strong-positional game there are only two options: either First has a winning strategy, or Second has a drawing strategy. An interesting corollary is that, if a certain game does not have draw positions, then First always has a winning strategy. Comparison to Maker-Breaker game Every strong positional game has a variant that is a Maker-Breaker game. In that variant, only the first player ("Maker") can win by holding a winning-set. The second player ("Breaker") can win only by preventing Maker from holding a winning-set. For fixed $X$ and ${\mathcal {F}}$, the strong-positional variant is strictly harder for the first player, since in it, he needs to both "attack" (try to get a winning-set) and "defend" (prevent the second player from getting one), while in the maker-breaker variant, the first player can focus only on "attack". Hence, every winning-strategy of First in a strong-positional game is also a winning-strategy of Maker in the corresponding maker-breaker game. The opposite is not true. For example, in the maker-breaker variant of Tic-Tac-Toe, Maker has a winning strategy, but in its strong-positional (classic) variant, Second has a drawing strategy.[2] Similarly, the strong-positional variant is strictly easier for the second player: every winning strategy of Breaker in a maker-breaker game is also a drawing-strategy of Second in the corresponding strong-positional game, but the opposite is not true. The extra-set paradox Suppose First has a winning strategy. Now, we add a new set to ${\mathcal {F}}$. Contrary to intuition, it is possible that this new set will now destroy the winning strategy and make the game a draw. Intuitively, the reason is that First might have to spend some moves to prevent Second from owning this extra set.[3] The extra-set paradox does not appear in Maker-Breaker games. Examples The clique game The clique game is an example of a strong positional game. It is parametrized by two integers, n and N. In it: • $X$ contains all edges of the complete graph on {1,...,N}; • ${\mathcal {F}}$ contains all cliques of size n. According to Ramsey's theorem, there exists some number R(n,n) such that, for every N > R(n,n), in every two-coloring of the complete graph on {1,...,N}, one of the colors must contain a clique of size n. Therefore, by the above corollary, when N > R(n,n), First always has a winning strategy.[1]: 10  Multi-dimensional tic-tac-toe Consider the game of tic-tac-toe played in a d-dimensional cube of length n.
By the Hales–Jewett theorem, when d is large enough (as a function of n), every 2-coloring of the cube-cells contains a monochromatic geometric line. Therefore, by the above corollary, First always has a winning strategy. Open questions Besides these existential results, there are few constructive results related to strong-positional games. For example, while it is known that the first player has a winning strategy in a sufficiently large clique game, no specific winning strategy is currently known.[1]: 11–12  References 1. Hefetz, Dan; Krivelevich, Michael; Stojaković, Miloš; Szabó, Tibor (2014). Positional Games. Oberwolfach Seminars. Vol. 44. Basel: Birkhäuser Verlag GmbH. ISBN 978-3-0348-0824-8. 2. Kruczek, Klay; Eric Sundberg (2010). "Potential-based strategies for tic-tac-toe on the integer latticed with numerous directions". The Electronic Journal of Combinatorics. 17: R5. 3. Beck, József (2008). Combinatorial Games: Tic-Tac-Toe Theory. Cambridge: Cambridge University Press. ISBN 978-0-521-46100-9.
Strong product of graphs In graph theory, the strong product is a way of combining two graphs to make a larger graph. Two vertices are adjacent in the strong product when they come from pairs of vertices in the factor graphs that are either adjacent or identical. The strong product is one of several different graph product operations that have been studied in graph theory. The strong product of any two graphs can be constructed as the union of two other products of the same two graphs, the Cartesian product of graphs and the tensor product of graphs. An example of a strong product is the king's graph, the graph of moves of a chess king on a chessboard, which can be constructed as a strong product of path graphs. Decompositions of planar graphs and related graph classes into strong products have been used as a central tool to prove many other results about these graphs. Care should be exercised when encountering the term strong product in the literature, since it has also been used to denote the tensor product of graphs.[1] Definition and example The strong product G ⊠ H of graphs G and H is a graph such that[2] the vertex set of G ⊠ H is the Cartesian product V(G) × V(H), and distinct vertices (u,u′) and (v,v′) are adjacent in G ⊠ H if and only if: u = v and u′ is adjacent to v′, or u′ = v′ and u is adjacent to v, or u is adjacent to v and u′ is adjacent to v′. It is the union of the Cartesian product and the tensor product. For example, the king's graph, a graph whose vertices are squares of a chessboard and whose edges represent possible moves of a chess king, is a strong product of two path graphs. Its horizontal and vertical edges come from the Cartesian product, and its diagonal edges come from the tensor product of the same two paths. Together, these two kinds of edges make up the entire strong product.[3] Properties and applications Every planar graph is a subgraph of a strong product of a path and a graph of treewidth at most six.[4][5] This result has been used to prove that planar graphs have bounded queue number,[4] small universal graphs and concise adjacency labeling schemes,[6][7][8][9] and bounded nonrepetitive chromatic number[10] and centered chromatic number.[11] This product structure can be found in linear time.[12][13] Beyond planar graphs, extensions of these results have been proven for graphs of bounded genus,[4][14] graphs with a forbidden minor that is an apex graph,[4] bounded-degree graphs with any forbidden minor,[15] and k-planar graphs.[16] The clique number of the strong product of any two graphs equals the product of the clique numbers of the two graphs.[17] If two graphs both have bounded twin-width, and in addition one of them has bounded degree, then their strong product also has bounded twin-width.[18] A leaf power is a graph formed from the leaves of a tree by making two leaves adjacent when their distance in the tree is below some threshold $k$. If $G$ is a $k$-leaf power of a tree $T$, then $T$ can be found as a subgraph of a strong product of $G$ with a $k$-vertex cycle. This embedding has been used in recognition algorithms for leaf powers.[19] The strong product of a 7-vertex cycle graph and a 4-vertex complete graph, $C_{7}\boxtimes K_{4}$, has been suggested as a possibility for a 10-chromatic biplanar graph that would improve the known bounds on the Earth–Moon problem; another suggested example is the graph obtained by removing any vertex from $C_{5}\boxtimes K_{4}$.
In both cases, the number of vertices in these graphs is more than 9 times the size of their largest independent set, implying that their chromatic number is at least 10. However, it is not known whether these graphs are biplanar.[20] References 1. See page 2 of Lovász, László (1979), "On the Shannon Capacity of a Graph", IEEE Transactions on Information Theory, IT-25 (1): 1–7, doi:10.1109/TIT.1979.1055985. 2. Imrich, Wilfried; Klavžar, Sandi; Rall, Douglas F. (2008), Graphs and their Cartesian Product, A. K. Peters, ISBN 978-1-56881-429-2. 3. Berend, Daniel; Korach, Ephraim; Zucker, Shira (2005), "Two-anticoloring of planar and related graphs" (PDF), 2005 International Conference on Analysis of Algorithms, Discrete Mathematics & Theoretical Computer Science Proceedings, Nancy: Association for Discrete Mathematics & Theoretical Computer Science, pp. 335–341, MR 2193130. 4. Dujmović, Vida; Joret, Gwenaël; Micek, Piotr; Morin, Pat; Ueckerdt, Torsten; Wood, David R. (2020), "Planar graphs have bounded queue-number", Journal of the ACM, 67 (4): Art. 22, 38, arXiv:1904.04791, doi:10.1145/3385731, MR 4148600 5. Ueckerdt, Torsten; Wood, David R.; Yi, Wendy (2022), "An improved planar graph product structure theorem", Electronic Journal of Combinatorics, 29 (2), Paper No. 2.51, doi:10.37236/10614, MR 4441087, S2CID 236772054 6. Dujmović, Vida; Esperet, Louis; Gavoille, Cyril; Joret, Gwenaël; Micek, Piotr; Morin, Pat (2021), "Adjacency labelling for planar graphs (and beyond)", Journal of the ACM, 68 (6): Art. 42, 33, arXiv:2003.04280, doi:10.1145/3477542, MR 4402353 7. Gawrychowski, Pawel; Janczewski, Wojciech (2022), "Simpler adjacency labeling for planar graphs with B-trees", in Bringmann, Karl; Chan, Timothy (eds.), 5th Symposium on Simplicity in Algorithms, SOSA@SODA 2022, Virtual Conference, January 10-11, 2022, Society for Industrial and Applied Mathematics, pp. 24–36, doi:10.1137/1.9781611977066.3, S2CID 245738461 8. Esperet, Louis; Joret, Gwenaël; Morin, Pat (2020), Sparse universal graphs for planarity, arXiv:2010.05779 9. Huynh, Tony; Mohar, Bojan; Šámal, Robert; Thomassen, Carsten; Wood, David R. (2021), Universality in minor-closed graph classes, arXiv:2109.00327 10. Dujmović, Vida; Esperet, Louis; Joret, Gwenaël; Walczak, Bartosz; Wood, David R. (2020), "Planar graphs have bounded nonrepetitive chromatic number", Advances in Combinatorics: Paper No. 5, 11, MR 4125346 11. Dębski, Michał; Felsner, Stefan; Micek, Piotr; Schröder, Felix (2021), "Improved bounds for centered colorings", Advances in Combinatorics, Paper No. 8, arXiv:1907.04586, doi:10.19086/aic.27351, MR 4309118, S2CID 195874032 12. Morin, Pat (2021), "A fast algorithm for the product structure of planar graphs", Algorithmica, 83 (5): 1544–1558, arXiv:2004.02530, doi:10.1007/s00453-020-00793-5, MR 4242109, S2CID 254028754 13. Bose, Prosenjit; Morin, Pat; Odak, Saeed (2022), An optimal algorithm for the product structure of planar graphs, arXiv:2202.08870 14. Distel, Marc; Hickingbotham, Robert; Huynh, Tony; Wood, David R. (2022), "Improved product structure for graphs on surfaces", Discrete Mathematics & Theoretical Computer Science, 24 (2): Paper No. 6, arXiv:2112.10025, doi:10.46298/dmtcs.8877, MR 4504777, S2CID 245335306 15. Dujmović, Vida; Esperet, Louis; Morin, Pat; Walczak, Bartosz; Wood, David R. (2022), "Clustered 3-colouring graphs of bounded degree", Combinatorics, Probability and Computing, 31 (1): 123–135, arXiv:2002.11721, doi:10.1017/s0963548321000213, MR 4356460, S2CID 211532824 16. 
Dujmović, Vida; Morin, Pat; Wood, David R. (2019), Graph product structure for non-minor-closed classes, arXiv:1907.05168 17. Kozawa, Kyohei; Otachi, Yota; Yamazaki, Koichi (2014), "Lower bounds for treewidth of product graphs", Discrete Applied Mathematics, 162: 251–258, doi:10.1016/j.dam.2013.08.005, MR 3128527 18. Bonnet, Édouard; Geniet, Colin; Kim, Eun Jung; Thomassé, Stéphan; Watrigant, Rémi (2022), "Twin-width II: small classes", Combinatorial Theory, 2 (2): P10:1–P10:42, arXiv:2006.09877, doi:10.5070/C62257876, MR 4449818 19. Eppstein, David; Havvaei, Elham (2020), "Parameterized leaf power recognition via embedding into graph products", Algorithmica, 82 (8): 2337–2359, doi:10.1007/s00453-020-00720-8, MR 4132894, S2CID 254032445 20. Gethner, Ellen (2018), "To the Moon and beyond", in Gera, Ralucca; Haynes, Teresa W.; Hedetniemi, Stephen T. (eds.), Graph Theory: Favorite Conjectures and Open Problems, II, Problem Books in Mathematics, Springer International Publishing, pp. 115–133, doi:10.1007/978-3-319-97686-0_11, MR 3930641
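To make the adjacency rule in the definition above concrete, here is a short Python sketch (standard library only; the function and variable names are illustrative, not drawn from any cited source) that builds the strong product of two graphs given as adjacency dictionaries and checks that the strong product of two 3-vertex paths is the 3×3 king's graph.

```python
from itertools import product

def strong_product(adj_g, adj_h):
    """Strong product of graphs given as {vertex: set of neighbours} dicts.

    Distinct pairs (u, a) and (v, b) are adjacent when, in each coordinate,
    the two vertices are equal or adjacent -- the union of the Cartesian
    and tensor product edge rules.
    """
    verts = [(u, a) for u in adj_g for a in adj_h]
    adj = {v: set() for v in verts}
    for (u, a), (v, b) in product(verts, repeat=2):
        if (u, a) == (v, b):
            continue
        if (u == v or v in adj_g[u]) and (a == b or b in adj_h[a]):
            adj[(u, a)].add((v, b))
    return adj

# The strong product of P3 with itself is the 3x3 king's graph:
# the centre square attacks all eight others, each corner attacks three.
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
king = strong_product(p3, p3)
assert len(king[(1, 1)]) == 8
assert all(len(king[(r, c)]) == 3 for r in (0, 2) for c in (0, 2))
```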
Wikipedia
Mostow rigidity theorem In mathematics, Mostow's rigidity theorem, or strong rigidity theorem, or Mostow–Prasad rigidity theorem, essentially states that the geometry of a complete, finite-volume hyperbolic manifold of dimension greater than two is determined by the fundamental group and hence unique. The theorem was proven for closed manifolds by Mostow (1968) and extended to finite volume manifolds by Marden (1974) in 3 dimensions, and by Prasad (1973) in all dimensions at least 3. Gromov (1981) gave an alternate proof using the Gromov norm. Besson, Courtois & Gallot (1996) gave the simplest available proof. While the theorem shows that the deformation space of (complete) hyperbolic structures on a finite volume hyperbolic $n$-manifold (for $n>2$) is a point, for a hyperbolic surface of genus $g>1$ there is a moduli space of dimension $6g-6$ that parameterizes all metrics of constant curvature (up to diffeomorphism), a fact essential for Teichmüller theory. There is also a rich theory of deformation spaces of hyperbolic structures on infinite volume manifolds in three dimensions. The theorem The theorem can be given in a geometric formulation (pertaining to finite-volume, complete manifolds), and in an algebraic formulation (pertaining to lattices in Lie groups). Geometric form Let $\mathbb {H} ^{n}$ be the $n$-dimensional hyperbolic space. A complete hyperbolic manifold can be defined as a quotient of $\mathbb {H} ^{n}$ by a group of isometries acting freely and properly discontinuously (it is equivalent to define it as a complete Riemannian manifold with sectional curvature −1). It is of finite volume if the integral of a volume form is finite (which is the case, for example, if it is compact). The Mostow rigidity theorem may be stated as: Suppose $M$ and $N$ are complete finite-volume hyperbolic manifolds of dimension $n\geq 3$. If there exists an isomorphism $f\colon \pi _{1}(M)\to \pi _{1}(N)$ then it is induced by a unique isometry from $M$ to $N$. Here $\pi _{1}(X)$ is the fundamental group of a manifold $X$. If $X$ is a hyperbolic manifold obtained as the quotient of $\mathbb {H} ^{n}$ by a group $\Gamma $ then $\pi _{1}(X)\cong \Gamma $. An equivalent statement is that any homotopy equivalence from $M$ to $N$ can be homotoped to a unique isometry. The proof actually shows that if $N$ has greater dimension than $M$ then there can be no homotopy equivalence between them. Algebraic form The group of isometries of hyperbolic space $\mathbb {H} ^{n}$ can be identified with the Lie group $\mathrm {PO} (n,1)$ (the projective orthogonal group of a quadratic form of signature $(n,1)$). Then the following statement is equivalent to the one above. Let $n\geq 3$ and $\Gamma $ and $\Lambda $ be two lattices in $\mathrm {PO} (n,1)$ and suppose that there is a group isomorphism $f\colon \Gamma \to \Lambda $. Then $\Gamma $ and $\Lambda $ are conjugate in $\mathrm {PO} (n,1)$. That is, there exists a $g\in \mathrm {PO} (n,1)$ such that $\Lambda =g\Gamma g^{-1}$. In greater generality, Mostow rigidity holds (in its geometric formulation) for fundamental groups of all complete, finite volume, non-positively curved (without Euclidean factors) locally symmetric spaces of dimension at least three, or in its algebraic formulation for all lattices in simple Lie groups not locally isomorphic to $\mathrm {SL} _{2}(\mathbb {R} )$. 
Applications It follows from the Mostow rigidity theorem that the group of isometries of a finite-volume hyperbolic n-manifold M (for n>2) is finite and isomorphic to $\operatorname {Out} (\pi _{1}(M))$. Mostow rigidity was also used by Thurston to prove the uniqueness of circle packing representations of triangulated planar graphs. A consequence of Mostow rigidity of interest in geometric group theory is that there exist hyperbolic groups which are quasi-isometric but not commensurable to each other. See also • Superrigidity, a stronger result for higher-rank spaces • Local rigidity, a result about deformations that are not necessarily lattices. References • Besson, Gérard; Courtois, Gilles; Gallot, Sylvestre (1996), "Minimal entropy and Mostow's rigidity theorems", Ergodic Theory and Dynamical Systems, 16 (4): 623–649, doi:10.1017/S0143385700009019, S2CID 122773907 • Gromov, Michael (1981), "Hyperbolic manifolds (according to Thurston and Jørgensen)", Bourbaki Seminar, Vol. 1979/80 (PDF), Lecture Notes in Math., vol. 842, Berlin, New York: Springer-Verlag, pp. 40–53, doi:10.1007/BFb0089927, ISBN 978-3-540-10292-2, MR 0636516, archived from the original on 2016-01-10 • Marden, Albert (1974), "The geometry of finitely generated kleinian groups", Annals of Mathematics, Second Series, 99 (3): 383–462, doi:10.2307/1971059, ISSN 0003-486X, JSTOR 1971059, MR 0349992, Zbl 0282.30014 • Mostow, G. D. (1968), "Quasi-conformal mappings in n-space and the rigidity of the hyperbolic space forms", Publ. Math. IHÉS, 34: 53–104, doi:10.1007/bf02684590, S2CID 55916797 • Mostow, G. D. (1973), Strong rigidity of locally symmetric spaces, Annals of mathematics studies, vol. 78, Princeton University Press, ISBN 978-0-691-08136-6, MR 0385004 • Prasad, Gopal (1973), "Strong rigidity of Q-rank 1 lattices", Inventiones Mathematicae, 21 (4): 255–286, Bibcode:1973InMat..21..255P, doi:10.1007/BF01418789, ISSN 0020-9910, MR 0385005, S2CID 55739204 • Spatzier, R. J. (1995), "Harmonic Analysis in Rigidity Theory", in Petersen, Karl E.; Salama, Ibrahim A. (eds.), Ergodic Theory and its Connection with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference, Cambridge University Press, pp. 153–205, ISBN 0-521-45999-0. (Provides a survey of a large variety of rigidity theorems, including those concerning Lie groups, algebraic groups and dynamics of flows. Includes 230 references.) • Thurston, William (1978–1981), The geometry and topology of 3-manifolds, Princeton lecture notes. (Gives two proofs: one similar to Mostow's original proof, and another based on the Gromov norm)
Wikipedia
Ultrametric space In mathematics, an ultrametric space is a metric space in which the triangle inequality is strengthened to $d(x,z)\leq \max \left\{d(x,y),d(y,z)\right\}$. Sometimes the associated metric is also called a non-Archimedean metric or super-metric. Formal definition An ultrametric on a set M is a real-valued function $d\colon M\times M\rightarrow \mathbb {R} $ (where ℝ denotes the real numbers), such that for all x, y, z ∈ M: 1. d(x, y) ≥ 0; 2. d(x, y) = d(y, x) (symmetry); 3. d(x, x) = 0; 4. if d(x, y) = 0 then x = y; 5. d(x, z) ≤ max {d(x, y), d(y, z)} (strong triangle inequality or ultrametric inequality). An ultrametric space is a pair (M, d) consisting of a set M together with an ultrametric d on M, which is called the space's associated distance function (also called a metric). If d satisfies all of the conditions except possibly condition 4 then d is called an ultrapseudometric on M. An ultrapseudometric space is a pair (M, d) consisting of a set M and an ultrapseudometric d on M.[1] In the case when M is an Abelian group (written additively) and d is generated by a length function $\|\cdot \|$ (so that $d(x,y)=\|x-y\|$), the last property can be made stronger using the Krull sharpening to: $\|x+y\|\leq \max \left\{\|x\|,\|y\|\right\}$ with equality if $\|x\|\neq \|y\|$. We want to prove that if $\|x+y\|\leq \max \left\{\|x\|,\|y\|\right\}$, then the equality occurs if $\|x\|\neq \|y\|$. Without loss of generality, let us assume that $\|x\|>\|y\|$. This implies that $\|x+y\|\leq \|x\|$. But we can also compute $\|x\|=\|(x+y)-y\|\leq \max \left\{\|x+y\|,\|y\|\right\}$. Now, the value of $\max \left\{\|x+y\|,\|y\|\right\}$ cannot be $\|y\|$, for if that is the case, we have $\|x\|\leq \|y\|$ contrary to the initial assumption. Thus, $\max \left\{\|x+y\|,\|y\|\right\}=\|x+y\|$, and $\|x\|\leq \|x+y\|$. Using the initial inequality, we have $\|x\|\leq \|x+y\|\leq \|x\|$ and therefore $\|x+y\|=\|x\|$. Properties From the above definition, one can conclude several typical properties of ultrametrics. For example, for all $x,y,z\in M$, at least one of the three equalities $d(x,y)=d(y,z)$ or $d(x,z)=d(y,z)$ or $d(x,y)=d(z,x)$ holds. That is, every triple of points in the space forms an isosceles triangle, so the whole space is an isosceles set. Defining the (open) ball of radius $r>0$ centred at $x\in M$ as $B(x;r):=\{y\in M\mid d(x,y)<r\}$, we have the following properties: • Every point inside a ball is its center, i.e. if $d(x,y)<r$ then $B(x;r)=B(y;r)$. • Intersecting balls are contained in each other, i.e. if $B(x;r)\cap B(y;s)$ is non-empty then either $B(x;r)\subseteq B(y;s)$ or $B(y;s)\subseteq B(x;r)$. • All balls of strictly positive radius are both open and closed sets in the induced topology. That is, open balls are also closed, and closed balls (replace $<$ with $\leq $) are also open. • The set of all open balls with radius $r$ and center in a closed ball of radius $r>0$ forms a partition of the latter, and the mutual distance of two distinct open balls is (greater or) equal to $r$. Proving these statements is an instructive exercise.[2] All directly derive from the ultrametric triangle inequality. Note that, by the second statement, a ball may have several center points that have non-zero distance. The intuition behind such seemingly strange effects is that, due to the strong triangle inequality, distances in ultrametrics do not add up. Examples • The discrete metric is an ultrametric. • The p-adic numbers form a complete ultrametric space. 
• Consider the set of words of arbitrary length (finite or infinite), Σ*, over some alphabet Σ. Define the distance between two different words to be $2^{-n}$, where n is the first place at which the words differ. The resulting metric is an ultrametric. • The set of words with glued ends of length n over some alphabet Σ is an ultrametric space with respect to the p-close distance. Two words x and y are p-close if any substring of p consecutive letters (p < n) appears the same number of times (which could also be zero) both in x and y.[3] • If $r=(r_{n})$ is a sequence of real numbers decreasing to zero, then $|x|_{r}:=\limsup _{n\to \infty }|x_{n}|^{r_{n}}$ induces an ultrametric on the space of all complex sequences for which it is finite. (Note that this is not a seminorm since it lacks homogeneity. If the $r_{n}$ are allowed to be zero, one should use here the rather unusual convention that $0^{0}=0$.) • If G is an edge-weighted undirected graph, all edge weights are positive, and d(u,v) is the weight of the minimax path between u and v (that is, the largest weight of an edge, on a path chosen to minimize this largest weight), then the vertices of the graph, with distance measured by d, form an ultrametric space, and all finite ultrametric spaces may be represented in this way.[4] Applications • A contraction mapping on a complete ultrametric space may be thought of as a way of approximating the final result of a computation (whose existence is guaranteed by the Banach fixed-point theorem). Similar ideas can be found in domain theory. p-adic analysis makes heavy use of the ultrametric nature of the p-adic metric. • In condensed matter physics, the self-averaging overlap between spins in the SK model of spin glasses exhibits an ultrametric structure, with the solution given by the full replica symmetry breaking procedure first outlined by Giorgio Parisi and coworkers.[5] Ultrametricity also appears in the theory of aperiodic solids.[6] • In taxonomy and phylogenetic tree construction, ultrametric distances are also utilized by the UPGMA and WPGMA methods.[7] These algorithms require a constant-rate assumption and produce trees in which the distances from the root to every branch tip are equal. When DNA, RNA and protein data are analyzed, the ultrametricity assumption is called the molecular clock. • Models of intermittency in three-dimensional turbulence of fluids make use of so-called cascades; discrete models of dyadic cascades have an ultrametric structure.[8] • In geography and landscape ecology, ultrametric distances have been applied to measure landscape complexity and to assess the extent to which one landscape function is more important than another.[9] References 1. Narici & Beckenstein 2011, pp. 1–18. 2. "Ultrametric Triangle Inequality". Stack Exchange. 3. Osipov, Gutkin (2013), "Clustering of periodic orbits in chaotic systems", Nonlinearity, 26 (26): 177–200, Bibcode:2013Nonli..26..177G, doi:10.1088/0951-7715/26/1/177. 4. Leclerc, Bruno (1981), "Description combinatoire des ultramétriques", Centre de Mathématique Sociale. École Pratique des Hautes Études. Mathématiques et Sciences Humaines (in French) (73): 5–37, 127, MR 0623034. 5. Mezard, M; Parisi, G; and Virasoro, M: SPIN GLASS THEORY AND BEYOND, World Scientific, 1986. ISBN 978-9971-5-0116-7 6. Rammal, R.; Toulouse, G.; Virasoro, M. (1986). "Ultrametricity for physicists". Reviews of Modern Physics. 58 (3): 765–788. Bibcode:1986RvMP...58..765R. doi:10.1103/RevModPhys.58.765. Retrieved 20 June 2011. 7. Legendre, P. and Legendre, L. 1998. Numerical Ecology. 
Second English Edition. Developments in Environmental Modelling 20. Elsevier, Amsterdam. 8. Benzi, R.; Biferale, L.; Trovatore, E. (1997). "Ultrametric Structure of Multiscale Energy Correlations in Turbulent Models". Physical Review Letters. 79 (9): 1670–1674. arXiv:chao-dyn/9705018. Bibcode:1997PhRvL..79.1670B. doi:10.1103/PhysRevLett.79.1670. S2CID 53120932. 9. Papadimitriou, Fivos (2013). "Mathematical modelling of land use and landscape complexity with ultrametric topology". Journal of Land Use Science. 8 (2): 234–254. doi:10.1080/1747423x.2011.637136. ISSN 1747-423X. S2CID 121927387. Bibliography • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Further reading • Kaplansky, I. (1977), Set Theory and Metric Spaces, AMS Chelsea Publishing, ISBN 978-0-8218-2694-2.
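The word metric from the examples above is easy to experiment with. The following Python sketch (function names are illustrative) implements the distance $2^{-n}$, with n the first position at which two words differ, and verifies the strong triangle inequality on all triples drawn from a small sample.

```python
from itertools import product, zip_longest

def word_distance(x, y):
    """Ultrametric on words: 2**(-n), where n is the first position
    (counted from 1) at which x and y differ; a shorter word is treated
    as differing from a longer one at the first position past its end."""
    if x == y:
        return 0.0
    for n, (a, b) in enumerate(zip_longest(x, y), start=1):
        if a != b:
            return 2.0 ** (-n)

words = ["", "a", "b", "ab", "abc", "abd", "ba"]
for x, y, z in product(words, repeat=3):
    assert word_distance(x, z) <= max(word_distance(x, y), word_distance(y, z))
```

Because the powers of two involved are represented exactly in binary floating point, the comparison needs no tolerance here.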
Wikipedia
Comparison of topologies In topology and related areas of mathematics, the set of all possible topologies on a given set forms a partially ordered set. This order relation can be used for comparison of the topologies. Definition A topology on a set may be defined as the collection of subsets which are considered to be "open". An alternative definition is that it is the collection of subsets which are considered "closed". These two ways of defining the topology are essentially equivalent because the complement of an open set is closed and vice versa. In the following, it doesn't matter which definition is used. Let τ1 and τ2 be two topologies on a set X such that τ1 is contained in τ2: $\tau _{1}\subseteq \tau _{2}$. That is, every element of τ1 is also an element of τ2. Then the topology τ1 is said to be a coarser (weaker or smaller) topology than τ2, and τ2 is said to be a finer (stronger or larger) topology than τ1. [nb 1] If additionally $\tau _{1}\neq \tau _{2}$ we say τ1 is strictly coarser than τ2 and τ2 is strictly finer than τ1.[1] The binary relation ⊆ defines a partial ordering relation on the set of all possible topologies on X. Examples The finest topology on X is the discrete topology; this topology makes all subsets open. The coarsest topology on X is the trivial topology; this topology only admits the empty set and the whole space as open sets. In function spaces and spaces of measures there are often a number of possible topologies. See topologies on the set of operators on a Hilbert space for some intricate relationships. All possible polar topologies on a dual pair are finer than the weak topology and coarser than the strong topology. The complex vector space Cn may be equipped with either its usual (Euclidean) topology, or its Zariski topology. In the latter, a subset V of Cn is closed if and only if it consists of all solutions to some system of polynomial equations. Since any such V also is a closed set in the ordinary sense, but not vice versa, the Zariski topology is strictly weaker than the ordinary one. Properties Let τ1 and τ2 be two topologies on a set X. Then the following statements are equivalent: • τ1 ⊆ τ2 • the identity map idX : (X, τ2) → (X, τ1) is a continuous map. • the identity map idX : (X, τ1) → (X, τ2) is a strongly/relatively open map. (The identity map idX is surjective and therefore it is strongly open if and only if it is relatively open.) Two immediate corollaries of the above equivalent statements are • A continuous map f : X → Y remains continuous if the topology on Y becomes coarser or the topology on X finer. • An open (resp. closed) map f : X → Y remains open (resp. closed) if the topology on Y becomes finer or the topology on X coarser. One can also compare topologies using neighborhood bases. Let τ1 and τ2 be two topologies on a set X and let Bi(x) be a local base for the topology τi at x ∈ X for i = 1,2. Then τ1 ⊆ τ2 if and only if for all x ∈ X, each open set U1 in B1(x) contains some open set U2 in B2(x). Intuitively, this makes sense: a finer topology should have smaller neighborhoods. Lattice of topologies The set of all topologies on a set X together with the partial ordering relation ⊆ forms a complete lattice that is also closed under arbitrary intersections.[2] That is, any collection of topologies on X have a meet (or infimum) and a join (or supremum). The meet of a collection of topologies is the intersection of those topologies. 
The join, however, is not generally the union of those topologies (the union of two topologies need not be a topology) but rather the topology generated by the union. Every complete lattice is also a bounded lattice, which is to say that it has a greatest and least element. In the case of topologies, the greatest element is the discrete topology and the least element is the trivial topology. The lattice of topologies on a set $X$ is a complemented lattice; that is, given a topology $\tau $ on $X$ there exists a topology $\tau '$ on $X$ such that the intersection $\tau \cap \tau '$ is the trivial topology and the topology generated by the union $\tau \cup \tau '$ is the discrete topology.[3][4] If the set $X$ has at least three elements, the lattice of topologies on $X$ is not modular,[5] and hence not distributive either. Notes 1. There are some authors, especially analysts, who use the terms weak and strong with opposite meaning (Munkres, p. 78). See also • Initial topology, the coarsest topology on a set to make a family of mappings from that set continuous • Final topology, the finest topology on a set to make a family of mappings into that set continuous References 1. Munkres, James R. (2000). Topology (2nd ed.). Saddle River, NJ: Prentice Hall. pp. 77–78. ISBN 0-13-181629-2. 2. Larson, Roland E.; Andima, Susan J. (1975). "The lattice of topologies: A survey". Rocky Mountain Journal of Mathematics. 5 (2): 177–198. doi:10.1216/RMJ-1975-5-2-177. 3. Steiner, A. K. (1966). "The lattice of topologies: Structure and complementation". Transactions of the American Mathematical Society. 122 (2): 379–398. doi:10.1090/S0002-9947-1966-0190893-2. 4. Van Rooij, A. C. M. (1968). "The Lattice of all Topologies is Complemented". Canadian Journal of Mathematics. 20: 805–807. doi:10.4153/CJM-1968-079-9. 5. Steiner 1966, Theorem 3.1.
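On a small finite set, the comparison and the lattice operations above can be computed by brute force. The following Python sketch (helper names are illustrative) represents a topology as a set of frozensets, checks that one topology is coarser than another, and forms the meet (plain intersection) and the join (the topology generated by the union) of two topologies on {0, 1, 2}.

```python
from itertools import chain, combinations

def is_topology(tau, X):
    """Open-set axioms; for a finite set, closure under pairwise
    unions and intersections suffices."""
    return (frozenset() in tau and frozenset(X) in tau
            and all(a & b in tau and a | b in tau for a in tau for b in tau))

def generated_topology(family, X):
    """Coarsest topology containing every set in `family`: close the
    family under finite intersections, then take all unions."""
    def subfamilies(sets):
        sets = list(sets)
        return chain.from_iterable(
            combinations(sets, r) for r in range(len(sets) + 1))
    base = set()
    for sub in subfamilies(family):
        inter = frozenset(X)
        for s in sub:
            inter &= s
        base.add(inter)            # the empty subfamily contributes X itself
    tau = set()
    for sub in subfamilies(base):
        union = frozenset()
        for s in sub:
            union |= s
        tau.add(union)             # the empty subfamily contributes the empty set
    return tau

X = {0, 1, 2}
tau1 = {frozenset(), frozenset({0}), frozenset(X)}
tau2 = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset(X)}
assert tau1 <= tau2                          # tau1 is coarser than tau2
meet = tau1 & tau2                           # the meet is the intersection
join = generated_topology(tau1 | tau2, X)    # the join must be generated
assert is_topology(meet, X) and is_topology(join, X)
```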
Wikipedia
Stronger uncertainty relations Heisenberg's uncertainty relation is one of the fundamental results in quantum mechanics.[1] Later, Robertson proved the uncertainty relation for two general non-commuting observables,[2] which was strengthened by Schrödinger.[3] However, conventional uncertainty relations such as the Robertson–Schrödinger relation cannot give a non-trivial bound for the product of variances of two incompatible observables, because the lower bound in the uncertainty inequalities can be null, and hence trivial, even for observables that are incompatible on the state of the system. The Heisenberg–Robertson–Schrödinger uncertainty relation was proved at the dawn of the quantum formalism and is ever-present in the teaching and research on quantum mechanics. After about 85 years, this problem was solved by Lorenzo Maccone and Arun K. Pati. The standard uncertainty relations are expressed in terms of the product of variances of the measurement results of the observables $A$ and $B$, and the product can be null even when one of the two variances is different from zero. However, the stronger uncertainty relations due to Maccone and Pati provide different uncertainty relations, based on the sum of variances, that are guaranteed to be nontrivial whenever the observables are incompatible on the state of the quantum system.[4] (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., He et al.,[5] and Ref.[6] due to Huang.) The Maccone–Pati uncertainty relations The Heisenberg–Robertson or Schrödinger uncertainty relations do not fully capture the incompatibility of observables in a given quantum state. The stronger uncertainty relations give non-trivial bounds on the sum of the variances for two incompatible observables. For two non-commuting observables $A$ and $B$ the first stronger uncertainty relation is given by $\Delta A^{2}+\Delta B^{2}\geq \pm i\langle \Psi |[A,B]|\Psi \rangle +|\langle \Psi |(A\pm iB)|{\bar {\Psi }}\rangle |^{2},$ where $\Delta A^{2}=\langle \Psi |A^{2}|\Psi \rangle -\langle \Psi |A|\Psi \rangle ^{2}$, $\Delta B^{2}=\langle \Psi |B^{2}|\Psi \rangle -\langle \Psi |B|\Psi \rangle ^{2}$, $|{\bar {\Psi }}\rangle $ is a vector that is orthogonal to the state of the system, i.e., $\langle \Psi |{\bar {\Psi }}\rangle =0$, and one should choose the sign of $\pm i\langle \Psi |[A,B]|\Psi \rangle $ so that this is a positive number. The other non-trivial stronger uncertainty relation is given by $\Delta A^{2}+\Delta B^{2}\geq {\frac {1}{2}}|\langle {\bar {\Psi }}_{A+B}|(A+B)|\Psi \rangle |^{2},$ where $|{\bar {\Psi }}_{A+B}\rangle $ is a unit vector orthogonal to $|\Psi \rangle $. The form of $|{\bar {\Psi }}_{A+B}\rangle $ implies that the right-hand side of the new uncertainty relation is nonzero unless $|\Psi \rangle $ is an eigenstate of $(A+B)$. One can prove an improved version of the Heisenberg–Robertson uncertainty relation which reads as $\Delta A\Delta B\geq {\frac {\pm {\frac {i}{2}}\langle \Psi |[A,B]|\Psi \rangle }{1-{\frac {1}{2}}|\langle \Psi |({\frac {A}{\Delta A}}\pm i{\frac {B}{\Delta B}})|{\bar {\Psi }}\rangle |^{2}}}.$ The Heisenberg–Robertson uncertainty relation follows from the above uncertainty relation. Remarks In quantum theory, one should distinguish between the uncertainty relation and the uncertainty principle. 
The former refers solely to the preparation of the system which induces a spread in the measurement outcomes, and does not refer to the disturbance induced by the measurement. The uncertainty principle captures the measurement disturbance by the apparatus and the impossibility of joint measurements of incompatible observables. The Maccone–Pati uncertainty relations refer to preparation uncertainty relations. These relations set strong limitations for the nonexistence of common eigenstates for incompatible observables. The Maccone–Pati uncertainty relations have been experimentally tested for qutrit systems.[7] The new uncertainty relations not only capture the incompatibility of observables but also of quantities that are physically measurable (as variances can be measured in the experiment). References 1. Heisenberg, W. (1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Zeitschrift für Physik (in German). Springer Science and Business Media LLC. 43 (3–4): 172–198. Bibcode:1927ZPhy...43..172H. doi:10.1007/bf01397280. ISSN 1434-6001. S2CID 122763326. 2. Robertson, H. P. (1 July 1929). "The Uncertainty Principle". Physical Review. American Physical Society (APS). 34 (1): 163–164. Bibcode:1929PhRv...34..163R. doi:10.1103/physrev.34.163. ISSN 0031-899X. 3. E. Schrödinger, "Sitzungsberichte der Preussischen Akademie der Wissenschaften", Physikalisch-mathematische Klasse 14, 296 (1930) 4. Maccone, Lorenzo; Pati, Arun K. (31 December 2014). "Stronger Uncertainty Relations for All Incompatible Observables". Physical Review Letters. 113 (26): 260401. arXiv:1407.0338. Bibcode:2014PhRvL.113z0401M. doi:10.1103/physrevlett.113.260401. ISSN 0031-9007. PMID 25615288. 5. He, Qiongyi; Peng, Shi-Guo; Drummond, Peter; Reid, Margaret (10 August 2011). "Planar quantum squeezing and atom interferometry". Physical Review A. 84 (2): 022107. arXiv:1101.0448. Bibcode:2011PhRvA..84b2107H. doi:10.1103/PhysRevA.84.022107. S2CID 7885824. 6. Huang, Yichen (10 August 2012). "Variance-based uncertainty relations". Physical Review A. 86 (2): 024101. arXiv:1012.3105. Bibcode:2012PhRvA..86b4101H. doi:10.1103/PhysRevA.86.024101. S2CID 118507388. 7. Wang, Kunkun; Zhan, Xiang; Bian, Zhihao; Li, Jian; Zhang, Yongsheng; Xue, Peng (11 May 2016). "Experimental investigation of the stronger uncertainty relations for all incompatible observables". Physical Review A. 93 (5): 052108. arXiv:1604.05901. Bibcode:2016PhRvA..93e2108W. doi:10.1103/physreva.93.052108. ISSN 2469-9926. S2CID 118404774. Other sources • Research Highlight, NATURE ASIA, 19 January 2015, "Heisenberg's uncertainty relation gets stronger" [1] 1. "Heisenberg's uncertainty relation gets stronger". Nature India. 2015. doi:10.1038/nindia.2015.6.
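The first Maccone–Pati relation is easy to check numerically. The sketch below (Python with NumPy; the state and observables are an illustrative choice, not taken from the references) evaluates both sides for $A=\sigma_x$, $B=\sigma_y$ on the qubit state $|0\rangle$, taking $|{\bar \Psi}\rangle = |1\rangle$; for this particular choice the inequality happens to be saturated.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)      # Pauli X
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)   # Pauli Y
psi = np.array([1, 0], dtype=complex)               # |0>
psi_bar = np.array([0, 1], dtype=complex)           # |1>, orthogonal to |0>

def variance(op, state):
    mean = np.vdot(state, op @ state).real
    mean_sq = np.vdot(state, op @ op @ state).real
    return mean_sq - mean ** 2

lhs = variance(sx, psi) + variance(sy, psi)
comm = sx @ sy - sy @ sx
for sign in (+1, -1):
    # Choose the sign that makes the commutator term positive,
    # as the relation requires; the same sign enters A + sign*i*B.
    comm_term = (sign * 1j * np.vdot(psi, comm @ psi)).real
    if comm_term >= 0:
        overlap = np.vdot(psi, (sx + sign * 1j * sy) @ psi_bar)
        rhs = comm_term + abs(overlap) ** 2
        assert lhs >= rhs - 1e-12
        print(lhs, rhs)  # prints 2.0 2.0: the bound is saturated here
```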
Wikipedia
Strong NP-completeness In computational complexity, strong NP-completeness is a property of computational problems that is a special case of NP-completeness. A general computational problem may have numerical parameters. For example, the input to the bin packing problem is a list of objects of specific sizes and a size for the bins that must contain the objects—these object sizes and bin size are numerical parameters. A problem is said to be strongly NP-complete (NP-complete in the strong sense), if it remains NP-complete even when all of its numerical parameters are bounded by a polynomial in the length of the input.[1] A problem is said to be strongly NP-hard if a strongly NP-complete problem has a polynomial reduction to it; in combinatorial optimization, particularly, the phrase "strongly NP-hard" is reserved for problems that are not known to have a polynomial reduction to another strongly NP-complete problem. Normally numerical parameters to a problem are given in positional notation, so a problem of input size n might contain parameters whose size is exponential in n. If we redefine the problem to have the parameters given in unary notation, then the parameters must be bounded by the input size. Thus strong NP-completeness or NP-hardness may also be defined as the NP-completeness or NP-hardness of this unary version of the problem. For example, bin packing is strongly NP-complete while the 0-1 Knapsack problem is only weakly NP-complete. Thus the version of bin packing where the object and bin sizes are integers bounded by a polynomial remains NP-complete, while the corresponding version of the Knapsack problem can be solved in pseudo-polynomial time by dynamic programming. From a theoretical perspective any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have a fully polynomial-time approximation scheme (or FPTAS) unless P = NP.[2][3] However, the converse fails: e.g. if P does not equal NP, knapsack with two constraints is not strongly NP-hard, but has no FPTAS even when the optimal objective is polynomially bounded.[4] Some strongly NP-complete problems may still be easy to solve on average, but it's more likely that difficult instances will be encountered in practice. Strong and weak NP-hardness vs. strong and weak polynomial-time algorithms Assuming P ≠ NP, the following are true for computational problems on integers:[5] • If a problem is weakly NP-hard, then it does not have a weakly polynomial time algorithm (polynomial in the number of integers and the number of bits in the largest integer), but it may have a pseudo-polynomial time algorithm (polynomial in the number of integers and the magnitude of the largest integer). An example is the partition problem. Both weak NP-hardness and weak polynomial time correspond to encoding the input numbers in binary. • If a problem is strongly NP-hard, then it does not even have a pseudo-polynomial time algorithm. It also does not have a fully polynomial-time approximation scheme. An example is the 3-partition problem. Both strong NP-hardness and pseudo-polynomial time correspond to encoding the input numbers in unary. References 1. Garey, M. R.; Johnson, D. S. (July 1978). "'Strong' NP-Completeness Results: Motivation, Examples, and Implications". Journal of the Association for Computing Machinery. New York, NY: ACM. 25 (3): 499–508. doi:10.1145/322077.322090. ISSN 0004-5411. MR 0478747. S2CID 18371269. 2. Vazirani, Vijay V. (2003). Approximation Algorithms. Berlin: Springer. pp. 
294–295. ISBN 3-540-65367-8. MR 1851303. 3. Garey, M. R.; Johnson, D. S. (1979). Victor Klee (ed.). Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. San Francisco, Calif.: W. H. Freeman and Co. pp. x+338. ISBN 0-7167-1045-5. MR 0519066. 4. H. Kellerer; U. Pferschy; D. Pisinger (2004). Knapsack Problems. Springer. 5. Demaine, Erik. "Algorithmic Lower Bounds: Fun with Hardness Proofs, Lecture 2".
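The knapsack-style dynamic programming mentioned above illustrates why such problems are only weakly NP-complete. The following Python sketch solves subset sum (the decision core of 0-1 knapsack) in O(n·T) time for target T: polynomial in the magnitude of T, and therefore exponential in its binary length.

```python
def subset_sum(values, target):
    """Pseudo-polynomial dynamic program: reachable[s] becomes True when
    some subset of the values seen so far sums to s.  Running time
    O(len(values) * target) -- polynomial only if target is given in unary."""
    reachable = [True] + [False] * target
    for v in values:
        # Descend so each value is used at most once (0-1, not unbounded).
        for s in range(target, v - 1, -1):
            reachable[s] = reachable[s] or reachable[s - v]
    return reachable[target]

assert subset_sum([3, 34, 4, 12, 5, 2], 9)        # 4 + 5 = 9
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)   # no subset sums to 30
```

No analogous trick is possible for a strongly NP-complete problem such as 3-partition, whose hardness persists even with unary-encoded inputs.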
Wikipedia
Strongly chordal graph In the mathematical area of graph theory, an undirected graph G is strongly chordal if it is a chordal graph and every cycle of even length (≥ 6) in G has an odd chord, i.e., an edge that connects two vertices that are an odd distance (>1) apart from each other in the cycle.[1] Characterizations Strongly chordal graphs have a forbidden subgraph characterization as the graphs that do not contain an induced cycle of length greater than three or an n-sun (n ≥ 3) as an induced subgraph.[2] An n-sun is a chordal graph with 2n vertices, partitioned into two subsets U = {u1, u2,...} and W = {w1, w2,...}, such that each vertex wi in W has exactly two neighbors, ui and ui+1 (indices taken modulo n). An n-sun cannot be strongly chordal, because the cycle u1w1u2w2... has no odd chord. Strongly chordal graphs may also be characterized as the graphs having a strong perfect elimination ordering, an ordering of the vertices such that the neighbors of any vertex that come later in the ordering form a clique and such that, for each i < j < k < l, if the ith vertex in the ordering is adjacent to the kth and the lth vertices, and the jth and kth vertices are adjacent, then the jth and lth vertices must also be adjacent.[3] A graph is strongly chordal if and only if every one of its induced subgraphs has a simple vertex, a vertex whose neighbors have neighborhoods that are linearly ordered by inclusion.[4] Also, a graph is strongly chordal if and only if it is chordal and every cycle of length five or more has a 2-chord triangle, a triangle formed by two chords and an edge of the cycle.[5] A graph is strongly chordal if and only if each of its induced subgraphs is a dually chordal graph.[6] Strongly chordal graphs may also be characterized in terms of the number of complete subgraphs each edge participates in.[7] Yet another characterization is given by De Caria and McKee.[8] Recognition It is possible to determine whether a graph is strongly chordal in polynomial time, by repeatedly searching for and removing a simple vertex. If this process eliminates all vertices in the graph, the graph must be strongly chordal; otherwise, if this process finds a subgraph without any more simple vertices, the original graph cannot be strongly chordal. For a strongly chordal graph, the order in which the vertices are removed by this process is a strong perfect elimination ordering.[9] Alternative algorithms are now known that can determine whether a graph is strongly chordal and, if so, construct a strong perfect elimination ordering more efficiently, in time O(min(n², (n + m) log n)) for a graph with n vertices and m edges.[10] Subclasses An important subclass (based on phylogeny) is the class of k-leaf powers, the graphs formed from the leaves of a tree by connecting two leaves by an edge when their distance in the tree is at most k. A leaf power is a graph that is a k-leaf power for some k. Since powers of strongly chordal graphs are strongly chordal and trees are strongly chordal, it follows that leaf powers are strongly chordal. They form a proper subclass of strongly chordal graphs, which in turn includes the cluster graphs as the 2-leaf powers.[11] Another important subclass of strongly chordal graphs are the interval graphs. Interval graphs and the larger class of rooted directed path graphs are known to be leaf powers.[12] 
Algorithmic problems Since strongly chordal graphs are both chordal graphs and dually chordal graphs, various NP-complete problems such as Independent Set, Clique, Coloring, Clique Cover, Dominating Set, and Steiner Tree can be solved efficiently for strongly chordal graphs. Graph isomorphism is isomorphism-complete for strongly chordal graphs.[13] Hamiltonian Circuit remains NP-complete for strongly chordal split graphs.[14] Notes 1. Brandstädt, Le & Spinrad (1999), Definition 3.4.1, p. 43. 2. Chang (1982); Farber (1983); Brandstädt, Le & Spinrad (1999), Theorem 7.2.1, p. 112. 3. Farber (1983); Brandstädt, Le & Spinrad (1999), Theorem 5.5.1, p. 77. 4. Farber (1983); Brandstädt, Le & Spinrad (1999), Theorem 5.5.2, p. 78. 5. Dahlhaus, Manuel & Miller (1998). 6. Brandstädt et al. (1998), Corollary 3, p. 444 7. McKee (1999) 8. De Caria & McKee (2014) 9. Farber (1983). 10. Lubiw (1987); Paige & Tarjan (1987); Spinrad (1993). 11. Nishimura, Ragde & Thilikos (2002) 12. Brandstädt et al. (2010) 13. Uehara, Toda & Nagoya (2005) 14. Müller (1996) References • Brandstädt, Andreas; Dragan, Feodor; Chepoi, Victor; Voloshin, Vitaly (1998), "Dually Chordal Graphs", SIAM Journal on Discrete Mathematics, 11 (3): 437–455, doi:10.1137/s0895480193253415. • Brandstädt, Andreas; Hundt, Christian; Mancini, Federico; Wagner, Peter (2010), "Rooted directed path graphs are leaf powers", Discrete Mathematics, 310 (4): 897–910, doi:10.1016/j.disc.2009.10.006. • Brandstädt, Andreas; Le, Van Bang (2006), "Structure and linear time recognition of 3-leaf powers", Information Processing Letters, 98 (4): 133–138, doi:10.1016/j.ipl.2006.01.004. • Brandstädt, Andreas; Le, Van Bang; Sritharan, R. (2008), "Structure and linear time recognition of 4-leaf powers", ACM Transactions on Algorithms, 5: Article 11, doi:10.1145/1435375.1435386, S2CID 6114466. • Brandstädt, Andreas; Le, Van Bang; Spinrad, Jeremy (1999), Graph Classes: A Survey, SIAM Monographs on Discrete Mathematics and Applications, ISBN 0-89871-432-X. • Chang, G. J. (1982), K-domination and Graph Covering Problems, Ph.D. thesis, Cornell University. • Dahlhaus, E.; Manuel, P. D.; Miller, M. (1998), "A characterization of strongly chordal graphs", Discrete Mathematics, 187 (1–3): 269–271, doi:10.1016/S0012-365X(97)00268-9. • De Caria, P.; McKee, T.A. (2014), "Maxclique and unit disk characterizations of strongly chordal graphs", Discussiones Mathematicae Graph Theory, 34 (3): 593–602, doi:10.7151/dmgt.1757. • Farber, M. (1983), "Characterizations of strongly chordal graphs", Discrete Mathematics, 43 (2–3): 173–189, doi:10.1016/0012-365X(83)90154-1. • Lubiw, A. (1987), "Doubly lexical orderings of matrices", SIAM Journal on Computing, 16 (5): 854–879, doi:10.1137/0216057. • McKee, T. A. (1999), "A new characterization of strongly chordal graphs", Discrete Mathematics, 205 (1–3): 245–247, doi:10.1016/S0012-365X(99)00107-7. • Müller, H. (1996), "Hamiltonian Circuits in Chordal Bipartite Graphs", Discrete Mathematics, 156 (1–3): 291–298, doi:10.1016/0012-365x(95)00057-4. • Nishimura, N.; Ragde, P.; Thilikos, D.M. (2002), "On graph powers for leaf-labeled trees", Journal of Algorithms, 42: 69–108, doi:10.1006/jagm.2001.1195. • Paige, R.; Tarjan, R. E. (1987), "Three partition refinement algorithms", SIAM Journal on Computing, 16 (6): 973–989, doi:10.1137/0216062, S2CID 33265037. • Rautenbach, D. (2006), "Some remarks about leaf roots", Discrete Mathematics, 306 (13): 1456–1461, doi:10.1016/j.disc.2006.03.030. • Spinrad, J. 
(1993), "Doubly lexical ordering of dense 0–1 matrices", Information Processing Letters, 45 (2): 229–235, doi:10.1016/0020-0190(93)90209-R. • Uehara, R.; Toda, S.; Nagoya, T. (2005), "Graph isomorphism completeness for chordal bipartite and strongly chordal graphs", Discrete Applied Mathematics, 145 (3): 479–482, doi:10.1016/j.dam.2004.06.008.
Wikipedia
Strongly compact cardinal In set theory, a branch of mathematics, a strongly compact cardinal is a certain kind of large cardinal. A cardinal κ is strongly compact if and only if every κ-complete filter can be extended to a κ-complete ultrafilter. Strongly compact cardinals were originally defined in terms of infinitary logic, where logical operators are allowed to take infinitely many operands. The logic on a regular cardinal κ is defined by requiring the number of operands for each operator to be less than κ; then κ is strongly compact if its logic satisfies an analog of the compactness property of finitary logic. Specifically, a statement which follows from some other collection of statements should also follow from some subcollection having cardinality less than κ. The property of strong compactness may be weakened by only requiring this compactness property to hold when the original collection of statements has cardinality below a certain cardinal λ; we may then refer to λ-compactness. A cardinal is weakly compact if and only if it is κ-compact; this was the original definition of that concept. Strong compactness implies measurability, and is implied by supercompactness. Given that the relevant cardinals exist, it is consistent with ZFC either that the first measurable cardinal is strongly compact, or that the first strongly compact cardinal is supercompact; these cannot both be true, however. A measurable limit of strongly compact cardinals is strongly compact, but the least such limit is not supercompact. The consistency strength of strong compactness is strictly above that of a Woodin cardinal. Some set theorists conjecture that existence of a strongly compact cardinal is equiconsistent with that of a supercompact cardinal. However, a proof is unlikely until a canonical inner model theory for supercompact cardinals is developed. Extendibility is a second-order analog of strong compactness. See also • List of large cardinal properties References • Drake, F. R. (1974). Set Theory: An Introduction to Large Cardinals (Studies in Logic and the Foundations of Mathematics ; V. 76). Elsevier Science Ltd. ISBN 0-444-10535-2.
Wikipedia
Strongly connected component In the mathematical theory of directed graphs, a graph is said to be strongly connected if every vertex is reachable from every other vertex. The strongly connected components of an arbitrary directed graph form a partition into subgraphs that are themselves strongly connected. It is possible to test the strong connectivity of a graph, or to find its strongly connected components, in linear time (that is, Θ(V + E)). Definitions A directed graph is called strongly connected if there is a path in each direction between each pair of vertices of the graph. That is, a path exists from the first vertex in the pair to the second, and another path exists from the second vertex to the first. In a directed graph G that may not itself be strongly connected, a pair of vertices u and v are said to be strongly connected to each other if there is a path in each direction between them. The binary relation of being strongly connected is an equivalence relation, and the induced subgraphs of its equivalence classes are called strongly connected components. Equivalently, a strongly connected component of a directed graph G is a subgraph that is strongly connected, and is maximal with this property: no additional edges or vertices from G can be included in the subgraph without breaking its property of being strongly connected. The collection of strongly connected components forms a partition of the set of vertices of G. A strongly connected component $C$ is called trivial when $C$ consists of a single vertex which is not connected to itself with an edge and non-trivial otherwise.[1] If each strongly connected component is contracted to a single vertex, the resulting graph is a directed acyclic graph, the condensation of G. A directed graph is acyclic if and only if it has no strongly connected subgraphs with more than one vertex, because a directed cycle is strongly connected and every non-trivial strongly connected component contains at least one directed cycle. Algorithms DFS-based linear-time algorithms Several algorithms based on depth-first search compute strongly connected components in linear time. • Kosaraju's algorithm uses two passes of depth-first search. The first, in the original graph, is used to choose the order in which the outer loop of the second depth-first search tests vertices for having been visited already and recursively explores them if not. The second depth-first search is on the transpose graph of the original graph, and each recursive exploration finds a single new strongly connected component.[2][3] It is named after S. Rao Kosaraju, who described it (but did not publish his results) in 1978; Micha Sharir later published it in 1981.[4] • Tarjan's strongly connected components algorithm, published by Robert Tarjan in 1972,[5] performs a single pass of depth-first search. It maintains a stack of vertices that have been explored by the search but not yet assigned to a component, and calculates "low numbers" of each vertex (an index number of the highest ancestor reachable in one step from a descendant of the vertex) which it uses to determine when a set of vertices should be popped off the stack into a new component. 
• The path-based strong component algorithm uses a depth-first search, like Tarjan's algorithm, but with two stacks. One of the stacks is used to keep track of the vertices not yet assigned to components, while the other keeps track of the current path in the depth-first search tree. The first linear time version of this algorithm was published by Edsger W. Dijkstra in 1976.[6] Although Kosaraju's algorithm is conceptually simple, Tarjan's and the path-based algorithm require only one depth-first search rather than two. Reachability-based algorithms Previous linear-time algorithms are based on depth-first search which is generally considered hard to parallelize. Fleischer et al.[7] in 2000 proposed a divide-and-conquer approach based on reachability queries, and such algorithms are usually called reachability-based SCC algorithms. The idea of this approach is to pick a random pivot vertex and apply forward and backward reachability queries from this vertex. The two queries partition the vertex set into 4 subsets: vertices reached by both, either one, or none of the searches. One can show that a strongly connected component has to be contained in one of the subsets. The vertex subset reached by both searches forms a strongly connected component, and the algorithm then recurses on the other 3 subsets. The expected sequential running time of this algorithm is shown to be O(n log n), a factor of O(log n) more than the classic algorithms. The parallelism comes from: (1) the reachability queries can be parallelized more easily (e.g. by a breadth-first search (BFS), and it can be fast if the diameter of the graph is small); and (2) the independence between the subtasks in the divide-and-conquer process. This algorithm performs well on real-world graphs,[3] but does not have a theoretical guarantee on the parallelism (for instance, if a graph has no edges, the algorithm requires O(n) levels of recursion). Blelloch et al.[8] in 2016 show that if the reachability queries are applied in a random order, the cost bound of O(n log n) still holds. Furthermore, the queries then can be batched in a prefix-doubling manner (i.e. 1, 2, 4, 8 queries) and run simultaneously in one round. The overall span of this algorithm is log² n reachability queries, which is probably the optimal parallelism that can be achieved using the reachability-based approach. Generating random strongly connected graphs Peter M. Maurer describes an algorithm for generating random strongly connected graphs,[9] based on a modification of an algorithm for strong connectivity augmentation, the problem of adding as few edges as possible to make a graph strongly connected. When used in conjunction with the Gilbert or Erdős–Rényi models with node relabelling, the algorithm is capable of generating any strongly connected graph on n nodes, without restriction on the kinds of structures that can be generated. 
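As a compact illustration of the two-pass approach described above, the following Python sketch (a plain recursive Kosaraju implementation, suitable only for small graphs because of Python's recursion limit) computes the components of a graph with two non-trivial strongly connected components.

```python
from collections import defaultdict

def kosaraju_scc(vertices, edges):
    """Kosaraju's algorithm: DFS once to record finish order, then DFS
    the transpose graph in decreasing finish order; each second-pass
    tree is exactly one strongly connected component."""
    graph, transpose = defaultdict(list), defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        transpose[v].append(u)

    visited, order = set(), []
    def dfs_finish(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs_finish(v)
        order.append(u)            # u is finished
    for u in vertices:
        if u not in visited:
            dfs_finish(u)

    assigned, components = set(), []
    def dfs_collect(u, comp):
        assigned.add(u)
        comp.append(u)
        for v in transpose[u]:
            if v not in assigned:
                dfs_collect(v, comp)
    for u in reversed(order):      # decreasing finish time
        if u not in assigned:
            components.append([])
            dfs_collect(u, components[-1])
    return components

# {a, b, c} and {d, e} are the strongly connected components.
edges = [('a', 'b'), ('b', 'c'), ('c', 'a'),
         ('c', 'd'), ('d', 'e'), ('e', 'd')]
comps = kosaraju_scc('abcde', edges)
assert sorted(map(sorted, comps)) == [['a', 'b', 'c'], ['d', 'e']]
```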
Applications Algorithms for finding strongly connected components may be used to solve 2-satisfiability problems (systems of Boolean variables with constraints on the values of pairs of variables): as Aspvall, Plass & Tarjan (1979) showed, a 2-satisfiability instance is unsatisfiable if and only if there is a variable v such that v and its complement are both contained in the same strongly connected component of the implication graph of the instance.[10] Strongly connected components are also used to compute the Dulmage–Mendelsohn decomposition, a classification of the edges of a bipartite graph, according to whether or not they can be part of a perfect matching in the graph.[11] Related results A directed graph is strongly connected if and only if it has an ear decomposition, a partition of the edges into a sequence of directed paths and cycles such that the first subgraph in the sequence is a cycle, and each subsequent subgraph is either a cycle sharing one vertex with previous subgraphs, or a path sharing its two endpoints with previous subgraphs. According to Robbins' theorem, an undirected graph may be oriented in such a way that it becomes strongly connected, if and only if it is 2-edge-connected. One way to prove this result is to find an ear decomposition of the underlying undirected graph and then orient each ear consistently.[12] See also • Clique (graph theory) • Connected component (graph theory) • Modular decomposition • Weak component References 1. https://www.fi.muni.cz/reports/files/2010/FIMU-RS-2010-10.pdf 2. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 22.5, pp. 552–557. 3. Hong, Sungpack; Rodia, Nicole C.; Olukotun, Kunle (2013), "On fast parallel detection of strongly connected components (SCC) in small-world graphs" (PDF), Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis - SC '13, pp. 1–11, doi:10.1145/2503210.2503246, ISBN 9781450323789, S2CID 2156324 4. Sharir, Micha (1981), "A strong-connectivity algorithm and its applications in data flow analysis", Computers & Mathematics with Applications, 7: 67–72, doi:10.1016/0898-1221(81)90008-0 5. Tarjan, R. E. (1972), "Depth-first search and linear graph algorithms", SIAM Journal on Computing, 1 (2): 146–160, doi:10.1137/0201010 6. Dijkstra, Edsger (1976), A Discipline of Programming, NJ: Prentice Hall, Ch. 25. 7. Fleischer, Lisa K.; Hendrickson, Bruce; Pınar, Ali (2000), "On Identifying Strongly Connected Components in Parallel" (PDF), Parallel and Distributed Processing, Lecture Notes in Computer Science, vol. 1800, pp. 505–511, doi:10.1007/3-540-45591-4_68, ISBN 978-3-540-67442-9 8. Blelloch, Guy E.; Gu, Yan; Shun, Julian; Sun, Yihan (2016), "Parallelism in Randomized Incremental Algorithms" (PDF), Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures - SPAA '16, pp. 467–478, doi:10.1145/2935764.2935766, ISBN 9781450342100. 9. Maurer, P. M. (February 2018), Generating strongly connected random graphs (PDF), Int'l Conf. Modeling, Sim. and Vis. Methods MSV'17, CSREA Press, ISBN 978-1-60132-465-8, retrieved December 27, 2019 10. Aspvall, Bengt; Plass, Michael F.; Tarjan, Robert E. (1979), "A linear-time algorithm for testing the truth of certain quantified boolean formulas", Information Processing Letters, 8 (3): 121–123, doi:10.1016/0020-0190(79)90002-4. 11. Dulmage, A. L. & Mendelsohn, N. S. 
(1958), "Coverings of bipartite graphs", Can. J. Math., 10: 517–534, doi:10.4153/cjm-1958-052-0, S2CID 123363425. 12. Robbins, H. E. (1939), "A theorem on graphs, with an application to a problem on traffic control", American Mathematical Monthly, 46 (5): 281–283, doi:10.2307/2303897, JSTOR 2303897. External links • Java implementation for computation of strongly connected components in the jBPT library (see StronglyConnectedComponents class). • C++ implementation of Strongly Connected Components
Wikipedia
Minimal model (set theory) In set theory, a branch of mathematics, the minimal model is the minimal standard model of ZFC. The minimal model was introduced by Shepherdson (1951, 1952, 1953) and rediscovered by Cohen (1963). The existence of a minimal model cannot be proved in ZFC, even assuming that ZFC is consistent, but follows from the existence of a standard model as follows. If there is a set W in the von Neumann universe V that is a standard model of ZF, and the ordinal κ is the set of ordinals that occur in W, then Lκ is the class of constructible sets of W. If there is a set that is a standard model of ZF, then the smallest such set is such a Lκ. This set is called the minimal model of ZFC, and also satisfies the axiom of constructibility V=L. The downward Löwenheim–Skolem theorem implies that the minimal model (if it exists as a set) is a countable set. More precisely, every element s of the minimal model can be named; in other words there is a first-order sentence φ(x) such that s is the unique element of the minimal model for which φ(s) is true. Cohen (1963) gave another construction of the minimal model as the strongly constructible sets, using a modified form of Gödel's constructible universe. Of course, any consistent theory must have a model, so even within the minimal model of set theory there are sets that are models of ZFC (assuming ZFC is consistent). However, these set models are non-standard. In particular, they do not use the normal membership relation and they are not well-founded. If there is no standard model then the minimal model cannot exist as a set. However in this case the class of all constructible sets plays the same role as the minimal model and has similar properties (though it is now a proper class rather than a countable set). The minimal model of set theory has no inner models other than itself. In particular it is not possible to use the method of inner models to prove that any given statement true in the minimal model (such as the continuum hypothesis) is not provable in ZFC. References • Cohen, Paul J. (1963), "A minimal model for set theory", Bull. Amer. Math. Soc., 69: 537–540, doi:10.1090/S0002-9904-1963-10989-1, MR 0150036 • Shepherdson, J. C. (1951), "Inner models for set theory. I" (PDF), The Journal of Symbolic Logic, Association for Symbolic Logic, 16 (3): 161–190, doi:10.2307/2266389, JSTOR 2266389, MR 0045073 • Shepherdson, J. C. (1952), "Inner models for set theory. II", The Journal of Symbolic Logic, Association for Symbolic Logic, 17 (4): 225–237, doi:10.2307/2266609, JSTOR 2266609, MR 0053885 • Shepherdson, J. C. (1953), "Inner models for set theory. III", The Journal of Symbolic Logic, Association for Symbolic Logic, 18 (2): 145–167, doi:10.2307/2268947, JSTOR 2268947, MR 0057828
Wikipedia
Strong operator topology In functional analysis, a branch of mathematics, the strong operator topology, often abbreviated SOT, is the locally convex topology on the set of bounded operators on a Hilbert space H induced by the seminorms of the form $T\mapsto \|Tx\|$, as x varies in H. Equivalently, it is the coarsest topology such that, for each fixed x in H, the evaluation map $T\mapsto Tx$ (taking values in H) is continuous in T. The equivalence of these two definitions can be seen by observing that a subbase for both topologies is given by the sets $U(T_{0},x,\epsilon )=\{T:\|Tx-T_{0}x\|<\epsilon \}$ (where T0 is any bounded operator on H, x is any vector and ε is any positive real number). In concrete terms, this means that $T_{i}\to T$ in the strong operator topology if and only if $\|T_{i}x-Tx\|\to 0$ for each x in H. The SOT is stronger than the weak operator topology and weaker than the norm topology. The SOT lacks some of the nicer properties that the weak operator topology has, but being stronger, statements are sometimes easier to prove in this topology. It can also be viewed as more natural, since it is simply the topology of pointwise convergence. The SOT also provides the framework for the measurable functional calculus, just as the norm topology does for the continuous functional calculus. The linear functionals on the set of bounded operators on a Hilbert space that are continuous in the SOT are precisely those continuous in the weak operator topology (WOT). Because of this, the closure of a convex set of operators in the WOT is the same as the closure of that set in the SOT. This language translates into convergence properties of Hilbert space operators: for a complex Hilbert space, it is easy to verify, using the polarization identity, that strong operator convergence implies weak operator convergence. See also • Strongly continuous semigroup • Topologies on the set of operators on a Hilbert space References • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. • Pedersen, Gert (1989). Analysis Now. Springer. ISBN 0-387-96788-5. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
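The contrast between the two modes of convergence is easy to see numerically. A minimal sketch (an illustration under this example's own assumption of a finite truncation of $\ell ^{2}$): the coordinate projections $P_{n}$ converge to the identity in the strong operator topology, since $\|P_{n}x-x\|\to 0$ for each fixed vector x, while $\|P_{n}-I\|=1$ for every n < N, so there is no convergence in the norm topology.

import numpy as np

N = 200                                     # ambient truncation of l^2
x = 1.0 / np.arange(1, N + 1)               # a fixed square-summable vector

for n in [10, 50, 100, 150]:
    P = np.zeros((N, N))
    P[:n, :n] = np.eye(n)                   # projection onto the first n coordinates
    sot_gap = np.linalg.norm(P @ x - x)     # tends to 0 as n grows (SOT)
    norm_gap = np.linalg.norm(P - np.eye(N), 2)   # spectral norm of P - I, stays 1
    print(n, sot_gap, norm_gap)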
Wikipedia
Strongly embedded subgroup In finite group theory, an area of abstract algebra, a strongly embedded subgroup of a finite group G is a proper subgroup H of even order such that H ∩ Hg has odd order whenever g is not in H. The Bender–Suzuki theorem, proved by Bender (1971) extending work of Suzuki (1962, 1964), classifies the groups G with a strongly embedded subgroup H. It states that either 1. G has cyclic or generalized quaternion Sylow 2-subgroups and H contains the centralizer of an involution 2. or G/O(G) has a normal subgroup of odd index isomorphic to one of the simple groups PSL2(q), Sz(q) or PSU3(q) where q≥4 is a power of 2 and H is O(G)NG(S) for some Sylow 2-subgroup S. Peterfalvi (2000, part II) revised Suzuki's part of the proof. Aschbacher (1974) extended Bender's classification to groups with a proper 2-generated core. References • Aschbacher, Michael (1974), "Finite groups with a proper 2-generated core", Transactions of the American Mathematical Society, 197: 87–112, doi:10.2307/1996929, ISSN 0002-9947, JSTOR 1996929, MR 0364427 • Bender, Helmut (1971), "Transitive Gruppen gerader Ordnung, in denen jede Involution genau einen Punkt festläßt" [Transitive groups of even order in which every involution fixes exactly one point], Journal of Algebra, 17: 527–554, doi:10.1016/0021-8693(71)90008-1, ISSN 0021-8693, MR 0288172 • Peterfalvi, Thomas (2000), Character theory for the odd order theorem, London Mathematical Society Lecture Note Series, vol. 272, Cambridge University Press, ISBN 978-0-521-64660-4, MR 1747393 • Suzuki, Michio (1962), "On a class of doubly transitive groups", Annals of Mathematics, Second Series, 75: 105–145, doi:10.2307/1970423, hdl:2027/mdp.39015095249804, ISSN 0003-486X, JSTOR 1970423, MR 0136646 • Suzuki, Michio (1964), "On a class of doubly transitive groups. II", Annals of Mathematics, Second Series, 79: 514–589, doi:10.2307/1970408, ISSN 0003-486X, JSTOR 1970408, MR 0162840
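The definition can be verified by brute force in a small case. An illustrative sketch (the choice G = A5 ≅ PSL2(4), with H the stabilizer of a point, a copy of A4 normalizing a Sylow 2-subgroup, is this example's own): for every g outside H, the intersection H ∩ Hg is the stabilizer of two points, of odd order 3.

from itertools import permutations

def parity(p):
    # parity of a permutation = number of inversions mod 2
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2

def compose(p, q):
    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = [p for p in permutations(range(5)) if parity(p) == 0]   # A5, order 60
H = {p for p in G if p[4] == 4}                             # point stabilizer, a copy of A4

assert len(H) % 2 == 0                                      # H has even order (here 12)
for g in G:
    if g in H:
        continue
    Hg = {compose(inverse(g), compose(h, g)) for h in H}    # the conjugate H^g
    assert len(H & Hg) % 2 == 1                             # intersection has odd order (here 3)
print("H is strongly embedded in A5")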
Wikipedia
Exposed point In mathematics, an exposed point of a convex set $C$ is a point $x\in C$ at which some continuous linear functional attains its strict maximum over $C$. Such a functional is then said to expose $x$. There can be many exposing functionals for $x$. The set of exposed points of $C$ is usually denoted $\exp(C)$. A stronger notion is that of strongly exposed point of $C$ which is an exposed point $x\in C$ such that some exposing functional $f$ of $x$ attains its strong maximum over $C$ at $x$, i.e. for each sequence $(x_{n})\subset C$ we have the following implication: $f(x_{n})\to \max f(C)\Longrightarrow \|x_{n}-x\|\to 0$. The set of all strongly exposed points of $C$ is usually denoted $\operatorname {str} \exp(C)$. There are two weaker notions, that of extreme point and that of support point of $C$.
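A simple illustration (an example added for concreteness, not from the article): let $C=\{x\in \mathbb {R} ^{2}:\|x\|_{2}\leq 1\}$ be the closed unit disk. Every boundary point $u$ is exposed by the functional $f(x)=\langle u,x\rangle $, since $f(x)\leq \|x\|_{2}\leq 1=f(u)$ with equality only at $x=u$. Each such $u$ is in fact strongly exposed: if $x_{n}\in C$ and $f(x_{n})\to 1$, then $\|x_{n}-u\|^{2}=\|x_{n}\|^{2}-2f(x_{n})+1\leq 2-2f(x_{n})\to 0$. By contrast, for the square $[-1,1]^{2}$ only the four vertices are exposed; the remaining boundary points are support points but not exposed points.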
Wikipedia
Stone's method In numerical analysis, Stone's method, also known as the strongly implicit procedure or SIP, is an algorithm for solving a sparse linear system of equations. The method uses an incomplete LU decomposition, which approximates the exact LU decomposition, to get an iterative solution of the problem. The method is named after Harold S. Stone, who proposed it in 1968. The LU decomposition is an excellent general-purpose linear equation solver. Its biggest disadvantage is that it fails to exploit the sparsity of the coefficient matrix: the LU decomposition of a sparse matrix is usually not sparse, so for a large system of equations LU decomposition may require a prohibitive amount of memory and arithmetical operations. In preconditioned iterative methods, the convergence is faster when the preconditioner matrix M is a good approximation of the coefficient matrix A. This leads to the idea of using an approximate factorization LU of A as the iteration matrix M. A version of the incomplete lower-upper decomposition method was proposed by Stone in 1968. It is designed for systems of equations arising from the discretisation of partial differential equations, and was first used for a pentadiagonal system of equations obtained while solving an elliptic partial differential equation in two-dimensional space by a finite difference method. The approximate LU decomposition was sought in the same pentadiagonal form as the original matrix (three diagonals for L and three diagonals for U), as the best match of the seven available diagonals to the five diagonals of the original matrix in each row. Algorithm

method stone is
    For the linear system Ax = b
    calculate the incomplete LU factorization of matrix A:
        Ax = (M − N)x = (LU − N)x = b
        Mx(k+1) = Nx(k) + b, with ||M|| >> ||N||
        Mx(k+1) = LUx(k+1) = c(k)
        LUx(k+1) = L(Ux(k+1)) = Ly(k) = c(k)
    set a guess x(0), k = 0
    r(0) = b − Ax(0)
    while ( ||r(k)||2 ≥ ε ) do
        evaluate the new right-hand side c(k) = Nx(k) + b
        solve Ly(k) = c(k) by forward substitution, y(k) = L−1c(k)
        solve Ux(k+1) = y(k) by back substitution, x(k+1) = U−1y(k)
        k = k + 1
        r(k) = b − Ax(k)
    end while

References • Stone, H. L. (1968). "Iterative Solution of Implicit Approximations of Multidimensional Partial Differential Equations". SIAM Journal on Numerical Analysis. 5 (3): 530–538. Bibcode:1968SJNA....5..530S. doi:10.1137/0705044. hdl:10338.dmlcz/104038. - the original article • Ferziger, J. H.; Peric, M. (2001). Computational Methods for Fluid Dynamics. Springer-Verlag, Berlin. ISBN 3-540-42074-6. • Acosta, J. M. (2001). Numerical Algorithms for Three Dimensional Computational Fluid Dynamic Problems. PhD Thesis. Polytechnic University of Catalonia. • This article incorporates text from the article Stone's_method on CFD-Wiki that is under the GFDL license.
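The iteration is straightforward to prototype. In the sketch below (an illustration only: SciPy's general-purpose spilu stands in for Stone's pentadiagonal-preserving factorization, and the 1-D Poisson test matrix is an assumption of this example), the splitting M = LU is applied in the equivalent correction form x(k+1) = x(k) + M−1r(k), which follows from Mx(k+1) = Nx(k) + b with N = M − A.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sip_solve(A, b, tol=1e-8, max_iter=500):
    ilu = spla.spilu(A.tocsc())       # incomplete LU: M = LU approximates A
    x = np.zeros_like(b)
    r = b - A @ x
    while np.linalg.norm(r) >= tol and max_iter > 0:
        x = x + ilu.solve(r)          # solve M dx = r by forward/back substitution
        r = b - A @ x                 # new residual
        max_iter -= 1
    return x

n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)
x = sip_solve(A, b)
print(np.linalg.norm(A @ x - b))      # residual norm at convergence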
Wikipedia
Strongly measurable function Strong measurability has a number of different meanings, some of which are explained below. Values in Banach spaces For a function f with values in a Banach space (or Fréchet space), strong measurability usually means Bochner measurability. However, if the values of f lie in the space ${\mathcal {L}}(X,Y)$ of continuous linear operators from X to Y, then strong measurability often means instead that the map $t\mapsto f(t)x$ is Bochner measurable for each fixed x in X, whereas the Bochner measurability of f itself is called uniform measurability (cf. "uniformly continuous" vs. "strongly continuous"). Semigroups A semigroup of linear operators can be strongly measurable yet not strongly continuous.[1] It is uniformly measurable if and only if it is uniformly continuous, i.e., if and only if its generator is bounded. References 1. Example 6.1.10 in Linear Operators and Their Spectra, Cambridge University Press (2007) by E. B. Davies
Wikipedia
Strongly minimal theory In model theory—a branch of mathematical logic—a minimal structure is an infinite one-sorted structure such that every subset of its domain that is definable with parameters is either finite or cofinite. A strongly minimal theory is a complete theory all models of which are minimal. A strongly minimal structure is a structure whose theory is strongly minimal. Thus a structure is minimal precisely when the only parametrically definable subsets of its domain are the unavoidable ones: the finite and the cofinite sets, which are already parametrically definable in the pure language of equality. Strong minimality was one of the early notions in the new field of classification theory and stability theory that was opened up by Morley's theorem on totally categorical structures. The nontrivial standard examples of strongly minimal theories are the one-sorted theories of infinite-dimensional vector spaces, and the theories ACFp of algebraically closed fields of characteristic p. As the example ACFp shows, the parametrically definable subsets of the square of the domain of a minimal structure can be relatively complicated ("curves"). More generally, a subset of a structure that is defined as the set of realizations of a formula φ(x) is called a minimal set if every parametrically definable subset of it is either finite or cofinite. It is called a strongly minimal set if this is true even in all elementary extensions. A strongly minimal set, equipped with the closure operator given by algebraic closure in the model-theoretic sense, is an infinite matroid, or pregeometry. A model of a strongly minimal theory is determined up to isomorphism by its dimension as a matroid. Totally categorical theories are controlled by a strongly minimal set; this fact explains (and is used in the proof of) Morley's theorem. Boris Zilber conjectured that the only pregeometries that can arise from strongly minimal sets are those that arise in vector spaces, projective spaces, or algebraically closed fields. This conjecture was refuted by Ehud Hrushovski, who developed a method known as "Hrushovski construction" to build new strongly minimal structures from finite structures. See also • C-minimal theory • o-minimal theory References • Baldwin, John T.; Lachlan, Alistair H. (1971), "On Strongly Minimal Sets", The Journal of Symbolic Logic,
36 (1): 79–96, doi:10.2307/2271517, JSTOR 2271517 • Hrushovski, Ehud (1993), "A new strongly minimal set", Annals of Pure and Applied Logic, 62 (2): 147, doi:10.1016/0168-0072(93)90171-9
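To illustrate strong minimality in the case of ACFp (a standard argument, stated here for concreteness): by quantifier elimination, every subset of an algebraically closed field K that is definable with parameters is a finite Boolean combination of zero sets of one-variable polynomials over K. A nonzero polynomial has only finitely many roots, so every such set is finite or cofinite, and since the same argument applies in every elementary extension, the theory is strongly minimal; the matroid dimension of a model is its transcendence degree over the prime field.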
Wikipedia
Strongly monotone operator In functional analysis, a set-valued mapping $A:X\to 2^{X}$ where X is a real Hilbert space is said to be strongly monotone if $\exists \,c>0{\mbox{ s.t. }}\langle u-v,x-y\rangle \geq c\|x-y\|^{2}\quad \forall x,y\in X,u\in Ax,v\in Ay.$ This is analogous to the notion of strictly increasing for scalar-valued functions of one scalar argument. See also • Monotonic function References • Zeidler. Applied Functional Analysis (AMS 108) p. 173 • Bauschke, Heinz H.; Combettes, Patrick L. (28 February 2017). Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer Science & Business Media. ISBN 978-3-319-48311-5. OCLC 1037059594.
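For example (an illustrative special case, not taken from the references): if M is a bounded self-adjoint operator on X satisfying $\langle Mh,h\rangle \geq c\|h\|^{2}$ for some c > 0, then the single-valued mapping $Ax=Mx$ satisfies $\langle Ax-Ay,x-y\rangle =\langle M(x-y),x-y\rangle \geq c\|x-y\|^{2},$ so A is strongly monotone; here $A=\nabla f$ for the strongly convex function $f(x)={\tfrac {1}{2}}\langle Mx,x\rangle $. More generally, the gradient (or subdifferential) of any strongly convex function is strongly monotone.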
Wikipedia
Strongly positive bilinear form A bilinear form a(•,•), whose arguments are elements of a normed vector space V, is a strongly positive bilinear form if and only if there exists a constant c > 0 such that $a(u,u)\geq c\cdot \|u\|^{2}$ for all $u\in V$, where $\|\cdot \|$ is the norm on V. References • Zeidler. Applied Functional Analysis (AMS 108) p. 120
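Two standard examples (stated here for illustration): on a Hilbert space, the inner product $a(u,v)=\langle u,v\rangle $ is strongly positive with c = 1; and in the theory of elliptic partial differential equations, the Dirichlet form $a(u,v)=\int _{\Omega }\nabla u\cdot \nabla v\,dx$ on the Sobolev space $H_{0}^{1}(\Omega )$ is strongly positive by the Poincaré inequality. Strong positivity is exactly the coercivity hypothesis of the Lax–Milgram theorem.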
Wikipedia
Strongly regular graph In graph theory, a strongly regular graph (SRG) is defined as follows. Let G = (V, E) be a regular graph with v vertices and degree k. G is said to be strongly regular if there are also integers λ and μ such that: • Every two adjacent vertices have λ common neighbours. • Every two non-adjacent vertices have μ common neighbours. The complement of an srg(v, k, λ, μ) is also strongly regular. It is an srg(v, v − k − 1, v − 2 − 2k + μ, v − 2k + λ). A strongly regular graph is a distance-regular graph with diameter 2 whenever μ is non-zero. It is a locally linear graph whenever λ = 1. Etymology A strongly regular graph is denoted an srg(v, k, λ, μ) in the literature. By convention, graphs which satisfy the definition trivially are excluded from detailed studies and lists of strongly regular graphs. These include the disjoint union of one or more equal-sized complete graphs,[1][2] and their complements, the complete multipartite graphs with equal-sized independent sets. Andries Brouwer and Hendrik van Maldeghem (see References below) use an alternate but fully equivalent definition of a strongly regular graph based on spectral graph theory: a strongly regular graph is a finite regular graph that has exactly three eigenvalues, only one of which is equal to the degree k, of multiplicity 1. This automatically rules out fully connected graphs (which have only two distinct eigenvalues, not three) and disconnected graphs (for which the multiplicity of the degree k equals the number of connected components, and would therefore exceed one). Much of the literature, including Brouwer, refers to the larger eigenvalue as r (with multiplicity f) and the smaller one as s (with multiplicity g). History Strongly regular graphs were introduced by R. C. Bose in 1963.[3] He built upon earlier work from the 1950s in the then-new field of spectral graph theory. Examples • The cycle of length 5 is an srg(5, 2, 0, 1). • The Petersen graph is an srg(10, 3, 0, 1). • The Clebsch graph is an srg(16, 5, 0, 2). • The Shrikhande graph is an srg(16, 6, 2, 2) which is not a distance-transitive graph. • The n × n square rook's graph, i.e., the line graph of a balanced complete bipartite graph Kn,n, is an srg($n^{2}$, 2n − 2, n − 2, 2). The parameters for n = 4 coincide with those of the Shrikhande graph, but the two graphs are not isomorphic. • The line graph of a complete graph Kn is an $ \operatorname {srg} \left({\binom {n}{2}},2(n-2),n-2,4\right)$. • The Chang graphs are srg(28, 12, 6, 4), the same as the line graph of K8, but these four graphs are not isomorphic. • Every generalized quadrangle of order (s, t) gives an srg((s + 1)(st + 1), s(t + 1), s − 1, t + 1) as its line graph. For example, GQ(2, 4) gives srg(27, 10, 1, 5) as its line graph. • The Schläfli graph is an srg(27, 16, 10, 8).[4] • The Hoffman–Singleton graph is an srg(50, 7, 0, 1). • The Sims–Gewirtz graph is an srg(56, 10, 0, 2). • The M22 graph, also known as the Mesner graph, is an srg(77, 16, 0, 4). • The Brouwer–Haemers graph is an srg(81, 20, 1, 6). • The Higman–Sims graph is an srg(100, 22, 0, 6). • The Local McLaughlin graph is an srg(162, 56, 10, 24).
• The Cameron graph is an srg(231, 30, 9, 3). • The Berlekamp–van Lint–Seidel graph is an srg(243, 22, 1, 2). • The McLaughlin graph is an srg(275, 112, 30, 56). • The Paley graph of order q is an srg(q, (q − 1)/2, (q − 5)/4, (q − 1)/4). The smallest Paley graph, with q = 5, is the 5-cycle (above). • Self-complementary arc-transitive graphs are strongly regular. A strongly regular graph is called primitive if both the graph and its complement are connected. All the above graphs are primitive, as otherwise μ = 0 or λ = k. Conway's 99-graph problem asks for the construction of an srg(99, 14, 1, 2). It is unknown whether a graph with these parameters exists, and John Horton Conway offered a $1000 prize for the solution to this problem.[5] Triangle-free graphs The strongly regular graphs with λ = 0 are triangle-free. Apart from the complete graphs on fewer than 3 vertices and all complete bipartite graphs, the seven listed earlier (pentagon, Petersen, Clebsch, Hoffman–Singleton, Gewirtz, Mesner–M22, and Higman–Sims) are the only known ones. Geodetic graphs Every strongly regular graph with $\mu =1$ is a geodetic graph, a graph in which every two vertices have a unique unweighted shortest path.[6] The only known strongly regular graphs with $\mu =1$ are those where $\lambda $ is 0, therefore triangle-free as well. These are called the Moore graphs and are explored below in more detail. Other combinations of parameters such as (400, 21, 2, 1) have not yet been ruled out. Despite ongoing research on the properties that a strongly regular graph with $\mu =1$ would have,[7][8] it is not known whether any more exist or even whether their number is finite.[6] The only elementary result known is that $\lambda $ cannot be 1 for such a graph. Algebraic properties of strongly regular graphs Basic relationship between parameters The four parameters in an srg(v, k, λ, μ) are not independent. They must obey the following relation: $(v-k-1)\mu =k(k-\lambda -1)$ The above relation is derived through a counting argument as follows: 1. Imagine the vertices of the graph to lie in three levels. Pick any vertex as the root, in Level 0. Then its k neighbors lie in Level 1, and all other vertices lie in Level 2. 2. Vertices in Level 1 are directly connected to the root, hence they must have λ other neighbors in common with the root, and these common neighbors must also be in Level 1. Since each vertex has degree k, there are $k-\lambda -1$ edges remaining for each Level 1 node to connect to vertices in Level 2. Therefore, there are $k(k-\lambda -1)$ edges between Level 1 and Level 2. 3. Vertices in Level 2 are not directly connected to the root, hence they must have μ common neighbors with the root, and these common neighbors must all be in Level 1. There are $(v-k-1)$ vertices in Level 2, and each is connected to μ vertices in Level 1. Therefore the number of edges between Level 1 and Level 2 is $(v-k-1)\mu $. 4. Equating the two expressions for the edges between Level 1 and Level 2, the relation follows. Adjacency matrix equations Let I denote the identity matrix and let J denote the matrix of ones, both matrices of order v. The adjacency matrix A of a strongly regular graph satisfies two equations. First: $AJ=JA=kJ,$ which is a restatement of the regularity requirement. This shows that k is an eigenvalue of the adjacency matrix with the all-ones eigenvector. Second: $A^{2}=kI+\lambda {A}+\mu (J-I-A)$ which expresses strong regularity.
The ij-th element of the left hand side gives the number of two-step paths from i to j. The first term of the right hand side gives the number of two-step paths from i back to i, namely k edges out and back in. The second term gives the number of two-step paths when i and j are directly connected. The third term gives the corresponding value when i and j are not connected. Since the three cases are mutually exclusive and collectively exhaustive, the simple additive equality follows. Conversely, a graph whose adjacency matrix satisfies both of the above conditions and which is not a complete or null graph is a strongly regular graph.[9] Eigenvalues and graph spectrum Since the adjacency matrix A is symmetric, it follows that its eigenvectors are orthogonal. We already observed one eigenvector above which is made of all ones, corresponding to the eigenvalue k. Therefore the other eigenvectors x must all satisfy $Jx=0$ where J is the all-ones matrix as before. Take the previously established equation: $A^{2}=kI+\lambda {A}+\mu (J-I-A)$ and multiply the above equation by eigenvector x: $A^{2}x=kIx+\lambda {A}x+\mu (J-I-A)x$ Call the corresponding eigenvalue p (not to be confused with $\lambda $ the graph parameter) and substitute $Ax=px$, $Jx=0$ and $Ix=x$: $p^{2}x=kx+\lambda px-\mu x-\mu px$ Eliminate x and rearrange to get a quadratic: $p^{2}+(\mu -\lambda )p-(k-\mu )=0$ This gives the two additional eigenvalues ${\frac {1}{2}}\left[(\lambda -\mu )\pm {\sqrt {(\lambda -\mu )^{2}+4(k-\mu )}}\,\right]$. There are thus exactly three eigenvalues for a strongly regular matrix. Conversely, a connected regular graph with only three eigenvalues is strongly regular.[10] Following the terminology in much of the strongly regular graph literature, the larger eigenvalue is called r with multiplicity f and the smaller one is called s with multiplicity g. Since the sum of all the eigenvalues is the trace of the adjacency matrix, which is zero in this case, the respective multiplicities f and g can be calculated: • Eigenvalue k has multiplicity 1. • Eigenvalue $r={\frac {1}{2}}\left[(\lambda -\mu )+{\sqrt {(\lambda -\mu )^{2}+4(k-\mu )}}\,\right]$ has multiplicity $f={\frac {1}{2}}\left[(v-1)-{\frac {2k+(v-1)(\lambda -\mu )}{\sqrt {(\lambda -\mu )^{2}+4(k-\mu )}}}\right]$. • Eigenvalue $s={\frac {1}{2}}\left[(\lambda -\mu )-{\sqrt {(\lambda -\mu )^{2}+4(k-\mu )}}\,\right]$ has multiplicity $g={\frac {1}{2}}\left[(v-1)+{\frac {2k+(v-1)(\lambda -\mu )}{\sqrt {(\lambda -\mu )^{2}+4(k-\mu )}}}\right]$. As the multiplicities must be integers, their expressions provide further constraints on the values of v, k, μ, and λ. Strongly regular graphs for which $2k+(v-1)(\lambda -\mu )\neq 0$ have integer eigenvalues with unequal multiplicities. Strongly regular graphs for which $2k+(v-1)(\lambda -\mu )=0$ are called conference graphs because of their connection with symmetric conference matrices. Their parameters reduce to $\operatorname {srg} \left(v,{\frac {1}{2}}(v-1),{\frac {1}{4}}(v-5),{\frac {1}{4}}(v-1)\right).$ Their eigenvalues are $r={\frac {-1+{\sqrt {v}}}{2}}$ and $s={\frac {-1-{\sqrt {v}}}{2}}$, both of whose multiplicities are equal to ${\frac {v-1}{2}}$. Further, in this case, v must equal the sum of two squares, related to the Bruck–Ryser–Chowla theorem. 
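As a worked illustration of these formulas (an example added for concreteness): the Paley graph of order 13 is an srg(13, 6, 2, 3). Here $2k+(v-1)(\lambda -\mu )=12+12(2-3)=0$, so it is a conference graph: its eigenvalues are $r={\frac {-1+{\sqrt {13}}}{2}}$ and $s={\frac {-1-{\sqrt {13}}}{2}}$, each with multiplicity $(v-1)/2=6$, and $v=13=2^{2}+3^{2}$ is indeed a sum of two squares.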
Further properties of the eigenvalues and their multiplicities are: • $(A-rI)(A-sI)=\mu J$, therefore $(k-r)(k-s)=\mu v$ • $\lambda -\mu =r+s$ • $k-\mu =-rs$ • $k\geq r$ • Given an srg(v, k, λ, μ) with eigenvalues r and s, its complement srg(v, v − k − 1, v − 2 − 2k + μ, v − 2k + λ) has eigenvalues −1−s and −1−r. • Alternate equations for the multiplicities are $f={\frac {(s+1)k(k-s)}{\mu (s-r)}}$ and $g={\frac {(r+1)k(k-r)}{\mu (r-s)}}$ • The frame quotient condition: $vk(v-k-1)=fg(r-s)^{2}$. As a corollary, $v=(r-s)^{2}$ if and only if $\{f,g\}=\{k,v-k-1\}$ in some order. • Krein conditions: $(v-k-1)^{2}(k^{2}+r^{3})\geq (r+1)^{3}k^{2}$ and $(v-k-1)^{2}(k^{2}+s^{3})\geq (s+1)^{3}k^{2}$ • Absolute bound: $v\leq {\frac {f(f+3)}{2}}$ and $v\leq {\frac {g(g+3)}{2}}$. • Claw bound: if $r+1>{\frac {s(s+1)(\mu +1)}{2}}$, then $\mu =s^{2}$ or $\mu =s(s+1)$. If the above conditions are violated for a set of parameters, then no strongly regular graph with those parameters exists. Brouwer has compiled lists of the known existence and non-existence results, with reasons for non-existence where known. The Hoffman–Singleton theorem As noted above, the multiplicities of the eigenvalues are given by $M_{\pm }={\frac {1}{2}}\left[(v-1)\pm {\frac {2k+(v-1)(\lambda -\mu )}{\sqrt {(\lambda -\mu )^{2}+4(k-\mu )}}}\right]$ which must be integers. In 1960, Alan Hoffman and Robert Singleton examined those expressions when applied to Moore graphs that have λ = 0 and μ = 1. Such graphs are free of triangles (otherwise λ would exceed zero) and quadrilaterals (otherwise μ would exceed 1), hence they have a girth (smallest cycle length) of 5. Substituting the values of λ and μ in the equation $(v-k-1)\mu =k(k-\lambda -1)$, it can be seen that $v=k^{2}+1$, and the eigenvalue multiplicities reduce to $M_{\pm }={\frac {1}{2}}\left[k^{2}\pm {\frac {2k-k^{2}}{\sqrt {4k-3}}}\right]$ For the multiplicities to be integers, the quantity ${\frac {2k-k^{2}}{\sqrt {4k-3}}}$ must be rational, therefore either the numerator $2k-k^{2}$ is zero or the denominator ${\sqrt {4k-3}}$ is an integer. If the numerator $2k-k^{2}$ is zero, the possibilities are: • k = 0 and v = 1 yields a trivial graph with one vertex and no edges, and • k = 2 and v = 5 yields the 5-vertex cycle graph $C_{5}$, usually drawn as a regular pentagon. If the denominator ${\sqrt {4k-3}}$ is an integer t, then $4k-3$ is a perfect square $t^{2}$, so $k={\frac {t^{2}+3}{4}}$. Substituting: ${\begin{aligned}M_{\pm }&={\frac {1}{2}}\left[\left({\frac {t^{2}+3}{4}}\right)^{2}\pm {\frac {{\frac {t^{2}+3}{2}}-\left({\frac {t^{2}+3}{4}}\right)^{2}}{t}}\right]\\32M_{\pm }&=(t^{2}+3)^{2}\pm {\frac {8(t^{2}+3)-(t^{2}+3)^{2}}{t}}\\&=t^{4}+6t^{2}+9\pm {\frac {-t^{4}+2t^{2}+15}{t}}\\&=t^{4}+6t^{2}+9\pm \left(-t^{3}+2t+{\frac {15}{t}}\right)\end{aligned}}$ Since both sides are integers, ${\frac {15}{t}}$ must be an integer, therefore t is a factor of 15, namely $t\in \{\pm 1,\pm 3,\pm 5,\pm 15\}$, therefore $k\in \{1,3,7,57\}$. In turn: • k = 1 and v = 2 yields a trivial graph of two vertices joined by an edge, • k = 3 and v = 10 yields the Petersen graph, • k = 7 and v = 50 yields the Hoffman–Singleton graph, discovered by Hoffman and Singleton in the course of this analysis, and • k = 57 and v = 3250 predicts a famous graph that, since its prediction in 1960, has neither been found nor been shown not to exist.[11] The Hoffman–Singleton theorem states that there are no strongly regular girth-5 Moore graphs except the ones listed above.
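These identities are easy to verify on a concrete example. The following sketch (an illustration; the Kneser-style construction of the Petersen graph is this example's own choice) checks the matrix equation $A^{2}=kI+\lambda A+\mu (J-I-A)$ and the three-eigenvalue spectrum for the Petersen graph, an srg(10, 3, 0, 1) with r = 1 (f = 5) and s = −2 (g = 4).

import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets
# of {0,...,4}, adjacent exactly when disjoint.
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j and not set(verts[i]) & set(verts[j]):
            A[i, j] = 1

v, k, lam, mu = 10, 3, 0, 1
I, J = np.eye(n, dtype=int), np.ones((n, n), dtype=int)

# the strong-regularity identity
assert np.array_equal(A @ A, k * I + lam * A + mu * (J - I - A))

# exactly three eigenvalues: 3 (once), 1 (five times), -2 (four times)
eig = np.round(np.linalg.eigvalsh(A)).astype(int)
print(sorted(eig))    # [-2, -2, -2, -2, 1, 1, 1, 1, 1, 3]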
See also • Partial geometry • Seidel adjacency matrix • Two-graph Notes 1. Brouwer, Andries E.; Haemers, Willem H. Spectra of Graphs. p. 101 Archived 2012-03-16 at the Wayback Machine 2. Godsil, Chris; Royle, Gordon. Algebraic Graph Theory. Springer-Verlag New York, 2001, p. 218. 3. Bose, R. C. (1963), "Strongly regular graphs, partial geometries and partially balanced designs", Pacific J. Math. 13: 389–419, https://projecteuclid.org/euclid.pjm/1103035734. (p. 122) 4. Weisstein, Eric W., "Schläfli graph", MathWorld 5. Conway, John H., Five $1,000 Problems (Update 2017) (PDF), Online Encyclopedia of Integer Sequences, retrieved 2019-02-12 6. Blokhuis, A.; Brouwer, A. E. (1988), "Geodetic graphs of diameter two", Geometriae Dedicata, 25 (1–3): 527–533, doi:10.1007/BF00191941, MR 0925851, S2CID 189890651 7. Deutsch, J.; Fisher, P. H. (2001), "On strongly regular graphs with $\mu =1$", European Journal of Combinatorics, 22 (3): 303–306, doi:10.1006/eujc.2000.0472, MR 1822718 8. Belousov, I. N.; Makhnev, A. A. (2006), "On strongly regular graphs with $\mu =1$ and their automorphisms", Doklady Akademii Nauk, 410 (2): 151–155, MR 2455371 9. Cameron, P. J.; van Lint, J. H. (1991), Designs, Graphs, Codes and their Links, London Mathematical Society Student Texts 22, Cambridge University Press, p. 37, ISBN 978-0-521-42385-4 10. Godsil, Chris; Royle, Gordon. Algebraic Graph Theory. Springer-Verlag, New York, 2001, Lemma 10.2.1. 11. Dalfó, C. (2019), "A survey on the missing Moore graph", Linear Algebra and Its Applications, 569: 1–14, doi:10.1016/j.laa.2018.12.035, hdl:2117/127212, MR 3901732, S2CID 126689579 References • Andries Brouwer and Hendrik van Maldeghem (2022), Strongly Regular Graphs. Cambridge: Cambridge University Press. ISBN 1316512037. ISBN 978-1316512036 • A. E. Brouwer, A. M. Cohen, and A. Neumaier (1989), Distance Regular Graphs. Berlin, New York: Springer-Verlag. ISBN 3-540-50619-5, ISBN 0-387-50619-5 • Chris Godsil and Gordon Royle (2004), Algebraic Graph Theory. New York: Springer-Verlag. ISBN 0-387-95241-1 External links • Eric W. Weisstein, Mathworld article with numerous examples. • Gordon Royle, List of larger graphs and families. • Andries E. Brouwer, Parameters of Strongly Regular Graphs. • Brendan McKay, Some collections of graphs. • Ted Spence, Strongly regular graphs on at most 64 vertices.
Wikipedia
Ribbon category In mathematics, a ribbon category, also called a tortile category, is a particular type of braided monoidal category. Definition A monoidal category ${\mathcal {C}}$ is, loosely speaking, a category equipped with a notion resembling the tensor product (of vector spaces, say). That is, for any two objects $C_{1},C_{2}\in {\mathcal {C}}$, there is an object $C_{1}\otimes C_{2}\in {\mathcal {C}}$. The assignment $C_{1},C_{2}\mapsto C_{1}\otimes C_{2}$ is required to be functorial and to satisfy a number of further properties, such as the existence of a unit object 1 and an associativity isomorphism. Such a category is called braided if there are isomorphisms $c_{C_{1},C_{2}}:C_{1}\otimes C_{2}{\stackrel {\cong }{\rightarrow }}C_{2}\otimes C_{1}.$ A braided monoidal category is called a ribbon category if the category is left rigid and has a family of twists. The former means that for each object $C$ there is another object (called the left dual), $C^{*}$, with maps $1\rightarrow C\otimes C^{*},C^{*}\otimes C\rightarrow 1$ such that the composition $C^{*}\cong C^{*}\otimes 1\rightarrow C^{*}\otimes (C\otimes C^{*})\cong (C^{*}\otimes C)\otimes C^{*}\rightarrow 1\otimes C^{*}\cong C^{*}$ equals the identity of $C^{*}$, and similarly for $C$. The twists are maps $\theta _{C}:C\rightarrow C$, one for each object $C\in {\mathcal {C}}$, such that ${\begin{aligned}\theta _{C_{1}\otimes C_{2}}&=c_{C_{2},C_{1}}c_{C_{1},C_{2}}(\theta _{C_{1}}\otimes \theta _{C_{2}})\\\theta _{1}&=\mathrm {id} \\\theta _{C^{*}}&=(\theta _{C})^{*}.\end{aligned}}$ To be a ribbon category, the duals thus have to be compatible with the braiding and the twists. Concrete Example Consider the category $\mathbf {FdVect} (\mathbb {C} )$ of finite-dimensional vector spaces over $\mathbb {C} $. Suppose that $C$ is such a vector space, spanned by the basis vectors ${\hat {e_{1}}},{\hat {e_{2}}},\cdots ,{\hat {e_{n}}}$. We assign to $C$ the dual object $C^{\dagger }$ spanned by the basis vectors ${\hat {e}}^{1},{\hat {e}}^{2},\cdots ,{\hat {e}}^{n}$. Then let us define ${\begin{aligned}\cdot :\ C^{\dagger }\otimes C&\to 1\\{\hat {e}}^{i}\cdot {\hat {e_{j}}}&\mapsto {\begin{cases}1&i=j\\0&i\neq j\end{cases}}\end{aligned}}$ and its dual ${\begin{aligned}kI_{n}:1&\to C\otimes C^{\dagger }\\k&\mapsto k\sum _{i=1}^{n}{\hat {e_{i}}}\otimes {\hat {e}}^{i}\\&={\begin{pmatrix}k&0&\cdots &0\\0&k&&\vdots \\&&\ddots &\\0&\cdots &&k\end{pmatrix}}\end{aligned}}$ (which largely amounts to assigning a given ${\hat {e_{i}}}$ the dual ${\hat {e}}^{i}$). Then indeed we find that (for example) ${\begin{aligned}{\hat {e}}^{i}&\cong {\hat {e}}^{i}\otimes 1\\&{\underset {I_{n}}{\to }}{\hat {e}}^{i}\otimes \sum _{j=1}^{n}{\hat {e_{j}}}\otimes {\hat {e}}^{j}\\&\cong \sum _{j=1}^{n}\left({\hat {e}}^{i}\otimes {\hat {e_{j}}}\right)\otimes {\hat {e}}^{j}\\&{\underset {\cdot }{\to }}\sum _{j=1}^{n}{\begin{cases}1\otimes {\hat {e}}^{j}&i=j\\0\otimes {\hat {e}}^{j}&i\neq j\end{cases}}\\&=1\otimes {\hat {e}}^{i}\cong {\hat {e}}^{i}\end{aligned}}$ and similarly for ${\hat {e_{i}}}$. Since this proof applies to any finite-dimensional vector space, we have shown that our structure over $\mathbf {FdVect} $ defines a (left) rigid monoidal category. Then we must define braidings and twists in such a way that they are compatible; in this case, choosing one largely determines the other.
For example, if we take the trivial braiding ${\begin{aligned}c_{C_{1},C_{2}}:C_{1}\otimes C_{2}&\to C_{2}\otimes C_{1}\\c_{C_{1},C_{2}}(a,b)&\mapsto (b,a)\end{aligned}}$ then $c_{C_{1},C_{2}}c_{C_{2},C_{1}}=\mathrm {id} _{C_{1}\otimes C_{2}}$, so our twist must obey $\theta _{C_{1}\otimes C_{2}}=\theta _{C_{1}}\otimes \theta _{C_{2}}$. In other words, it must act elementwise across tensor products. But any object $C\in \mathbf {FdVect} $ can be written in the form $C=\bigotimes _{i=1}^{n}1$ for some $n$, so that $\theta _{C}=\bigotimes _{i=1}^{n}\theta _{1}=\bigotimes _{i=1}^{n}\mathrm {id} =\mathrm {id} _{C}$; our twists must therefore also be trivial. On the other hand, we can introduce any nonzero multiplicative factor into the above braiding rule without breaking isomorphism (at least in $\mathbb {C} $). Let us for example take the braiding ${\begin{aligned}c_{C_{1},C_{2}}:C_{1}\otimes C_{2}&\to C_{2}\otimes C_{1}\\c_{C_{1},C_{2}}(a,b)&\mapsto i(b,a)\end{aligned}}$ Then $c_{C_{1},C_{2}}c_{C_{2},C_{1}}=-\mathrm {id} _{C_{1}\otimes C_{2}}$. Since $\theta _{1}=\mathrm {id} $, we get $\theta _{1\otimes 1}=-\mathrm {id} _{1\otimes 1}$; by induction, if $C$ is $n$-dimensional, then $\theta _{C}=(-1)^{n+1}\mathrm {id} _{C}$. Other Examples • The category of projective modules over a commutative ring. In this category, the monoidal structure is the tensor product, the dual object is the dual in the sense of (linear) algebra, which is again projective. The twists in this case are the identity maps. • A more sophisticated example of a ribbon category is the category of finite-dimensional representations of a quantum group.[1] The name ribbon category is motivated by a graphical depiction of morphisms.[2] Variant A strongly ribbon category is a ribbon category C equipped with a dagger structure such that the functor †: Cop → C coherently preserves the ribbon structure. References 1. Turaev 2020, XI. An algebraic construction of modular categories 2. Turaev 2020, p. 25 • Turaev, V.G. (2020) [1994]. Quantum Invariants of Knots and 3-Manifolds. de Gruyter. ISBN 978-3-11-088327-5. • Yetter, David N. (2001). Functorial Knot Theory. World Scientific. ISBN 978-981-281-046-5. OCLC 1149402321. • Ribbon category at the nLab
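The zig-zag computation displayed in the Concrete Example section can also be carried out numerically. A small sketch (an illustration in NumPy; the index conventions are this example's own): coevaluation followed by evaluation contracts to the identity on C, the "snake" identity that makes $C^{\dagger }$ a left dual.

import numpy as np

n = 4
coev = np.eye(n)   # component (a, b) of coev(1) = sum_i e_i (x) e^i
ev = np.eye(n)     # component (b, c) of the pairing e^b . e_c = delta_bc

x = np.arange(1.0, n + 1)                   # an arbitrary vector in C

step1 = np.einsum('ab,c->abc', coev, x)     # coev (x) id : C -> C (x) C* (x) C
step2 = np.einsum('abc,bc->a', step1, ev)   # id (x) ev : C (x) C* (x) C -> C
assert np.allclose(step2, x)                # the composite is id_C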
Wikipedia
Von Neumann regular ring In mathematics, a von Neumann regular ring is a ring R (associative, with 1, not necessarily commutative) such that for every element a in R there exists an x in R with a = axa. One may think of x as a "weak inverse" of the element a; in general x is not uniquely determined by a. Von Neumann regular rings are also called absolutely flat rings, because these rings are characterized by the fact that every left R-module is flat. Von Neumann regular rings were introduced by von Neumann (1936) under the name of "regular rings", in the course of his study of von Neumann algebras and continuous geometry. Von Neumann regular rings should not be confused with the unrelated regular rings and regular local rings of commutative algebra. An element a of a ring is called a von Neumann regular element if there exists an x such that a = axa.[1] An ideal ${\mathfrak {i}}$ is called a (von Neumann) regular ideal if for every element a in ${\mathfrak {i}}$ there exists an element x in ${\mathfrak {i}}$ such that a = axa.[2] Examples Every field (and every skew field) is von Neumann regular: for a ≠ 0 we can take x = a−1.[1] An integral domain is von Neumann regular if and only if it is a field. Every direct product of von Neumann regular rings is again von Neumann regular. Another important class of examples of von Neumann regular rings are the rings Mn(K) of n-by-n square matrices with entries from some field K. If r is the rank of A ∈ Mn(K), Gaussian elimination gives invertible matrices U and V such that $A=U{\begin{pmatrix}I_{r}&0\\0&0\end{pmatrix}}V$ (where Ir is the r-by-r identity matrix). If we set X = V−1U−1, then $AXA=U{\begin{pmatrix}I_{r}&0\\0&0\end{pmatrix}}{\begin{pmatrix}I_{r}&0\\0&0\end{pmatrix}}V=U{\begin{pmatrix}I_{r}&0\\0&0\end{pmatrix}}V=A.$ More generally, the n × n matrix ring over any von Neumann regular ring is again von Neumann regular.[1] If V is a vector space over a field (or skew field) K, then the endomorphism ring EndK(V) is von Neumann regular, even if V is not finite-dimensional.[3] Generalizing the above examples, suppose S is some ring and M is an S-module such that every submodule of M is a direct summand of M (such modules M are called semisimple). Then the endomorphism ring EndS(M) is von Neumann regular. In particular, every semisimple ring is von Neumann regular. Indeed, the semisimple rings are precisely the Noetherian von Neumann regular rings. The ring of affiliated operators of a finite von Neumann algebra is von Neumann regular. A Boolean ring is a ring in which every element satisfies a2 = a. Every Boolean ring is von Neumann regular. Facts The following statements are equivalent for the ring R: • R is von Neumann regular • every principal left ideal is generated by an idempotent element • every finitely generated left ideal is generated by an idempotent • every principal left ideal is a direct summand of the left R-module R • every finitely generated left ideal is a direct summand of the left R-module R • every finitely generated submodule of a projective left R-module P is a direct summand of P • every left R-module is flat: this is also known as R being absolutely flat, or R having weak dimension 0 • every short exact sequence of left R-modules is pure exact. The corresponding statements for right modules are also equivalent to R being von Neumann regular. Every von Neumann regular ring has Jacobson radical {0} and is thus semiprimitive (also called "Jacobson semi-simple"). 
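The matrix-ring example above can be checked numerically. A short sketch (an illustration; over the real numbers the Moore–Penrose pseudoinverse is one convenient choice of weak inverse, used here in place of the Gaussian-elimination construction in the text):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))   # a singular 5x5 matrix of rank 3

X = np.linalg.pinv(A)                  # a weak inverse of A
assert np.allclose(A @ X @ A, A)       # A X A = A: A is von Neumann regular
assert np.allclose(X @ A @ X, X)       # the pseudoinverse also satisfies X A X = X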
In a commutative von Neumann regular ring, for each element x there is a unique element y such that xyx=x and yxy=y, so there is a canonical way to choose the "weak inverse" of x. The following statements are equivalent for the commutative ring R: • R is von Neumann regular. • R has Krull dimension 0 and is reduced. • Every localization of R at a maximal ideal is a field. • R is a subring of a product of fields closed under taking "weak inverses" of x ∈ R (the unique element y such that xyx=x and yxy=y). • R is a V-ring.[4] • R has the right-lifting property against the ring homomorphism $ \mathbb {Z} [t]\to \mathbb {Z} [t^{\pm }]\times \mathbb {Z} $ determined by $ t\mapsto (t,0)$, or said geometrically, every regular function $ \mathrm {Spec} (R)\to \mathbb {A} ^{1}$ factors through the morphism of schemes $\{0\}\sqcup \mathbb {G} _{m}\to \mathbb {A} ^{1}$.[5] Also, the following are equivalent: for a commutative ring A • R = A / nil(A) is von Neumann regular. • The spectrum of A is Hausdorff (in the Zariski topology). • The constructible topology and Zariski topology for Spec(A) coincide. Generalizations and specializations Special types of von Neumann regular rings include unit regular rings and strongly von Neumann regular rings and rank rings. A ring R is called unit regular if for every a in R, there is a unit u in R such that a = aua. Every semisimple ring is unit regular, and unit regular rings are directly finite rings. An ordinary von Neumann regular ring need not be directly finite. A ring R is called strongly von Neumann regular if for every a in R, there is some x in R with a = aax. The condition is left-right symmetric. Strongly von Neumann regular rings are unit regular. Every strongly von Neumann regular ring is a subdirect product of division rings. In some sense, this more closely mimics the properties of commutative von Neumann regular rings, which are subdirect products of fields. Of course for commutative rings, von Neumann regular and strongly von Neumann regular are equivalent. In general, the following are equivalent for a ring R: • R is strongly von Neumann regular • R is von Neumann regular and reduced • R is von Neumann regular and every idempotent in R is central • Every principal left ideal of R is generated by a central idempotent Generalizations of von Neumann regular rings include π-regular rings, left/right semihereditary rings, left/right nonsingular rings and semiprimitive rings. See also • Regular semigroup • Weak inverse Notes 1. Kaplansky (1972) p.110 2. Kaplansky (1972) p.112 3. Skornyakov 4. Michler, G.O.; Villamayor, O.E. (April 1973). "On rings whose simple modules are injective". Journal of Algebra. 25 (1): 185–201. doi:10.1016/0021-8693(73)90088-4. hdl:20.500.12110/paper_00218693_v25_n1_p185_Michler. 5. Burklund, Robert; Schlank, Tomer M.; Yuan, Allen (2022-07-20). "The Chromatic Nullstellensatz". p. 50. arXiv:2207.09929 [math.AT]. References • Kaplansky, Irving (1972), Fields and rings, Chicago lectures in mathematics (Second ed.), University of Chicago Press, ISBN 0-226-42451-0, Zbl 1001.16500 • L.A. Skornyakov (2001) [1994], "Regular ring (in the sense of von Neumann)", Encyclopedia of Mathematics, EMS Press Further reading • Goodearl, K. R. (1991), von Neumann regular rings (2 ed.), Malabar, FL: Robert E. Krieger Publishing Co. Inc., pp. xviii+412, ISBN 0-89464-632-X, MR 1150975, Zbl 0749.16001 • von Neumann, John (1936), "On Regular Rings", Proc. Natl. Acad. Sci. 
USA, 22 (12): 707–713, Bibcode:1936PNAS...22..707V, doi:10.1073/pnas.22.12.707, JFM 62.1103.03, PMC 1076849, PMID 16577757, Zbl 0015.38802 • von Neumann, John (1960), Continuous geometries, Princeton University Press, Zbl 0171.28003
Wikipedia
Epigroup In abstract algebra, an epigroup is a semigroup in which every element has a power that belongs to a subgroup. Formally, for all x in a semigroup S, there exists a positive integer n and a subgroup G of S such that xn belongs to G. Epigroups are known by a wide variety of other names, including quasi-periodic semigroup, group-bound semigroup, completely π-regular semigroup, strongly π-regular semigroup (sπr[1]),[2] or just π-regular semigroup[3] (although the latter is ambiguous). More generally, in an arbitrary semigroup an element is called group-bound if it has a power that belongs to a subgroup. Epigroups have applications to ring theory. Many of their properties are studied in this context.[4] Epigroups were first studied by Douglas Munn in 1961, who called them pseudoinvertible.[5] Properties • Epigroups are a generalization of periodic semigroups,[6] thus all finite semigroups are also epigroups. • The class of epigroups also contains all completely regular semigroups and all completely 0-simple semigroups.[5] • All epigroups are also eventually regular semigroups.[7] (also known as π-regular semigroups) • A cancellative epigroup is a group.[8] • Green's relations D and J coincide for any epigroup.[9] • If S is an epigroup, any regular subsemigroup of S is also an epigroup.[1] • In an epigroup the Nambooripad order (as extended by P.R. Jones) and the natural partial order (of Mitsch) coincide.[10] Examples • The semigroup of all square matrices of a given size over a division ring is an epigroup.[5] • The multiplicative semigroup of every semisimple Artinian ring is an epigroup.[4]: 5  • Any algebraic semigroup is an epigroup. Structure By analogy with periodic semigroups, an epigroup S is partitioned into classes given by its idempotents, which act as identities for each subgroup. For each idempotent e of S, the set: $K_{e}=\{x\in S\mid \exists n>0:x^{n}\in G_{e}\}$ is called a unipotency class (whereas for periodic semigroups the usual name is torsion class). Subsemigroups of an epigroup need not be epigroups, but if they are, then they are called subepigroups. If an epigroup S has a partition into unipotent subepigroups (i.e. each containing a single idempotent), then this partition is unique, and its components are precisely the unipotency classes defined above; such an epigroup is called unipotently partitionable. However, not every epigroup has this property. A simple counterexample is the Brandt semigroup with five elements B2, because the unipotency class of its zero element is not a subsemigroup. B2 is actually the quintessential epigroup that is not unipotently partitionable. An epigroup is unipotently partitionable if and only if it contains no subsemigroup that is an ideal extension of a unipotent epigroup by B2.[5] See also Special classes of semigroups References 1. Lex E. Renner (2005). Linear Algebraic Monoids. Springer. pp. 27–28. ISBN 978-3-540-24241-3. 2. A. V. Kelarev, Applications of epigroups to graded ring theory, Semigroup Forum, Volume 50, Number 1 (1995), 327–350 doi:10.1007/BF02573530 3. Eric Jespers; Jan Okninski (2007). Noetherian Semigroup Algebras. Springer. p. 16. ISBN 978-1-4020-5809-7. 4. Andrei V. Kelarev (2002). Ring Constructions and Applications. World Scientific. ISBN 978-981-02-4745-4. 5. Lev N. Shevrin (2002). "Epigroups". In Aleksandr Vasilʹevich Mikhalev and Günter Pilz (ed.). The Concise Handbook of Algebra. Springer. pp. 23–26. ISBN 978-0-7923-7072-7. 6. Peter M. Higgins (1992). Techniques of semigroup theory. Oxford University Press. p.
4. ISBN 978-0-19-853577-5. 7. Peter M. Higgins (1992). Techniques of semigroup theory. Oxford University Press. p. 50. ISBN 978-0-19-853577-5. 8. Peter M. Higgins (1992). Techniques of semigroup theory. Oxford University Press. p. 12. ISBN 978-0-19-853577-5. 9. Peter M. Higgins (1992). Techniques of semigroup theory. Oxford University Press. p. 28. ISBN 978-0-19-853577-5. 10. Peter M. Higgins (1992). Techniques of semigroup theory. Oxford University Press. p. 48. ISBN 978-0-19-853577-5.
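The defining property is easy to watch in action in a finite semigroup, since the powers of any element are eventually periodic and the cycle they enter forms a group. Below is a small Python sketch (an illustration with an arbitrarily chosen element x, not taken from any of the references) using the multiplicative semigroup of 2×2 matrices over GF(2), a finite instance of the matrix examples mentioned above:

    import numpy as np

    # Powers of an element of a finite semigroup are eventually periodic, and
    # the cycle they enter is closed under multiplication and contains an
    # idempotent, i.e. it is a subgroup -- exactly what the definition of an
    # epigroup asks for. Semigroup used here: 2x2 matrices over GF(2).
    def mul(a, b):
        return tuple(map(tuple, (np.array(a) @ np.array(b)) % 2))

    x = ((0, 1), (0, 0))              # an arbitrarily chosen element
    powers, p = [], x
    while p not in powers:            # collect x, x^2, x^3, ... until a repeat
        powers.append(p)
        p = mul(p, x)
    start = powers.index(p)           # the cycle begins at x^(start+1)
    cycle = set(powers[start:])
    assert all(mul(a, b) in cycle for a in cycle for b in cycle)  # closed
    assert any(mul(e, e) == e for e in cycle)                     # idempotent
    print(f"x^{start + 1} lies in a subgroup of size {len(cycle)}")

For the nilpotent element chosen here the subgroup is the trivial group containing the zero matrix; an invertible x would instead give the cyclic group it generates.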
Wikipedia
Strophoid In geometry, a strophoid is a curve generated from a given curve C and points A (the fixed point) and O (the pole) as follows: Let L be a variable line passing through O and intersecting C at K. Now let P1 and P2 be the two points on L whose distance from K is the same as the distance from A to K (i.e. KP1 = KP2 = AK). The locus of such points P1 and P2 is then the strophoid of C with respect to the pole O and fixed point A. Note that AP1 and AP2 are at right angles in this construction. In the special case where C is a line, A lies on C, and O is not on C, the curve is called an oblique strophoid. If, in addition, OA is perpendicular to C, then the curve is called a right strophoid, or simply strophoid by some authors. The right strophoid is also called the logocyclic curve or foliate. Equations Polar coordinates Let the curve C be given by $r=f(\theta ),$ where the origin is taken to be O. Let A be the point (a, b). If $K=(r\cos \theta ,\ r\sin \theta )$ is a point on the curve, the distance from K to A is $d={\sqrt {(r\cos \theta -a)^{2}+(r\sin \theta -b)^{2}}}={\sqrt {(f(\theta )\cos \theta -a)^{2}+(f(\theta )\sin \theta -b)^{2}}}.$ The points on the line OK have polar angle θ, and the points at distance d from K on this line are distance $f(\theta )\pm d$ from the origin. Therefore, the equation of the strophoid is given by $r=f(\theta )\pm {\sqrt {(f(\theta )\cos \theta -a)^{2}+(f(\theta )\sin \theta -b)^{2}}}$ Cartesian coordinates Let C be given parametrically by (x(t), y(t)). Let A be the point (a, b) and let O be the point (p, q). Then, by a straightforward application of the polar formula, the strophoid is given parametrically by: $u(t)=p+(x(t)-p)(1\pm n(t)),\ v(t)=q+(y(t)-q)(1\pm n(t)),$ where $n(t)={\sqrt {\frac {(x(t)-a)^{2}+(y(t)-b)^{2}}{(x(t)-p)^{2}+(y(t)-q)^{2}}}}.$ An alternative polar formula The complex nature of the formulas given above limits their usefulness in specific cases. There is an alternative form which is sometimes simpler to apply. This is particularly useful when C is a sectrix of Maclaurin with poles O and A. Let O be the origin and A be the point (a, 0). Let K be a point on the curve, θ the angle between OK and the x-axis, and $\vartheta $ the angle between AK and the x-axis. Suppose $\vartheta $ can be given as a function of θ, say $\vartheta =l(\theta ).$ Let ψ be the angle at K, so $\psi =\vartheta -\theta .$ We can determine r in terms of l using the law of sines.
Since ${r \over \sin \vartheta }={a \over \sin \psi },\ r=a{\frac {\sin \vartheta }{\sin \psi }}=a{\frac {\sin l(\theta )}{\sin(l(\theta )-\theta )}}.$ Let P1 and P2 be the points on OK that are distance AK from K, numbering so that $\psi =\angle P_{1}KA$ and $\pi -\psi =\angle AKP_{2}.$ △P1KA is isosceles with vertex angle ψ, so the remaining angles, $\angle AP_{1}K$ and $\angle KAP_{1},$ are ${\tfrac {\pi -\psi }{2}}.$ The angle between AP1 and the x-axis is then $l_{1}(\theta )=\vartheta +\angle KAP_{1}=\vartheta +(\pi -\psi )/2=\vartheta +(\pi -\vartheta +\theta )/2=(\vartheta +\theta +\pi )/2.$ By a similar argument, or simply using the fact that AP1 and AP2 are at right angles, the angle between AP2 and the x-axis is then $l_{2}(\theta )=(\vartheta +\theta )/2.$ The polar equation for the strophoid can now be derived from l1 and l2 using the formula above: ${\begin{aligned}&r_{1}=a{\frac {\sin l_{1}(\theta )}{\sin(l_{1}(\theta )-\theta )}}=a{\frac {\sin((l(\theta )+\theta +\pi )/2)}{\sin((l(\theta )+\theta +\pi )/2-\theta )}}=a{\frac {\cos((l(\theta )+\theta )/2)}{\cos((l(\theta )-\theta )/2)}}\\&r_{2}=a{\frac {\sin l_{2}(\theta )}{\sin(l_{2}(\theta )-\theta )}}=a{\frac {\sin((l(\theta )+\theta )/2)}{\sin((l(\theta )+\theta )/2-\theta )}}=a{\frac {\sin((l(\theta )+\theta )/2)}{\sin((l(\theta )-\theta )/2)}}\end{aligned}}$ C is a sectrix of Maclaurin with poles O and A when l is of the form $q\theta +\theta _{0};$ in that case l1 and l2 will have the same form, so the strophoid is either another sectrix of Maclaurin or a pair of such curves. In this case there is also a simple polar equation for the strophoid if the origin is shifted to the right by a. Specific cases Oblique strophoids Let C be a line through A. Then, in the notation used above, $l(\theta )=\alpha $ where α is a constant. Then $l_{1}(\theta )=(\theta +\alpha +\pi )/2$ and $l_{2}(\theta )=(\theta +\alpha )/2.$ The polar equations of the resulting strophoid, called an oblique strophoid, with the origin at O are then $r=a{\frac {\cos((\alpha +\theta )/2)}{\cos((\alpha -\theta )/2)}}$ and $r=a{\frac {\sin((\alpha +\theta )/2)}{\sin((\alpha -\theta )/2)}}.$ It is easy to check that these equations describe the same curve. Moving the origin to A (again, see Sectrix of Maclaurin) and replacing −a with a produces $r=a{\frac {\sin(2\theta -\alpha )}{\sin(\theta -\alpha )}},$ and rotating by $\alpha $ in turn produces $r=a{\frac {\sin(2\theta +\alpha )}{\sin(\theta )}}.$ In rectangular coordinates, with a change of constant parameters, this is $y(x^{2}+y^{2})=b(x^{2}-y^{2})+2cxy.$ This is a cubic curve and, by the expression in polar coordinates, it is rational. It has a crunode at (0, 0), and the line y = b is an asymptote. The right strophoid Putting $\alpha =\pi /2$ in $r=a{\frac {\sin(2\theta -\alpha )}{\sin(\theta -\alpha )}}$ gives $r=a{\frac {\cos 2\theta }{\cos \theta }}=a(2\cos \theta -\sec \theta ).$ This is called the right strophoid and corresponds to the case where C is the y-axis, A is the origin, and O is the point (a, 0). The Cartesian equation is $y^{2}=x^{2}(a-x)/(a+x).$ The curve resembles the Folium of Descartes,[1] and the line x = −a is an asymptote to two branches. The curve has two more asymptotes, in the plane with complex coordinates, given by $x\pm iy=-a.$ Circles Let C be a circle through O and A, where O is the origin and A is the point (a, 0). Then, in the notation used above, $l(\theta )=\alpha +\theta $ where $\alpha $ is a constant.
Then $l_{1}(\theta )=\theta +(\alpha +\pi )/2$ and $l_{2}(\theta )=\theta +\alpha /2.$ The polar equations of the resulting strophoid, with the origin at O, are then $r=a{\frac {\cos(\theta +\alpha /2)}{\cos(\alpha /2)}}$ and $r=a{\frac {\sin(\theta +\alpha /2)}{\sin(\alpha /2)}}.$ These are the equations of the two circles which also pass through O and A and form angles of $\pi /4$ with C at these points. See also • Conchoid • Cissoid References 1. Chisholm, Hugh, ed. (1911). "Logocyclic Curve, Strophoid or Foliate". Encyclopædia Britannica. Vol. 16 (11th ed.). Cambridge University Press. p. 919. • J. Dennis Lawrence (1972). A catalog of special plane curves. Dover Publications. pp. 51–53, 95, 100–104, 175. ISBN 0-486-60288-5. • E. H. Lockwood (1961). "Strophoids". A Book of Curves. Cambridge, England: Cambridge University Press. pp. 134–137. ISBN 0-521-05585-7. • R. C. Yates (1952). "Strophoids". A Handbook on Curves and Their Properties. Ann Arbor, MI: J. W. Edwards. pp. 217–220. • Weisstein, Eric W. "Strophoid". MathWorld. • Weisstein, Eric W. "Right Strophoid". MathWorld. • Sokolov, D.D. (2001) [1994], "Strophoid", Encyclopedia of Mathematics, EMS Press • O'Connor, John J.; Robertson, Edmund F., "Right Strophoid", MacTutor History of Mathematics Archive, University of St Andrews External links Media related to Strophoid at Wikimedia Commons
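As a quick numerical consistency check of the right strophoid formulas above, the following Python sketch samples the polar equation and verifies that the sampled points satisfy the Cartesian equation (the value a = 1 and the sampling range are arbitrary choices):

    import numpy as np

    # Sample the right strophoid r = a*cos(2*theta)/cos(theta) (origin at A)
    # and check the points against the Cartesian form y^2 = x^2 (a - x)/(a + x).
    a = 1.0
    theta = np.linspace(-1.2, 1.2, 25)        # stay away from theta = +-pi/2
    r = a * np.cos(2 * theta) / np.cos(theta)
    x, y = r * np.cos(theta), r * np.sin(theta)
    print(np.allclose(y**2, x**2 * (a - x) / (a + x)))   # True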
Wikipedia
Structurable algebra In abstract algebra, a structurable algebra is a certain kind of unital involutive non-associative algebra over a field. For example, all Jordan algebras are structurable algebras (with the trivial involution), as is any alternative algebra with involution, or any central simple algebra with involution. An involution here means a linear anti-homomorphism whose square is the identity.[1] Assume A is a unital non-associative algebra over a field, and $x\mapsto {\bar {x}}$ is an involution. If we define $V_{x,y}z:=(x{\bar {y}})z+(z{\bar {y}})x-(z{\bar {x}})y$, and $[x,y]=xy-yx$, then we say A is a structurable algebra if:[2] $[V_{x,y},V_{z,w}]=V_{V_{x,y}z,w}-V_{z,V_{y,x}w}.$ Structurable algebras were introduced by Allison in 1978.[3] The Kantor–Koecher–Tits construction produces a Lie algebra from any Jordan algebra, and this construction can be generalized so that a Lie algebra can be produced from a structurable algebra. Moreover, Allison proved over fields of characteristic zero that a structurable algebra is central simple if and only if the corresponding Lie algebra is central simple.[1] Another example of a structurable algebra is a 56-dimensional non-associative algebra originally studied by Brown in 1963, which can be constructed out of an Albert algebra.[4] When the base field is algebraically closed and of characteristic not 2 or 3, the automorphism group of such an algebra has identity component equal to the simply connected exceptional algebraic group of type E6.[5] References 1. R.D. Schafer (1985). "On Structurable algebras". Journal of Algebra. Vol. 92. pp. 400–412. 2. Skip Garibaldi (2001). "Structurable Algebras and Groups of Type E_6 and E_7". Journal of Algebra. Vol. 236. pp. 651–691. 3. Garibaldi, p.658 4. R. B. Brown (1963). "A new type of nonassociative algebra". Vol. 50. Proc. Natl. Acad. Sci. U.S. A. pp. 947–949. JSTOR 71948. 5. Garibaldi, p.660
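Since Jordan algebras with the trivial involution are structurable, the defining identity can be spot-checked numerically. The Python sketch below is an illustration only; the choice of the Jordan algebra of 3×3 real symmetric matrices, with product x∘y = (xy + yx)/2, is an assumption made for concreteness:

    import numpy as np

    # Trivial involution, so V_{x,y}z = (x*y)*z + (z*y)*x - (z*x)*y with the
    # Jordan product x*y = (xy + yx)/2; check the defining operator identity
    # [V_{x,y}, V_{z,w}] = V_{V_{x,y}z, w} - V_{z, V_{y,x}w} on random inputs.
    rng = np.random.default_rng(0)

    def sym():
        m = rng.standard_normal((3, 3))
        return (m + m.T) / 2                  # random symmetric matrix

    def jordan(a, b):
        return (a @ b + b @ a) / 2

    def V(x, y, z):
        return (jordan(jordan(x, y), z) + jordan(jordan(z, y), x)
                - jordan(jordan(z, x), y))

    x, y, z, w, u = (sym() for _ in range(5))
    lhs = V(x, y, V(z, w, u)) - V(z, w, V(x, y, u))
    rhs = V(V(x, y, z), w, u) - V(z, V(y, x, w), u)
    print(np.allclose(lhs, rhs))              # True, up to rounding error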
Wikipedia
Structural Ramsey theory In mathematics, structural Ramsey theory is a categorical generalisation of Ramsey theory, rooted in the idea that many important results of Ramsey theory have "similar" logical structures. The key observation is that these Ramsey-type theorems can be expressed as the assertion that a certain category (or class of finite structures) has the Ramsey property (defined below). Structural Ramsey theory began in the 1970s[1] with the work of Nešetřil and Rödl, and is intimately connected to Fraïssé theory. It received some renewed interest in the mid-2000s due to the discovery of the Kechris–Pestov–Todorčević correspondence, which connected structural Ramsey theory to topological dynamics. History Leeb is given credit[2] for inventing the idea of a Ramsey property in the early 1970s. The first publication of this idea appears to be Graham, Leeb and Rothschild's 1972 paper on the subject.[3] Key development of these ideas was done by Nešetřil and Rödl in their series of 1977[4] and 1983[5] papers, including the famous Nešetřil–Rödl theorem. This result was reproved independently by Abramson and Harrington,[6] and further generalised by Prömel.[7] More recently, Mašulović[8][9][10] and Solecki[11][12][13] have done some pioneering work in the field. Motivation This article will use the set theory convention that each natural number $n\in \mathbb {N} $ can be considered as the set of all natural numbers less than it: i.e. $n=\{0,1,\ldots ,n-1\}$. For any set $A$, an $r$-colouring of $A$ is an assignment of one of $r$ labels to each element of $A$. This can be represented as a function $\Delta :A\to r$ mapping each element to its label in $r=\{0,1,\ldots ,r-1\}$ (which this article will use), or equivalently as a partition of $A=A_{0}\sqcup \cdots \sqcup A_{r-1}$ into $r$ pieces. Here are some of the classic results of Ramsey theory: • (Finite) Ramsey's theorem: for every $k\leq m,r\in \mathbb {N} $, there exists $n\in \mathbb {N} $ such that for every $r$-colouring $\Delta :[n]^{(k)}\to r$ of all the $k$-element subsets of $n=\{0,1,\ldots ,n-1\}$, there exists a subset $A\subseteq n$, with $|A|=m$, such that $[A]^{(k)}$ is $\Delta $-monochromatic. • (Finite) van der Waerden's theorem: for every $m,r\in \mathbb {N} $, there exists $n\in \mathbb {N} $ such that for every $r$-colouring $\Delta :n\to r$ of $n$, there exists a $\Delta $-monochromatic arithmetic progression $\{a,a+d,a+2d,\ldots ,a+(m-1)d\}\subseteq n$ of length $m$. • Graham–Rothschild theorem: fix a finite alphabet $L=\{a_{0},a_{1},\ldots ,a_{d-1}\}$. A $k$-parameter word of length $n$ over $L$ is an element $w\in (L\cup \{x_{0},x_{1},\ldots ,x_{k-1}\})^{n}$, such that all of the $x_{i}$ appear, and their first appearances are in increasing order. The set of all $k$-parameter words of length $n$ over $L$ is denoted by $\textstyle [L]{\binom {n}{k}}$. Given $\textstyle w\in [L]{\binom {n}{m}}$ and $\textstyle v\in [L]{\binom {m}{k}}$, we form their composition $\textstyle w\circ v\in [L]{\binom {n}{k}}$ by replacing every occurrence of $x_{i}$ in $w$ with the $i$th entry of $v$. Then, the Graham–Rothschild theorem states that for every $k\leq m,r\in \mathbb {N} $, there exists $n\in \mathbb {N} $ such that for every $r$-colouring $\textstyle \Delta :[L]{\binom {n}{k}}\to r$ of all the $k$-parameter words of length $n$, there exists $\textstyle w\in [L]{\binom {n}{m}}$, such that $\textstyle w\circ [L]{\binom {m}{k}}=\{w\circ v:v\in [L]{\binom {m}{k}}\}$ (i.e.
all the $k$-parameter subwords of $w$) is $\Delta $-monochromatic. • (Finite) Folkman's theorem: for every $m,r\in \mathbb {N} $, there exists $n\in \mathbb {N} $ such that for every $r$-colouring $\Delta :n\to r$ of $n$, there exists a subset $A\subseteq n$, with $|A|=m$, such that $\textstyle {\big (}\sum _{k\in A}k{\big )}<n$, and $\textstyle \operatorname {FS} (A)=\{\sum _{k\in B}k:B\in {\mathcal {P}}(A)\setminus \varnothing \}$ is $\Delta $-monochromatic. These "Ramsey-type" theorems all have a similar idea: we fix two integers $k$ and $m$, and a set of colours $r$. Then, we want to show there is some $n$ large enough, such that for every $r$-colouring of the "substructures" of size $k$ inside $n$, we can find a suitable "structure" $A$ inside $n$, of size $m$, such that all the "substructures" $B$ of $A$ with size $k$ have the same colour. What types of structures are allowed depends on the theorem in question, and this turns out to be virtually the only difference between them. This idea of a "Ramsey-type theorem" lends itself to the more precise notion of the Ramsey property (below). The Ramsey property Let $\mathbf {C} $ be a category. $\mathbf {C} $ has the Ramsey property if for every natural number $r$, and all objects $A,B$ in $\mathbf {C} $, there exists another object $D$ in $\mathbf {C} $, such that for every $r$-colouring $\Delta :\operatorname {Hom} (A,D)\to r$, there exists a morphism $f:B\to D$ which is $\Delta $-monochromatic, i.e. the set $f\circ \operatorname {Hom} (A,B)={\big \{}f\circ g:g\in \operatorname {Hom} (A,B){\big \}}$ is $\Delta $-monochromatic.[10] Often, $\mathbf {C} $ is taken to be a class of finite ${\mathcal {L}}$-structures over some fixed language ${\mathcal {L}}$, with embeddings as morphisms. In this case, instead of colouring morphisms, one can think of colouring "copies" of $A$ in $D$, and then finding a copy of $B$ in $D$, such that all copies of $A$ in this copy of $B$ are monochromatic. This may lend itself more intuitively to the earlier idea of a "Ramsey-type theorem". There is also a notion of a dual Ramsey property; $\mathbf {C} $ has the dual Ramsey property if its dual category $\mathbf {C} ^{\mathrm {op} }$ has the Ramsey property as above. More concretely, $\mathbf {C} $ has the dual Ramsey property if for every natural number $r$, and all objects $A,B$ in $\mathbf {C} $, there exists another object $D$ in $\mathbf {C} $, such that for every $r$-colouring $\Delta :\operatorname {Hom} (D,A)\to r$, there exists a morphism $f:D\to B$ for which $\operatorname {Hom} (B,A)\circ f$ is $\Delta $-monochromatic. Examples • Ramsey's theorem: the class of all finite chains, with order-preserving maps as morphisms, has the Ramsey property. • van der Waerden's theorem: in the category whose objects are finite ordinals, and whose morphisms are affine maps $x\mapsto a+dx$ for $a,d\in \mathbb {N} $, $d\neq 0$, the Ramsey property holds for $A=1$. • Hales–Jewett theorem: let $L$ be a finite alphabet, and for each $k\in \mathbb {N} $, let $X_{k}=\{x_{0},\ldots ,x_{k-1}\}$ be a set of $k$ variables. Let $\mathbf {GR} $ be the category whose objects are $A_{k}=L\cup X_{k}$ for each $k\in \mathbb {N} $, and whose morphisms $A_{n}\to A_{k}$, for $n\geq k$, are functions $f:X_{n}\to A_{k}$ which are rigid and surjective on $X_{k}\subseteq A_{k}=\operatorname {codom} f$. Then, $\mathbf {GR} $ has the dual Ramsey property for $A=A_{0}$ (and $B=A_{1}$, depending on the formulation).
• Graham–Rothschild theorem: the category $\mathbf {GR} $ defined above has the dual Ramsey property. The Kechris–Pestov–Todorčević correspondence In 2005, Kechris, Pestov and Todorčević[14] discovered the following correspondence (hereafter called the KPT correspondence) between structural Ramsey theory, Fraïssé theory, and ideas from topological dynamics. Let $G$ be a topological group. For a topological space $X$, a $G$-flow (denoted $G\curvearrowright X$) is a continuous action of $G$ on $X$. We say that $G$ is extremely amenable if any $G$-flow $G\curvearrowright X$ on a compact space $X$ admits a fixed point $x\in X$, i.e. the stabiliser of $x$ is $G$ itself. For a Fraïssé structure $\mathbf {F} $, its automorphism group $\operatorname {Aut} (\mathbf {F} )$ can be considered a topological group, given the topology of pointwise convergence, or equivalently, the subspace topology induced on $\operatorname {Aut} (\mathbf {F} )$ by the space $\mathbf {F} ^{\mathbf {F} }=\{f:\mathbf {F} \to \mathbf {F} \}$ with the product topology. The following theorem illustrates the KPT correspondence: Theorem (KPT). For a Fraïssé structure $\mathbf {F} $, the following are equivalent: 1. The group $\operatorname {Aut} (\mathbf {F} )$ of automorphisms of $\mathbf {F} $ is extremely amenable. 2. The class $\operatorname {Age} (\mathbf {F} )$ has the Ramsey property. See also • Ramsey theory • Fraïssé's theorem • Age (model theory) References 1. Van Thé, Lionel Nguyen (2014-12-10). "A survey on structural Ramsey theory and topological dynamics with the Kechris–Pestov–Todorcevic correspondence in mind". arXiv:1412.3254 [math.CO]. 2. Larson, Jean A. (2012-01-01), "Infinite Combinatorics", in Gabbay, Dov M.; Kanamori, Akihiro; Woods, John (eds.), Handbook of the History of Logic, Sets and Extensions in the Twentieth Century, vol. 6, North-Holland, pp. 145–357, doi:10.1016/b978-0-444-51621-3.50003-7, ISBN 9780444516213, retrieved 2019-11-30 3. Graham, R. L.; Leeb, K.; Rothschild, B. L. (1972). "Ramsey's theorem for a class of categories". Advances in Mathematics. 8 (3): 417–433. doi:10.1016/0001-8708(72)90005-9. ISSN 0001-8708. 4. Nešetřil, Jaroslav; Rödl, Vojtěch (May 1977). "Partitions of finite relational and set systems". Journal of Combinatorial Theory, Series A. 22 (3): 289–312. doi:10.1016/0097-3165(77)90004-8. ISSN 0097-3165. 5. Nešetřil, Jaroslav; Rödl, Vojtěch (1983-03-01). "Ramsey classes of set systems". Journal of Combinatorial Theory, Series A. 34 (2): 183–201. doi:10.1016/0097-3165(83)90055-9. ISSN 0097-3165. 6. Abramson, Fred G.; Harrington, Leo A. (September 1978). "Models Without Indiscernibles". The Journal of Symbolic Logic. 43 (3): 572. doi:10.2307/2273534. ISSN 0022-4812. JSTOR 2273534. S2CID 1101279. 7. Prömel, Hans Jürgen (July 1985). "Induced partition properties of combinatorial cubes". Journal of Combinatorial Theory, Series A. 39 (2): 177–208. doi:10.1016/0097-3165(85)90036-6. ISSN 0097-3165. 8. Masulovic, Dragan; Scow, Lynn (2017). "Categorical equivalence and the Ramsey property for finite powers of a primal algebra". Algebra Universalis. 78 (2): 159–179. arXiv:1506.01221. doi:10.1007/s00012-017-0453-0. S2CID 125159388. 9. Masulovic, Dragan (2018). "Pre-adjunctions and the Ramsey property". European Journal of Combinatorics. 70: 268–283. arXiv:1609.06832. doi:10.1016/j.ejc.2018.01.006. S2CID 19216185. 10. Mašulović, Dragan (2020). "On Dual Ramsey Theorems for Relational Structures". Czechoslovak Mathematical Journal. 70 (2): 553–585. arXiv:1707.09544. 
doi:10.21136/CMJ.2020.0408-18. S2CID 125310940. 11. Solecki, Sławomir (August 2010). "A Ramsey theorem for structures with both relations and functions". Journal of Combinatorial Theory, Series A. 117 (6): 704–714. doi:10.1016/j.jcta.2009.12.004. ISSN 0097-3165. 12. Solecki, Slawomir (2011-04-20). "Abstract approach to finite Ramsey theory and a self-dual Ramsey theorem". arXiv:1104.3950 [math.CO]. 13. Solecki, Sławomir (2015-02-16). "Dual Ramsey theorem for trees". arXiv:1502.04442 [math.CO]. 14. Kechris, A. S.; Pestov, V. G.; Todorcevic, S. (February 2005). "Fraïssé Limits, Ramsey Theory, and topological dynamics of automorphism groups" (PDF). Geometric and Functional Analysis. 15 (1): 106–189. doi:10.1007/s00039-005-0503-1. ISSN 1016-443X. S2CID 6937893.
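The smallest nontrivial instance of the finite Ramsey theorem quoted in the Motivation section (k = 2, m = 3, r = 2, for which n = 6 suffices) is small enough to verify exhaustively. A brute-force Python sketch:

    from itertools import combinations, product

    # Exhaustively check: every 2-colouring of the 2-element subsets of
    # {0,...,5} yields a 3-element subset all of whose pairs share a colour.
    n, pairs = 6, list(combinations(range(6), 2))

    def has_mono_triple(colour):
        return any(colour[(a, b)] == colour[(b, c)] == colour[(a, c)]
                   for a, b, c in combinations(range(n), 3))

    assert all(has_mono_triple(dict(zip(pairs, c)))
               for c in product(range(2), repeat=len(pairs)))
    print("n = 6 suffices for k = 2, m = 3, r = 2")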
Wikipedia
Structural complexity theory In computational complexity theory, a branch of computer science, structural complexity theory, or simply structural complexity, is the study of complexity classes, rather than of the computational complexity of individual problems and algorithms. It involves research on both the internal structure of various complexity classes and the relations between different complexity classes.[1] History The theory emerged from (so far unsuccessful) attempts to resolve the first and still most important question of this kind, the P = NP problem. Most of the research is based on the assumption that P is not equal to NP, and on the more far-reaching conjecture that the polynomial time hierarchy of complexity classes is infinite.[1] Important results The compression theorem Main article: Compression theorem The compression theorem is an important theorem about the complexity of computable functions. The theorem states that there exists no largest complexity class, with computable boundary, which contains all computable functions. Space hierarchy theorems Main article: Space hierarchy theorem The space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems. Time hierarchy theorems Main article: Time hierarchy theorem The time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved in n² time but not in n time. Valiant–Vazirani theorem Main article: Valiant–Vazirani theorem The Valiant–Vazirani theorem is a theorem in computational complexity theory. It was proven by Leslie Valiant and Vijay Vazirani in their 1986 paper NP is as easy as detecting unique solutions.[2] The theorem states that if there is a polynomial time algorithm for Unambiguous-SAT, then NP = RP. The proof is based on the Mulmuley–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science. Sipser–Lautemann theorem Main article: Sipser–Lautemann theorem The Sipser–Lautemann theorem or Sipser–Gács–Lautemann theorem states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy, and more specifically in Σ2 ∩ Π2. Savitch's theorem Main article: Savitch's theorem Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function $f\in \Omega (\log(n))$, ${\mathsf {NSPACE}}\left(f\left(n\right)\right)\subseteq {\mathsf {DSPACE}}\left(\left(f\left(n\right)\right)^{2}\right).$ Toda's theorem Main article: Toda's theorem Toda's theorem is a result that was proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy" (1991) and was given the 1998 Gödel Prize. The theorem states that the entire polynomial hierarchy PH is contained in P^PP; this implies a closely related statement, that PH is contained in P^#P.
Immerman–Szelepcsényi theorem Main article: Immerman–Szelepcsényi theorem The Immerman–Szelepcsényi theorem was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem. Research topics Major directions of research in this area include:[1] • study of implications stemming from various unsolved problems about complexity classes • study of various types of resource-restricted reductions and the corresponding complete languages • study of consequences of various restrictions on and mechanisms of storage and access to data References 1. Juris Hartmanis, "New Developments in Structural Complexity Theory" (invited lecture), Proc. 15th International Colloquium on Automata, Languages and Programming, 1988 (ICALP 88), Lecture Notes in Computer Science, vol. 317 (1988), pp. 271-286. 2. Valiant, L.; Vazirani, V. (1986). "NP is as easy as detecting unique solutions" (PDF). Theoretical Computer Science. 47: 85–93. doi:10.1016/0304-3975(86)90135-0.
Wikipedia
Structural cut-off The structural cut-off is a concept in network science which imposes a degree cut-off in the degree distribution of a finite size network due to structural limitations (such as the simple graph property). Networks with vertices with degree higher than the structural cut-off will display structural disassortativity. Definition The structural cut-off is a maximum degree cut-off that arises from the structure of a finite size network. Let $E_{kk'}$ be the number of edges between all vertices of degree $k$ and $k'$ if $k\neq k'$, and twice the number if $k=k'$. Given that multiple edges between two vertices are not allowed, $E_{kk'}$ is bounded by the maximum number of edges between two degree classes $m_{kk'}$. Then, the ratio can be written $r_{kk'}\equiv {\frac {E_{kk'}}{m_{kk'}}}={\frac {\langle k\rangle P(k,k')}{\min\{kP(k),k'P(k'),NP(k)P(k')\}}}$, where $\langle k\rangle $ is the average degree of the network, $N$ is the total number of vertices, $P(k)$ is the probability a randomly chosen vertex will have degree $k$, and $P(k,k')=E_{kk'}/\langle k\rangle N$ is the probability that a randomly picked edge will connect on one side a vertex with degree $k$ with a vertex of degree $k'$. To be in the physical region, $r_{kk'}\leq 1$ must be satisfied. The structural cut-off $k_{s}$ is then defined by $r_{k_{s}k_{s}}=1$.[1] Structural cut-off for neutral networks The structural cut-off plays an important role in neutral (or uncorrelated) networks, which do not display any assortativity. The cut-off takes the form $k_{s}\sim (\langle k\rangle N)^{1/2}$, which is finite in any real network. Thus, if vertices of degree $k\geq k_{s}$ exist, it is physically impossible to attach enough edges between them to maintain the neutrality of the network. Structural disassortativity in scale-free networks In a scale-free network the degree distribution is described by a power law with characteristic exponent $\gamma $, $P(k)\sim k^{-\gamma }$.
In a finite scale-free network, the maximum degree of any vertex (also called the natural cut-off) scales as $k_{\text{max}}\sim N^{\frac {1}{\gamma -1}}$. Then, networks with $\gamma <3$, which is the regime of most real networks, will have $k_{\text{max}}$ diverging faster than the structural cut-off $k_{s}\sim N^{1/2}$ of a neutral network. This has the important implication that an otherwise neutral network may show disassortative degree correlations if $k_{\text{max}}>k_{s}$. This disassortativity is not a result of any microscopic property of the network, but is purely due to the structural limitations of the network. In the analysis of networks, for a degree correlation to be meaningful, it must be checked that the correlations are not of structural origin. Impact of the structural cut-off Generated networks A network generated randomly by a network generation algorithm is in general not free of structural disassortativity. If a neutral network is required, then structural disassortativity must be avoided. There are a few methods by which this can be done:[2] 1. Allow multiple edges between the same two vertices. While this means that the network is no longer a simple network, it allows for sufficient edges to maintain neutrality. 2. Simply remove all vertices with degree $k>k_{s}$. This guarantees that no vertex is subject to structural limitations in its edges, and the network is free of structural disassortativity. Real networks In some real networks, the same methods as for generated networks can also be used. In many cases, however, it may not make sense to consider multiple edges between two vertices, or such information is not available. The high degree vertices (hubs) may also be an important part of the network that cannot be removed without changing other fundamental properties. To determine whether the assortativity or disassortativity of a network is of structural origin, the network can be compared with a degree-preserving randomized version of itself (without multiple edges). Then any assortativity measure of the randomized version will be a result of the structural cut-off. If the real network displays any additional assortativity or disassortativity beyond the structural disassortativity, then it is a meaningful property of the real network. Other quantities that depend on the degree correlations, such as some definitions of the rich-club coefficient, will also be impacted by the structural cut-off.[3] See also • Assortativity • Degree distribution • Complex network • Rich-club coefficient References 1. Boguna, M.; Pastor-Satorras, R.; Vespignani, A. (1 March 2004). "Cut-offs and finite size effects in scale-free networks". The European Physical Journal B. 38 (2): 205–209. arXiv:cond-mat/0311650. Bibcode:2004EPJB...38..205B. doi:10.1140/epjb/e2004-00038-8. 2. Catanzaro, Michele; Boguñá, Marián; Pastor-Satorras, Romualdo (February 2005). "Generation of uncorrelated random scale-free networks". Physical Review E. 71 (2). arXiv:cond-mat/0408110. Bibcode:2005PhRvE..71b7103C. doi:10.1103/PhysRevE.71.027103. 3. Zhou, S; Mondragón, R J (28 June 2007). "Structural constraints in complex networks". New Journal of Physics. 9 (6): 173. arXiv:physics/0702096. Bibcode:2007NJPh....9..173Z. doi:10.1088/1367-2630/9/6/173.
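The competition between the natural and structural cut-offs can be made concrete numerically. The following Python sketch (an illustration with arbitrarily chosen network sizes, ignoring the ⟨k⟩ prefactor in the structural cut-off) tabulates both scales:

    # Compare the natural cut-off N^(1/(gamma-1)) with the structural
    # cut-off ~ N^(1/2); for gamma < 3 the former grows faster, so the
    # largest hubs exceed k_s and structural disassortativity appears.
    for N in (10**4, 10**6, 10**8):
        for gamma in (2.2, 2.5, 3.0):
            k_nat = N ** (1.0 / (gamma - 1.0))
            k_s = N ** 0.5
            print(f"N={N:>9}  gamma={gamma}:  k_max ~ {k_nat:12.0f}  k_s ~ {k_s:8.0f}")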
Wikipedia
Structural identifiability In the area of system identification, a dynamical system is structurally identifiable if it is possible to infer its unknown parameters by measuring its output over time. This problem arises in many branches of applied mathematics, since dynamical systems (such as the ones described by ordinary differential equations) are commonly utilized to model physical processes, and these models contain unknown parameters that are typically estimated using experimental data.[1][2][3] However, in certain cases, the model structure may not permit a unique solution for this estimation problem, even when the data is continuous and free from noise. To avoid potential issues, it is recommended to verify the uniqueness of the solution in advance, prior to conducting any actual experiments. The lack of structural identifiability implies that there are multiple solutions for the problem of system identification, and the impossibility of distinguishing between these solutions suggests that the system has poor forecasting power as a model.[4] Examples Linear time-invariant system Source[2] Consider a linear time-invariant system with the following state-space representation: ${\begin{aligned}{\dot {x}}_{1}(t)&=-\theta _{1}x_{1},\\{\dot {x}}_{2}(t)&=\theta _{1}x_{1},\\y(t)&=\theta _{2}x_{2},\end{aligned}}$ and with initial conditions given by $x_{1}(0)=\theta _{3}$ and $x_{2}(0)=0$. The solution of the output $y$ is $y(t)=\theta _{2}\theta _{3}e^{-\theta _{1}t}\left(e^{\theta _{1}t}-1\right),$ which implies that the parameters $\theta _{2}$ and $\theta _{3}$ are not structurally identifiable. For instance, the parameters $\theta _{1}=1,\theta _{2}=1,\theta _{3}=1$ generate the same output as the parameters $\theta _{1}=1,\theta _{2}=2,\theta _{3}=0.5$. Non-linear system Source[3] A model of a possible glucose homeostasis mechanism is given by the differential equations[5] ${\begin{aligned}&{\dot {G}}=u(0)+u-(c+s_{\mathrm {i} }\,I)G,\\&{\dot {\beta }}=\beta \left({\frac {1.4583\cdot 10^{-5}}{1+\left({\frac {8.4}{G}}\right)^{1.7}}}-{\frac {1.7361\cdot 10^{-5}}{1+\left({\frac {G}{8.4}}\right)^{8.5}}}\right),\\&{\dot {I}}=p\,\beta \,{\frac {G^{2}}{\alpha ^{2}+G^{2}}}-\gamma \,I,\end{aligned}}$ where (c, si, p, α, γ) are parameters of the system, and the states are the plasma glucose concentration G, the plasma insulin concentration I, and the beta-cell functional mass β. It is possible to show that the parameters p and si are not structurally identifiable: any numerical choices of the parameters p and si that have the same product p·si are indistinguishable.[3] Practical identifiability Structural identifiability is assessed by analyzing the dynamical equations of the system, and does not take into account possible noise in the measurement of the output. In contrast, practical identifiability also takes measurement noise into account.[1][6] Other related notions The notion of structural identifiability is closely related to observability, which refers to the capacity of inferring the state of the system by measuring the trajectories of the system output. It is also closely related to data informativity, which refers to the proper selection of inputs that enables the inference of the unknown parameters.[7] The (lack of) structural identifiability is also important in the context of dynamical compensation of physiological control systems.
These systems should ensure a precise dynamical response despite variations in certain parameters.[8][9] In other words, while in the field of system identification unidentifiability is considered a negative property, in the context of dynamical compensation, unidentifiability becomes a desirable property.[9] Software There are many software tools that can be used for analyzing the identifiability of a system, including non-linear systems:[10] • PottersWheel: MATLAB toolbox that uses profile likelihood for structural and practical identifiability analysis. • Julia library for assessing structural parameter identifiability.[11] • STRIKE-GOLDD: MATLAB toolbox for structural identifiability analysis.[12] See also • System identification • Observability • Model order reduction • Adaptive control References 1. Miao, Hongyu; Xia, Xiaohua; Perelson, Alan S.; Wu, Hulin (2011). "On Identifiability of Nonlinear ODE Models and Applications in Viral Dynamics". SIAM Review. 53 (1): 3–39. doi:10.1137/090757009. ISSN 0036-1445. PMC 3140286. PMID 21785515. (Erratum: doi:10.1137/23M1568958) 2. Raue, A.; Karlsson, J.; Saccomani, M. P.; Jirstrand, M.; Timmer, J. (2014-05-15). "Comparison of approaches for parameter identifiability analysis of biological systems". Bioinformatics. 30 (10): 1440–1448. doi:10.1093/bioinformatics/btu006. ISSN 1367-4803. PMID 24463185. 3. Villaverde, Alejandro F. (2019-01-01). "Observability and Structural Identifiability of Nonlinear Biological Systems". Complexity. 2019: 1–12. doi:10.1155/2019/8497093. ISSN 1076-2787. 4. Fiacchini, Mirko; Alamir, Mazen (2021). "The Ockham's razor applied to COVID-19 model fitting French data". Annual Reviews in Control. 51: 500–510. doi:10.1016/j.arcontrol.2021.01.002. PMC 7846253. PMID 33551664. 5. Karin, Omer; Swisa, Avital; Glaser, Benjamin; Dor, Yuval; Alon, Uri (2016). "Dynamical compensation in physiological circuits". Molecular Systems Biology. 12 (11): 886. doi:10.15252/msb.20167216. ISSN 1744-4292. PMC 5147051. PMID 27875241. 6. Raue, A.; Kreutz, C.; Maiwald, T.; Bachmann, J.; Schilling, M.; Klingmüller, U.; Timmer, J. (2009-08-01). "Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood". Bioinformatics. 25 (15): 1923–1929. doi:10.1093/bioinformatics/btp358. ISSN 1460-2059. PMID 19505944. 7. Gevers, Michel; Bazanella, Alexandre S.; Coutinho, Daniel F.; Dasgupta, Soura (2013). "Identifiability and excitation of polynomial systems". 52nd IEEE Conference on Decision and Control. Firenze: IEEE. pp. 4278–4283. doi:10.1109/CDC.2013.6760547. ISBN 978-1-4673-5717-3. S2CID 7796419. 8. Karin, Omer; Swisa, Avital; Glaser, Benjamin; Dor, Yuval; Alon, Uri (2016). "Dynamical compensation in physiological circuits". Molecular Systems Biology. 12 (11): 886. doi:10.15252/msb.20167216. ISSN 1744-4292. PMC 5147051. PMID 27875241. 9. Sontag, Eduardo D. (2017-04-06). Komarova, Natalia L. (ed.). "Dynamic compensation, parameter identifiability, and equivariances". PLOS Computational Biology. 13 (4): e1005447. Bibcode:2017PLSCB..13E5447S. doi:10.1371/journal.pcbi.1005447. ISSN 1553-7358. PMC 5398758. PMID 28384175. 10. Barreiro, Xabier Rey; Villaverde, Alejandro F. (2023-01-31). "Benchmarking tools for a priori identifiability analysis". Bioinformatics. 39 (2): btad065. doi:10.1093/bioinformatics/btad065. ISSN 1367-4811. PMC 9913045. PMID 36721336. 11. Dong, Ruiwen; Goodbrake, Christian; Harrington, Heather A.; Pogudin, Gleb (2023-03-31).
"Differential Elimination for Dynamical Models via Projections with Applications to Structural Identifiability". SIAM Journal on Applied Algebra and Geometry. 7 (1): 194–235. arXiv:2111.00991. doi:10.1137/22M1469067. ISSN 2470-6566. S2CID 245650629. 12. Díaz-Seoane, Sandra; Rey-Barreiro, Xabier; Villaverde, Alejandro F. (2022-07-15). "STRIKE-GOLDD 4.0: user-friendly, efficient analysis of structural identifiability and observability". arXiv:2207.07346 [eess.SY].
Wikipedia
Structural proof theory In mathematical logic, structural proof theory is the subdiscipline of proof theory that studies proof calculi that support a notion of analytic proof, a kind of proof whose semantic properties are exposed. When all the theorems of a logic formalised in a structural proof theory have analytic proofs, then the proof theory can be used to demonstrate such things as consistency, provide decision procedures, and allow mathematical or computational witnesses to be extracted as counterparts to theorems, the kind of task that is more often given to model theory. Analytic proof Main article: Analytic proof The notion of analytic proof was introduced into proof theory by Gerhard Gentzen for the sequent calculus; the analytic proofs are those that are cut-free. His natural deduction calculus also supports a notion of analytic proof, as was shown by Dag Prawitz; the definition is slightly more complex: the analytic proofs are the normal forms, which are related to the notion of normal form in term rewriting. Structures and connectives The term structure in structural proof theory comes from a technical notion introduced in the sequent calculus: the sequent calculus represents the judgement made at any stage of an inference using special, extra-logical operators called structural operators: in $A_{1},\dots ,A_{m}\vdash B_{1},\dots ,B_{n}$, the commas to the left of the turnstile are operators normally interpreted as conjunctions, those to the right as disjunctions, whilst the turnstile symbol itself is interpreted as an implication. However, it is important to note that there is a fundamental difference in behaviour between these operators and the logical connectives they are interpreted by in the sequent calculus: the structural operators are used in every rule of the calculus, and are not considered when asking whether the subformula property applies. Furthermore, the logical rules go one way only: logical structure is introduced by logical rules, and cannot be eliminated once created, while structural operators can be introduced and eliminated in the course of a derivation. The idea of looking at the syntactic features of sequents as special, non-logical operators is relatively recent, and was forced by innovations in proof theory: when the structural operators are as simple as in Gentzen's original sequent calculus there is little need to analyse them, but proof calculi of deep inference such as display logic (introduced by Nuel Belnap in 1982)[1] support structural operators as complex as the logical connectives, and demand sophisticated treatment. Cut-elimination in the sequent calculus Main article: Cut-elimination Natural deduction and the formulae-as-types correspondence Main article: Natural deduction Hypersequents Main article: Hypersequent The hypersequent framework extends the ordinary sequent structure to a multiset of sequents, using an additional structural connective | (called the hypersequent bar) to separate different sequents. It has been used to provide analytic calculi for, e.g., modal, intermediate and substructural logics.[2][3][4] A hypersequent is a structure $\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}$ where each $\Gamma _{i}\vdash \Delta _{i}$ is an ordinary sequent, called a component of the hypersequent. As for sequents, hypersequents can be based on sets, multisets, or sequences, and the components can be single-conclusion or multi-conclusion sequents.
The formula interpretation of the hypersequents depends on the logic under consideration, but is nearly always some form of disjunction. The most common interpretations are as a simple disjunction $(\bigwedge \Gamma _{1}\rightarrow \bigvee \Delta _{1})\lor \dots \lor (\bigwedge \Gamma _{n}\rightarrow \bigvee \Delta _{n})$ for intermediate logics, or as a disjunction of boxes $\Box (\bigwedge \Gamma _{1}\rightarrow \bigvee \Delta _{1})\lor \dots \lor \Box (\bigwedge \Gamma _{n}\rightarrow \bigvee \Delta _{n})$ for modal logics. In line with the disjunctive interpretation of the hypersequent bar, essentially all hypersequent calculi include the external structural rules, in particular the external weakening rule ${\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Sigma \vdash \Pi }}$ and the external contraction rule ${\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Gamma _{n}\vdash \Delta _{n}}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}}}$ The additional expressivity of the hypersequent framework is provided by rules manipulating the hypersequent structure. An important example is provided by the modalised splitting rule[3] ${\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Box \Sigma ,\Omega \vdash \Box \Pi ,\Theta }{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Box \Sigma \vdash \Box \Pi \mid \Omega \vdash \Theta }}$ for modal logic S5, where $\Box \Sigma $ means that every formula in $\Box \Sigma $ is of the form $\Box A$. Another example is given by the communication rule for the intermediate logic LC[3] ${\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Omega \vdash A\qquad \Sigma _{1}\vdash \Pi _{1}\mid \dots \mid \Sigma _{m}\vdash \Pi _{m}\mid \Theta \vdash B}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Sigma _{1}\vdash \Pi _{1}\mid \dots \mid \Sigma _{m}\vdash \Pi _{m}\mid \Omega \vdash B\mid \Theta \vdash A}}$ Note that in the communication rule the components are single-conclusion sequents. Nested sequent calculus Main article: Nested sequent calculus The nested sequent calculus is a formalisation that resembles a 2-sided calculus of structures. Notes 1. N. D. Belnap. "Display Logic." Journal of Philosophical Logic, 11(4), 375–417, 1982. 2. Minc, G.E. (1971) [Originally published in Russian in 1968]. "On some calculi of modal logic". The Calculi of Symbolic Logic. Proceedings of the Steklov Institute of Mathematics. AMS. 98: 97–124. 3. Avron, Arnon (1996). "The method of hypersequents in the proof theory of propositional non-classical logics" (PDF). Logic: From Foundations to Applications: European Logic Colloquium. Clarendon Press: 1–32. 4. Pottinger, Garrel (1983). "Uniform, cut-free formulations of T, S4, and S5". Journal of Symbolic Logic. 48 (3): 900. doi:10.2307/2273495. JSTOR 2273495. S2CID 250346853. References • Sara Negri; Jan Von Plato (2001). Structural proof theory. Cambridge University Press. ISBN 978-0-521-79307-0. • Anne Sjerp Troelstra; Helmut Schwichtenberg (2000). Basic proof theory (2nd ed.). Cambridge University Press. ISBN 978-0-521-77911-1.
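As a toy illustration of the external structural rules above (a sketch with an ad hoc encoding in Python, not any standard proof-assistant representation), a component can be modelled as a pair of tuples of formula strings and a hypersequent as a list of components:

    # A component Gamma |- Delta is a pair of tuples of formula strings; a
    # hypersequent G1 |- D1 | ... | Gn |- Dn is a list of such components.
    def external_weakening(h, sigma, pi):
        # from H infer H | Sigma |- Pi
        return h + [(tuple(sigma), tuple(pi))]

    def external_contraction(h):
        # from H | G |- D | G |- D infer H | G |- D
        assert len(h) >= 2 and h[-1] == h[-2], "last two components must coincide"
        return h[:-1]

    h = [(("A",), ("B",))]
    h = external_weakening(h, ("C",), ("D",))
    print(h)                                    # [(('A',), ('B',)), (('C',), ('D',))]
    print(external_contraction(h + [h[-1]]))    # contracts the duplicate back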
Wikipedia
Structural induction Structural induction is a proof method that is used in mathematical logic (e.g., in the proof of Łoś' theorem), computer science, graph theory, and some other mathematical fields. It is a generalization of mathematical induction over natural numbers and can be further generalized to arbitrary Noetherian induction. Structural recursion is a recursion method bearing the same relationship to structural induction as ordinary recursion bears to ordinary mathematical induction. Structural induction is used to prove that some proposition P(x) holds for all x of some sort of recursively defined structure, such as formulas, lists, or trees. A well-founded partial order is defined on the structures ("subformula" for formulas, "sublist" for lists, and "subtree" for trees). The structural induction proof is a proof that the proposition holds for all the minimal structures and that if it holds for the immediate substructures of a certain structure S, then it must hold for S also. (Formally speaking, this then satisfies the premises of an axiom of well-founded induction, which asserts that these two conditions are sufficient for the proposition to hold for all x.) A structurally recursive function uses the same idea to define a recursive function: "base cases" handle each minimal structure, and a recursion rule handles the rest. Structural recursion is usually proved correct by structural induction; in particularly easy cases, the inductive step is often left out. The length and ++ functions in the example below are structurally recursive. For example, if the structures are lists, one usually introduces the partial order "<", in which L < M whenever list L is the tail of list M. Under this ordering, the empty list [] is the unique minimal element. A structural induction proof of some proposition P(L) then consists of two parts: a proof that P([]) is true and a proof that if P(L) is true for some list L, and if L is the tail of list M, then P(M) must also be true. In general, there may be more than one base case and/or more than one inductive case, depending on how the function or structure was constructed. In those cases, a structural induction proof of some proposition P(L) then consists of: 1. a proof that P(BC) is true for each base case BC, 2. a proof that if P(I) is true for some instance I, and M can be obtained from I by applying any one recursive rule once, then P(M) must also be true. Examples An ancestor tree is a commonly known data structure, showing the parents, grandparents, etc. of a person as far as known. It is recursively defined: • in the simplest case, an ancestor tree shows just one person (if nothing is known about their parents); • alternatively, an ancestor tree shows one person and, connected by branches, the two ancestor subtrees of their parents (using for brevity of proof the simplifying assumption that if one of them is known, both are). As an example, the property "An ancestor tree extending over g generations shows at most 2^g − 1 persons" can be proven by structural induction as follows: • In the simplest case, the tree shows just one person and hence one generation; the property is true for such a tree, since 1 ≤ 2^1 − 1. • Alternatively, the tree shows one person and their parents' trees. Since each of the latter is a substructure of the whole tree, it can be assumed to satisfy the property to be proven (a.k.a. the induction hypothesis).
That is, p ≤ 2^g − 1 and q ≤ 2^h − 1 can be assumed, where g and h denote the number of generations the father's and the mother's subtree extends over, respectively, and p and q denote the numbers of persons they show. • In case g ≤ h, the whole tree extends over 1 + h generations and shows p + q + 1 persons, and $p+q+1\leq (2^{g}-1)+(2^{h}-1)+1\leq 2^{h}+2^{h}-1=2^{1+h}-1,$ i.e. the whole tree satisfies the property. • In case h ≤ g, the whole tree extends over 1 + g generations and shows p + q + 1 ≤ 2^(g+1) − 1 persons by similar reasoning, i.e. the whole tree satisfies the property in this case also. Hence, by structural induction, each ancestor tree satisfies the property. As another, more formal example, consider the following property of lists: ${\text{EQ:}}\quad \operatorname {len} (L+\!+\ M)=\operatorname {len} (L)+\operatorname {len} (M)$ Here ++ denotes the list concatenation operation, len() the list length, and L and M are lists. In order to prove this, we need definitions for length and for the concatenation operation. Let (h:t) denote a list whose head (first element) is h and whose tail (list of remaining elements) is t, and let [] denote the empty list. The definitions for length and the concatenation operation are: ${\begin{array}{ll}{\text{LEN1:}}&\operatorname {len} ([\ ])=0\\{\text{LEN2:}}&\operatorname {len} (h:t)=1+\operatorname {len} (t)\\&\\{\text{APP1:}}&[\ ]+\!+\ list=list\\{\text{APP2:}}&(h:t)+\!+\ list=h:(t+\!+\ list)\end{array}}$ Our proposition P(l) is that EQ is true for all lists M when L is l. We want to show that P(l) is true for all lists l. We will prove this by structural induction on lists. First we will prove that P([]) is true; that is, EQ is true for all lists M when L happens to be the empty list []. Consider EQ: ${\begin{array}{rll}\operatorname {len} (L+\!+\ M)&=\operatorname {len} ([\ ]+\!+\ M)\\&=\operatorname {len} (M)&({\text{by APP1}})\\&=0+\operatorname {len} (M)\\&=\operatorname {len} ([\ ])+\operatorname {len} (M)&({\text{by LEN1}})\\&=\operatorname {len} (L)+\operatorname {len} (M)\\\end{array}}$ So this part of the theorem is proved; EQ is true for all M, when L is [], because the left-hand side and the right-hand side are equal. Next, consider any nonempty list I. Since I is nonempty, it has a head item, x, and a tail list, xs, so we can express it as (x:xs). The induction hypothesis is that EQ is true for all values of M when L is xs: ${\text{HYP:}}\quad \operatorname {len} (xs+\!+\ M)=\operatorname {len} (xs)+\operatorname {len} (M)$ We would like to show that if this is the case, then EQ is also true for all values of M when L = I = (x:xs). We proceed as before: ${\begin{array}{rll}\operatorname {len} (L)+\operatorname {len} (M)&=\operatorname {len} (x:xs)+\operatorname {len} (M)\\&=1+\operatorname {len} (xs)+\operatorname {len} (M)&({\text{by LEN2}})\\&=1+\operatorname {len} (xs+\!+\ M)&({\text{by HYP}})\\&=\operatorname {len} (x:(xs+\!+\ M))&({\text{by LEN2}})\\&=\operatorname {len} ((x:xs)+\!+\ M)&({\text{by APP2}})\\&=\operatorname {len} (L+\!+\ M)\end{array}}$ Thus, from structural induction, we obtain that P(L) is true for all lists L. Well-ordering Just as standard mathematical induction is equivalent to the well-ordering principle, structural induction is also equivalent to a well-ordering principle. If the set of all structures of a certain kind admits a well-founded partial order, then every nonempty subset must have a minimal element. (This is the definition of "well-founded".)
The significance of the lemma in this context is that it allows us to deduce that if there are any counterexamples to the theorem we want to prove, then there must be a minimal counterexample. If we can show that the existence of a minimal counterexample implies the existence of an even smaller counterexample, we have a contradiction (since the minimal counterexample is then not minimal), and so the set of counterexamples must be empty. As an example of this type of argument, consider the set of all binary trees. We will show that the number of leaves in a full binary tree is one more than the number of interior nodes. Suppose there is a counterexample; then there must exist one with the minimal possible number of interior nodes. This counterexample, C, has n interior nodes and l leaves, where n + 1 ≠ l. Moreover, C must be nontrivial, because the trivial tree has n = 0 and l = 1 and is therefore not a counterexample. C therefore has at least one leaf whose parent node is an interior node. Delete this leaf and its parent from the tree, promoting the leaf's sibling node to the position formerly occupied by its parent. This reduces both n and l by 1, so the new tree also has n + 1 ≠ l and is therefore a smaller counterexample. But by hypothesis, C was already the smallest counterexample; therefore, the supposition that there were any counterexamples to begin with must have been false. The partial ordering implied by 'smaller' here is the one that says that S < T whenever S has fewer nodes than T. See also • Coinduction • Initial algebra • Loop invariant, analog for loops References • Hopcroft, John E.; Rajeev Motwani; Jeffrey D. Ullman (2001). Introduction to Automata Theory, Languages, and Computation (2nd ed.). Reading Mass: Addison-Wesley. ISBN 978-0-201-44124-6. • "Mathematical Logic - Video 01.08 - Generalized (Structural) Induction" on YouTube Early publications about structural induction include: • Burstall, R. M. (1969). "Proving Properties of Programs by Structural Induction". The Computer Journal. 12 (1): 41–48. doi:10.1093/comjnl/12.1.41. • Aubin, Raymond (1976), Mechanizing Structural Induction, EDI-INF-PHD, vol. 76–002, University of Edinburgh, hdl:1842/6649 • Huet, G.; Hullot, J. M. (1980). "Proofs by Induction in Equational Theories with Constructors" (PDF). 21st Ann. Symp. on Foundations of Computer Science. IEEE. pp. 96–107. • Rózsa Péter, Über die Verallgemeinerung der Theorie der rekursiven Funktionen für abstrakte Mengen geeigneter Struktur als Definitionsbereiche, Symposium International, Varsovie septembre (1959) (On the generalization of the theory of recursive functions for abstract quantities with suitable structures as domains).
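The structurally recursive definitions LEN1/LEN2 and APP1/APP2 from the lists example above translate directly into code. A minimal Python sketch, encoding the empty list [] as None and h:t as the pair (h, t):

    # LEN1/LEN2 and APP1/APP2 as structural recursion on cons-lists:
    # None encodes [], and the pair (h, t) encodes h:t.
    def length(l):
        if l is None:                  # LEN1: len([]) = 0
            return 0
        h, t = l
        return 1 + length(t)           # LEN2: len(h:t) = 1 + len(t)

    def concat(l, m):
        if l is None:                  # APP1: [] ++ list = list
            return m
        h, t = l
        return (h, concat(t, m))       # APP2: (h:t) ++ list = h:(t ++ list)

    L, M = (1, (2, None)), (3, None)
    assert length(concat(L, M)) == length(L) + length(M)   # the property EQ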
Wikipedia
Structural stability

In mathematics, structural stability is a fundamental property of a dynamical system which means that the qualitative behavior of the trajectories is unaffected by small perturbations (to be exact, C1-small perturbations). Examples of such qualitative properties are the numbers of fixed points and periodic orbits (but not their periods). Unlike Lyapunov stability, which considers perturbations of initial conditions for a fixed system, structural stability deals with perturbations of the system itself. Variants of this notion apply to systems of ordinary differential equations, vector fields on smooth manifolds and the flows they generate, and diffeomorphisms.

Structurally stable systems were introduced by Aleksandr Andronov and Lev Pontryagin in 1937 under the name "systèmes grossiers", or rough systems. They announced a characterization of rough systems in the plane, the Andronov–Pontryagin criterion. In this case, structurally stable systems are typical: they form an open, dense set in the space of all systems endowed with the appropriate topology. In higher dimensions this is no longer true, indicating that typical dynamics can be very complex (cf. strange attractor). An important class of structurally stable systems in arbitrary dimensions is given by Anosov diffeomorphisms and flows. During the late 1950s and early 1960s, Maurício Peixoto and Marília Chaves Peixoto, motivated by the work of Andronov and Pontryagin, developed and proved Peixoto's theorem, the first global characterization of structural stability.[1]

Definition

Let G be an open domain in Rn with compact closure and smooth (n−1)-dimensional boundary. Consider the space X1(G) consisting of restrictions to G of C1 vector fields on Rn that are transversal to the boundary of G and are inward oriented. This space is endowed with the C1 metric in the usual fashion. A vector field F ∈ X1(G) is weakly structurally stable if for any sufficiently small perturbation F1, the corresponding flows are topologically equivalent on G: there exists a homeomorphism h: G → G which transforms the oriented trajectories of F into the oriented trajectories of F1. If, moreover, for any ε > 0 the homeomorphism h may be chosen to be C0 ε-close to the identity map when F1 belongs to a suitable neighborhood of F depending on ε, then F is called (strongly) structurally stable. These definitions extend in a straightforward way to the case of n-dimensional compact smooth manifolds with boundary. Andronov and Pontryagin originally considered the strong property. Analogous definitions can be given for diffeomorphisms in place of vector fields and flows: in this setting, the homeomorphism h must be a topological conjugacy.

It is important to note that topological equivalence is realized with a loss of smoothness: the map h cannot, in general, be a diffeomorphism. Moreover, although topological equivalence respects the oriented trajectories, unlike topological conjugacy it is not time-compatible. Thus, the relevant notion of topological equivalence is a considerable weakening of the naïve C1 conjugacy of vector fields. Without these restrictions, no continuous-time system with fixed points or periodic orbits could be structurally stable. Weakly structurally stable systems form an open set in X1(G), but it is unknown whether the same property holds in the strong case.
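In symbols, with the quantifier "sufficiently small" made explicit, the weak form of the definition reads as follows. This compact restatement, including the notation $d_{C^{1}}$ for the $C^{1}$ metric on $X^{1}(G)$, is an editorial paraphrase rather than a formula from the sources cited below:

$F{\text{ is weakly structurally stable}}\iff \exists \delta >0\ \forall F_{1}\in X^{1}(G):\ d_{C^{1}}(F,F_{1})<\delta \implies {\text{the flows of }}F{\text{ and }}F_{1}{\text{ are topologically equivalent on }}G.$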
Examples

Necessary and sufficient conditions for the structural stability of C1 vector fields on the unit disk D that are transversal to the boundary, and on the two-sphere S2, were determined in the foundational paper of Andronov and Pontryagin. According to the Andronov–Pontryagin criterion, such fields are structurally stable if and only if they have only finitely many singular points (equilibrium states) and periodic trajectories (limit cycles), all of which are non-degenerate (hyperbolic), and have no saddle-to-saddle connections. Furthermore, the non-wandering set of the system is precisely the union of the singular points and periodic orbits. In particular, structurally stable vector fields in two dimensions cannot have homoclinic trajectories, which enormously complicate the dynamics, as discovered by Henri Poincaré.

Structural stability of non-singular smooth vector fields on the torus can be investigated using the theory developed by Poincaré and Arnaud Denjoy. Using the Poincaré recurrence map, the question reduces to determining the structural stability of diffeomorphisms of the circle. As a consequence of the Denjoy theorem, an orientation-preserving C2 diffeomorphism ƒ of the circle is structurally stable if and only if its rotation number is rational, ρ(ƒ) = p/q, and the periodic trajectories, which all have period q, are non-degenerate: the Jacobian of ƒq at the periodic points is different from 1; see circle map.

Dmitri Anosov discovered that hyperbolic automorphisms of the torus, such as Arnold's cat map, are structurally stable. He then generalized this statement to a wider class of systems, which have since been called Anosov diffeomorphisms and Anosov flows. One celebrated example of an Anosov flow is given by the geodesic flow on a surface of constant negative curvature; cf. Hadamard billiards. (A small numerical illustration of the cat map example is given after the references below.)

History and significance

Structural stability of a system provides a justification for applying the qualitative theory of dynamical systems to the analysis of concrete physical systems. The idea of such qualitative analysis goes back to the work of Henri Poincaré on the three-body problem in celestial mechanics. Around the same time, Aleksandr Lyapunov rigorously investigated the stability of small perturbations of an individual system. In practice, the evolution law of the system (i.e., the differential equations) is never known exactly, due to the presence of various small interactions. It is therefore crucial to know that the basic features of the dynamics are the same for any small perturbation of the "model" system, whose evolution is governed by a certain known physical law.

Qualitative analysis was further developed by George Birkhoff in the 1920s, but was first formalized with the introduction of the concept of rough system by Andronov and Pontryagin in 1937. This was immediately applied to the analysis of physical systems with oscillations by Andronov, Witt, and Khaikin. The term "structural stability" is due to Solomon Lefschetz, who oversaw the translation of their monograph into English. Ideas of structural stability were taken up by Stephen Smale and his school in the 1960s in the context of hyperbolic dynamics. Earlier, Marston Morse and Hassler Whitney initiated, and René Thom developed, a parallel theory of stability for differentiable maps, which forms a key part of singularity theory. Thom envisaged applications of this theory to biological systems. Both Smale and Thom worked in direct contact with Maurício Peixoto, who developed Peixoto's theorem in the late 1950s.
When Smale started to develop the theory of hyperbolic dynamical systems, he hoped that structurally stable systems would be "typical". This would have been consistent with the situation in low dimensions: dimension two for flows and dimension one for diffeomorphisms. However, he soon found examples of vector fields on higher-dimensional manifolds that cannot be made structurally stable by an arbitrarily small perturbation (such examples were later constructed on manifolds of dimension three). This means that in higher dimensions, structurally stable systems are not dense. In addition, a structurally stable system may have transversal homoclinic trajectories of hyperbolic saddle closed orbits and infinitely many periodic orbits, even though the phase space is compact. The closest higher-dimensional analogue of the structurally stable systems considered by Andronov and Pontryagin is given by the Morse–Smale systems.

See also

• Homeostasis
• Self-stabilization
• Superstabilization
• Stability theory

References

1. Rahman, Aminur; Blackmore, D. (2023). "The One-Dimensional Version of Peixoto's Structural Stability Theorem: A Calculus-Based Proof". SIAM Review. 65 (3): 869–886. arXiv:2302.04941. doi:10.1137/21M1426572. ISSN 0036-1445.
• Andronov, Aleksandr A.; Pontryagin, Lev S. (1988) [1937]. V. I. Arnold (ed.). "Грубые системы" [Coarse systems]. Geometric Methods in the Theory of Differential Equations. Grundlehren der Mathematischen Wissenschaften 250. New York: Springer-Verlag. ISBN 0-387-96649-8.
• Anosov, D. V. (2001) [1994], "Rough system", Encyclopedia of Mathematics, EMS Press
• Pugh, Charles; Peixoto, Maurício Matos (eds.). "Structural stability". Scholarpedia.
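As a concrete companion to the Examples section above, the following Python sketch (an editorial illustration assuming NumPy is available; it is not drawn from the references) checks that the matrix inducing Arnold's cat map is hyperbolic, i.e., has no eigenvalue on the unit circle, which is the property underlying Anosov's structural stability result.

```python
# Minimal sketch: Arnold's cat map, a hyperbolic automorphism of the 2-torus.
import numpy as np

A = np.array([[2, 1],
              [1, 1]])  # det A = 1, so A induces an automorphism of T^2

# Hyperbolicity: the eigenvalues are (3 ± sqrt(5))/2 ~ 2.618 and 0.382,
# neither of which lies on the unit circle.
eigenvalues = np.linalg.eigvals(A)
assert all(abs(abs(lam) - 1) > 1e-9 for lam in eigenvalues)

def cat_map(p):
    """One iterate of the cat map on the torus [0, 1)^2."""
    return (A @ p) % 1.0

# Iterate a sample point a few times.
p = np.array([0.1, 0.2])
for _ in range(3):
    p = cat_map(p)
print(eigenvalues, p)
```

The expansion/contraction along the two eigendirections is what makes the map Anosov, and hence structurally stable by the results described above.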
Wikipedia
Algebraic structure

In mathematics, an algebraic structure consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities, known as axioms, that these operations must satisfy.

An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars) and elements of the vector space (called vectors).

Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that also includes other mathematical structures and functions between structures of the same type (homomorphisms).

In universal algebra, an algebraic structure is called an algebra;[1] this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring.

The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and the homomorphisms between them form a concrete category.

Introduction

Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, a + (b + c) = (a + b) + c and a(bc) = (ab)c are associative laws, and a + b = b + a and ab = ba are commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law.

Sets with one or more operations that obey specific laws are called algebraic structures. When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem.
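To make the notion concrete, here is a minimal Python sketch (an editorial illustration, not from the article's sources) that verifies the defining laws of a group by brute force on a finite carrier set, using addition modulo n:

```python
# Minimal sketch: exhaustively checking the group axioms on a finite structure.
from itertools import product

def is_group(elements, op, e, inv):
    """Brute-force check of the group laws on a finite carrier set."""
    elements = list(elements)
    # Associativity: (x*y)*z == x*(y*z) for all x, y, z.
    assoc = all(op(op(x, y), z) == op(x, op(y, z))
                for x, y, z in product(elements, repeat=3))
    # Identity: x*e == e*x == x for all x.
    ident = all(op(x, e) == x == op(e, x) for x in elements)
    # Inverses: x*inv(x) == inv(x)*x == e for all x.
    invs = all(op(x, inv(x)) == e == op(inv(x), x) for x in elements)
    return assoc and ident and invs

n = 6
assert is_group(range(n), lambda x, y: (x + y) % n, 0, lambda x: (-x) % n)
```

The same checker applies to any finite structure with one binary operation; replacing `op` with subtraction modulo n, for instance, makes the associativity check fail, illustrating that not every operation satisfies every law.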
In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher-arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but they include the most common structures taught in undergraduate courses.

Common axioms

Equational axioms

An axiom of an algebraic structure often has the form of an identity, that is, an equation such that the two sides of the equals sign are expressions that involve operations of the algebraic structure and variables. If the variables in the identity are replaced by arbitrary elements of the algebraic structure, the equality must remain true. Here are some common examples.

Commutativity: An operation $*$ is commutative if $x*y=y*x$ for every x and y in the algebraic structure.

Associativity: An operation $*$ is associative if $(x*y)*z=x*(y*z)$ for every x, y and z in the algebraic structure.

Left distributivity: An operation $*$ is left distributive with respect to another operation $+$ if $x*(y+z)=(x*y)+(x*z)$ for every x, y and z in the algebraic structure (the second operation is denoted here as +, because it is addition in many common examples).

Right distributivity: An operation $*$ is right distributive with respect to another operation $+$ if $(y+z)*x=(y*x)+(z*x)$ for every x, y and z in the algebraic structure.

Distributivity: An operation $*$ is distributive with respect to another operation $+$ if it is both left distributive and right distributive. If the operation $*$ is commutative, left and right distributivity are both equivalent to distributivity.

Existential axioms

Some common axioms contain an existential clause. In general, such a clause can be avoided by introducing further operations and replacing the existential clause by an identity involving the new operation. More precisely, consider an axiom of the form "for all X there is y such that $f(X,y)=g(X,y)$", where X is a k-tuple of variables. Choosing a specific value of y for each value of X defines a function $\varphi :X\mapsto y,$ which can be viewed as an operation of arity k, and the axiom becomes the identity $f(X,\varphi (X))=g(X,\varphi (X)).$

The introduction of such an auxiliary operation slightly complicates the statement of an axiom, but has some advantages. Given a specific algebraic structure, the proof that an existential axiom is satisfied generally consists of the definition of the auxiliary function, completed with straightforward verifications. Also, when computing in an algebraic structure, one generally uses the auxiliary operations explicitly. For example, in the case of numbers, the additive inverse is provided by the unary minus operation $x\mapsto -x.$

Also, in universal algebra, a variety is a class of algebraic structures that share the same operations and the same axioms, with the condition that all axioms are identities. What precedes shows that existential axioms of the above form are accepted in the definition of a variety. Here are some of the most common existential axioms.

Identity element: A binary operation $*$ has an identity element if there is an element e such that $x*e=x\quad {\text{and}}\quad e*x=x$ for all x in the structure. Here, the auxiliary operation is the operation of arity zero that has e as its result.
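As a worked instance of the general recipe above (an editorial illustration, not an example taken from the article's sources): the axiom "every number has an additive inverse" has the existential form with $f(x,y)=x+y$ and $g(x,y)=0$; choosing $\varphi (x)=-x$ as the auxiliary operation turns it into an identity:

$\forall x\ \exists y:\ x+y=0\qquad \rightsquigarrow \qquad x+(-x)=0.$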
Inverse element: Given a binary operation $*$ that has an identity element e, an element x is invertible if it has an inverse element, that is, if there exists an element $\operatorname {inv} (x)$ such that $\operatorname {inv} (x)*x=e\quad {\text{and}}\quad x*\operatorname {inv} (x)=e.$ For example, a group is an algebraic structure with a binary operation that is associative, has an identity element, and for which all elements are invertible.

Non-equational axioms

The axioms of an algebraic structure can be any first-order formula, that is, a formula involving logical connectives (such as "and", "or" and "not") and logical quantifiers ($\forall ,\exists $) that apply to elements (not to subsets) of the structure. A typical such axiom is the inversion axiom in fields. This axiom cannot be reduced to axioms of the preceding types. (It follows that fields do not form a variety in the sense of universal algebra.) It can be stated: "Every nonzero element of a field is invertible;" or, equivalently: the structure has a unary operation inv such that $\forall x,\quad x=0\quad {\text{or}}\quad x\cdot \operatorname {inv} (x)=1.$ The operation inv can be viewed either as a partial operation that is not defined for x = 0, or as an ordinary function whose value at 0 is arbitrary and must not be used.

Common algebraic structures

One set with operations

Simple structures: no binary operation:
• Set: a degenerate algebraic structure S having no operations.

Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers.
• Group: a monoid with a unary operation (inverse), giving rise to inverse elements.
• Abelian group: a group whose binary operation is commutative.

Ring-like structures or ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition.
• Ring: a semiring whose additive monoid is an abelian group.
• Division ring: a nontrivial ring in which division by nonzero elements is defined.
• Commutative ring: a ring in which the multiplication operation is commutative.
• Field: a commutative division ring (i.e., a commutative ring which contains a multiplicative inverse for every nonzero element).

Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law.[2]
• Complete lattice: a lattice in which arbitrary meets and joins exist.
• Bounded lattice: a lattice with a greatest element and a least element.
• Distributive lattice: a lattice in which each of meet and join distributes over the other. A power set under union and intersection forms a distributive lattice.
• Boolean algebra: a complemented distributive lattice. Either of meet or join can be defined in terms of the other and complementation.

Two sets with operations

• Module: an abelian group M and a ring R acting as operators on M. The members of R are sometimes called scalars, and the binary operation of scalar multiplication is a function R × M → M, which satisfies several axioms. Counting the ring operations, these systems have at least three operations.
• Vector space: a module where the ring R is a division ring or field.
• Algebra over a field: a module over a field, which also carries a multiplication operation that is compatible with the module structure. This includes distributivity over addition and linearity with respect to multiplication.
• Inner product space: an F-vector space V equipped with a definite bilinear form V × V → F.

Hybrid structures

Algebraic structures can also coexist with added structure of a non-algebraic nature, such as a partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure.
• Topological group: a group with a topology compatible with the group operation.
• Lie group: a topological group with a compatible smooth manifold structure.
• Ordered groups, ordered rings and ordered fields: each type of structure with a compatible partial order.
• Archimedean group: a linearly ordered group for which the Archimedean property holds.
• Topological vector space: a vector space whose underlying set carries a compatible topology.
• Normed vector space: a vector space with a compatible norm. If such a space is complete (as a metric space) then it is called a Banach space.
• Hilbert space: an inner product space over the real or complex numbers whose inner product gives rise to a Banach space structure.
• Vertex operator algebra
• Von Neumann algebra: a *-algebra of operators on a Hilbert space equipped with the weak operator topology.

Universal algebra

Main article: Universal algebra

Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects. One major dichotomy is between structures that are axiomatized entirely by identities and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with the algebraic varieties of algebraic geometry).

Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of the term algebra (also called the "absolutely free algebra") by the equivalence relation generated by a set of identities. So, a collection of functions with given signatures generates a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure, or variety.

Thus, for example, groups have a signature containing three operators: the multiplication operator m, taking two arguments; the inverse operator i, taking one argument; and the identity element e, a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables x, y, z, etc., the term algebra is the collection of all possible terms involving m, i, e and the variables; so, for example, m(i(x), m(x, m(y, e))) would be an element of the term algebra. One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x, e) = x. The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group.

Some structures do not form varieties, because either:
1. it is necessary that 0 ≠ 1, 0 being the additive identity element and 1 being a multiplicative identity element, but this is a nonidentity; or
2. structures such as fields have some axioms that hold only for nonzero members of S.
For an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations.

Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges that varieties do not. For example, the direct product of two fields is not a field, because $(1,0)\cdot (0,1)=(0,0)$, but fields do not have zero divisors (this computation is replayed in a short sketch after the external links below).

Category theory

Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure.

There are various concepts in category theory that try to capture the algebraic character of a context, for instance:
• algebraic category
• essentially algebraic category
• presentable category
• locally presentable category
• monadic functors and categories
• universal property.

Different meanings of "structure"

In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence "We have defined a ring structure on the set $A$" means that we have defined ring operations on the set $A$. For another example, the group $(\mathbb {Z} ,+)$ can be seen as a set $\mathbb {Z} $ that is equipped with an algebraic structure, namely the operation $+$.

See also

• Free object
• Mathematical structure
• Signature (logic)
• Structure (mathematical logic)

Notes

1. P. M. Cohn (1981). Universal Algebra. Springer. p. 41.
2. Ringoids and lattices can be clearly distinguished despite both having two defining binary operations. In the case of ringoids, the two operations are linked by the distributive law; in the case of lattices, they are linked by the absorption law. Ringoids also tend to have numerical models, while lattices tend to have set-theoretic models.

References

• Mac Lane, Saunders; Birkhoff, Garrett (1999), Algebra (2nd ed.), AMS Chelsea, ISBN 978-0-8218-1646-2
• Michel, Anthony N.; Herget, Charles J. (1993), Applied Algebra and Functional Analysis, New York: Dover Publications, ISBN 978-0-486-67598-5
• Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag, ISBN 978-3-540-90578-3

Category theory
• Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2
• Taylor, Paul (1999), Practical Foundations of Mathematics, Cambridge University Press, ISBN 978-0-521-63107-5

External links

• Jipsen's algebra structures. Includes many structures not mentioned here.
• MathWorld page on abstract algebra.
• Stanford Encyclopedia of Philosophy: Algebra by Vaughan Pratt.
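The zero-divisor computation referred to above is easy to replay mechanically. The following Python sketch (an editorial illustration using componentwise operations on pairs of rationals; not from the article's sources) shows why the direct product of two fields fails to be a field:

```python
# Minimal sketch: the direct product Q x Q has zero divisors,
# e.g. (1,0) . (0,1) = (0,0), so it cannot be a field.
from fractions import Fraction

def pair_mul(a, b):
    """Componentwise multiplication in the product ring Q x Q."""
    return (a[0] * b[0], a[1] * b[1])

one_zero = (Fraction(1), Fraction(0))
zero_one = (Fraction(0), Fraction(1))
zero = (Fraction(0), Fraction(0))

# Both factors are nonzero in Q x Q, yet their product is zero:
assert one_zero != zero and zero_one != zero
assert pair_mul(one_zero, zero_one) == zero
```

Since a field can have no zero divisors, this confirms that the class of fields is not closed under direct products, one symptom of its failure to be a variety.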
Wikipedia