Structure constants

In mathematics, the structure constants or structure coefficients of an algebra over a field are the coefficients of the basis expansion (into linear combination of basis vectors) of the products of basis vectors. Because the product operation in the algebra is bilinear, knowing the products of basis vectors allows one to compute the product of any two elements (just as a matrix allows one to compute the action of a linear operator on any vector by providing its action on basis vectors). Therefore, the structure constants can be used to specify the product operation of the algebra (just as a matrix defines a linear operator). Given the structure constants, the resulting product is obtained by bilinearity and can be uniquely extended to all vectors in the vector space, thus uniquely determining the product for the algebra.

Structure constants are used whenever an explicit form for the algebra must be given. Thus, they are frequently used when discussing Lie algebras in physics, as the basis vectors indicate specific directions in physical space, or correspond to specific particles (recall that Lie algebras are algebras over a field, with the bilinear product being given by the Lie bracket, usually defined via the commutator).

Definition

Given a set of basis vectors $\{\mathbf {e} _{i}\}$ for the underlying vector space of the algebra, the product operation is uniquely defined by the products of basis vectors: $\mathbf {e} _{i}\cdot \mathbf {e} _{j}=\mathbf {c} _{ij}$. The structure constants or structure coefficients $c_{ij}^{\;k}$ are just the coefficients of $\mathbf {c} _{ij}$ in the same basis: $\mathbf {e} _{i}\cdot \mathbf {e} _{j}=\mathbf {c} _{ij}=\sum _{k}c_{ij}^{\;\;k}\mathbf {e} _{k}$. Put differently, they are the coefficients that express $\mathbf {c} _{ij}$ as a linear combination of the basis vectors $\mathbf {e} _{k}$.

The upper and lower indices are frequently not distinguished, unless the algebra is endowed with some other structure that would require this (for example, a pseudo-Riemannian metric, on the algebra of the indefinite orthogonal group so(p,q)). That is, structure constants are often written with all-upper, or all-lower indices. The distinction between upper and lower is then a convention, reminding the reader that lower indices behave like the components of a dual vector, i.e. are covariant under a change of basis, while upper indices are contravariant.

The structure constants obviously depend on the chosen basis. For Lie algebras, one frequently used convention for the basis is in terms of the ladder operators defined by the Cartan subalgebra; this is presented further down in the article, after some preliminary examples.

Example: Lie algebras

For a Lie algebra, the basis vectors are termed the generators of the algebra, and the product is instead called the Lie bracket (often the Lie bracket is an additional product operation beyond an already existing product, thus necessitating a separate name). For two vectors $A$ and $B$ in the algebra, the Lie bracket is denoted $[A,B]$. Again, there is no particular need to distinguish the upper and lower indices; they can be written all up or all down. In physics, it is common to use the notation $T_{i}$ for the generators, and $f_{ab}^{\;\;c}$ or $f^{abc}$ (ignoring the upper-lower distinction) for the structure constants. The linear expansion of the Lie bracket of pairs of generators then looks like $[T_{a},T_{b}]=\sum _{c}f_{ab}^{\;\;c}T_{c}$.
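Since the structure constants are nothing more than expansion coefficients, they can be extracted mechanically from any explicit matrix basis. The following minimal sketch (our own illustration, assuming NumPy; the helper name structure_constants is not standard) does this for the basis $t_{a}=-i\sigma _{a}/2$ of ${\mathfrak {su}}(2)$ discussed below, using the commutator as the product:

```python
import numpy as np

# Pauli matrices and the rescaled basis t_a = -i sigma_a / 2,
# for which [t_a, t_b] = eps_{abc} t_c.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [-0.5j * s for s in (s1, s2, s3)]

def structure_constants(basis):
    """Expand each bracket [e_i, e_j] in the given matrix basis."""
    n = len(basis)
    B = np.column_stack([e.ravel() for e in basis])  # basis matrices as columns
    c = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            bracket = basis[i] @ basis[j] - basis[j] @ basis[i]
            coeffs, *_ = np.linalg.lstsq(B, bracket.ravel(), rcond=None)
            c[i, j] = coeffs.real  # imaginary parts vanish for this basis
    return c

c = structure_constants(basis)
print(np.round(c[0, 1], 6))  # [0. 0. 1.]: [t_1, t_2] = t_3
```

The same routine applies to any finite-dimensional matrix Lie algebra, provided the brackets of basis elements actually lie in the span of the basis.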
Again, by linear extension, the structure constants completely determine the Lie brackets of all elements of the Lie algebra. All Lie algebras satisfy the Jacobi identity. For the basis vectors, it can be written as $[T_{a},[T_{b},T_{c}]]+[T_{b},[T_{c},T_{a}]]+[T_{c},[T_{a},T_{b}]]=0$ and this leads directly to a corresponding identity in terms of the structure constants: $f_{ad}^{\;\;e}f_{bc}^{\;\;d}+f_{bd}^{\;\;e}f_{ca}^{\;\;d}+f_{cd}^{\;\;e}f_{ab}^{\;\;d}=0.$ The above, and the remainder of this article, make use of the Einstein summation convention for repeated indices.

The structure constants play a role in Lie algebra representations, and in fact, give exactly the matrix elements of the adjoint representation. The Killing form and the Casimir invariant also have a particularly simple form, when written in terms of the structure constants. The structure constants often make an appearance in the approximation to the Baker–Campbell–Hausdorff formula for the product of two elements of a Lie group. For small elements $X,Y$ of the Lie algebra, the structure of the Lie group near the identity element is given by $\exp(X)\exp(Y)\approx \exp(X+Y+{\tfrac {1}{2}}[X,Y]).$ Note the factor of 1/2. They also appear in explicit expressions for differentials, such as $e^{-X}de^{X}$; see Baker–Campbell–Hausdorff formula#Infinitesimal case for details.

Lie algebra examples

𝔰𝔲(2) and 𝔰𝔬(3)

The algebra ${\mathfrak {su}}(2)$ of the special unitary group SU(2) is three-dimensional, with generators given by the Pauli matrices $\sigma _{i}$. The generators of the group SU(2) satisfy the commutation relations (where $\varepsilon ^{abc}$ is the Levi-Civita symbol): $[\sigma _{a},\sigma _{b}]=2i\varepsilon ^{abc}\sigma _{c}$ where $\sigma _{1}={\begin{pmatrix}0&1\\1&0\end{pmatrix}},~~\sigma _{2}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}},~~\sigma _{3}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}$ In this case, the structure constants are $f^{abc}=2i\varepsilon ^{abc}$. Note that the constant 2i can be absorbed into the definition of the basis vectors; thus, defining $t_{a}=-i\sigma _{a}/2$, one can equally well write $[t_{a},t_{b}]=\varepsilon ^{abc}t_{c}$ Doing so emphasizes that the Lie algebra ${\mathfrak {su}}(2)$ of the Lie group SU(2) is isomorphic to the Lie algebra ${\mathfrak {so}}(3)$ of SO(3). This brings the structure constants into line with those of the rotation group SO(3). That is, the commutators of the angular momentum operators are then commonly written as $[L_{i},L_{j}]=\varepsilon ^{ijk}L_{k}$ where $L_{x}=L_{1}={\begin{pmatrix}0&0&0\\0&0&-1\\0&1&0\end{pmatrix}},~~L_{y}=L_{2}={\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix}},~~L_{z}=L_{3}={\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix}}$ are written so as to obey the right-hand rule for rotations in 3-dimensional space.

The difference of the factor of 2i between these two sets of structure constants can be infuriating, as it involves some subtlety. Thus, for example, the two-dimensional complex vector space can be given a real structure. This leads to two inequivalent two-dimensional fundamental representations of ${\mathfrak {su}}(2)$, which are isomorphic, but are complex conjugate representations; both, however, are considered to be real representations, precisely because they act on a space with a real structure.[1] In the case of three dimensions, there is only one three-dimensional representation, the adjoint representation, which is a real representation; more precisely, it is the same as its dual representation, shown above.
That is, one has that the transpose is minus itself: $L_{k}^{T}=-L_{k}.$ In any case, the Lie groups are considered to be real, precisely because it is possible to write the structure constants so that they are purely real.

𝔰𝔲(3)

A less trivial example is given by SU(3):[2] Its generators, T, in the defining representation, are: $T^{a}={\frac {\lambda ^{a}}{2}}.\,$ where $\lambda \,$, the Gell-Mann matrices, are the SU(3) analog of the Pauli matrices for SU(2): $\lambda ^{1}={\begin{pmatrix}0&1&0\\1&0&0\\0&0&0\end{pmatrix}}$ $\lambda ^{2}={\begin{pmatrix}0&-i&0\\i&0&0\\0&0&0\end{pmatrix}}$ $\lambda ^{3}={\begin{pmatrix}1&0&0\\0&-1&0\\0&0&0\end{pmatrix}}$ $\lambda ^{4}={\begin{pmatrix}0&0&1\\0&0&0\\1&0&0\end{pmatrix}}$ $\lambda ^{5}={\begin{pmatrix}0&0&-i\\0&0&0\\i&0&0\end{pmatrix}}$ $\lambda ^{6}={\begin{pmatrix}0&0&0\\0&0&1\\0&1&0\end{pmatrix}}$ $\lambda ^{7}={\begin{pmatrix}0&0&0\\0&0&-i\\0&i&0\end{pmatrix}}$ $\lambda ^{8}={\frac {1}{\sqrt {3}}}{\begin{pmatrix}1&0&0\\0&1&0\\0&0&-2\end{pmatrix}}.$ These obey the relations $\left[T^{a},T^{b}\right]=if^{abc}T^{c}\,$ $\{T^{a},T^{b}\}={\frac {1}{3}}\delta ^{ab}+d^{abc}T^{c}.\,$ The structure constants are totally antisymmetric. They are given by: $f^{123}=1\,$ $f^{147}=-f^{156}=f^{246}=f^{257}=f^{345}=-f^{367}={\frac {1}{2}}\,$ $f^{458}=f^{678}={\frac {\sqrt {3}}{2}},\,$ and all other $f^{abc}$ not related to these by permuting indices are zero. The d take the values: $d^{118}=d^{228}=d^{338}=-d^{888}={\frac {1}{\sqrt {3}}}\,$ $d^{448}=d^{558}=d^{668}=d^{778}=-{\frac {1}{2{\sqrt {3}}}}\,$ $d^{146}=d^{157}=-d^{247}=d^{256}=d^{344}=d^{355}=-d^{366}=-d^{377}={\frac {1}{2}}.\,$

𝔰𝔲(N)

For the general case of 𝔰𝔲(N), there exists a closed formula for the structure constants, without having to compute commutation and anti-commutation relations between the generators. We first define the $N^{2}-1$ generators of 𝔰𝔲(N), based on a generalisation of the Pauli matrices and of the Gell-Mann matrices (using bra-ket notation). There are $N(N-1)/2$ symmetric matrices, ${\hat {T}}_{\alpha _{nm}}={\frac {\hbar }{2}}(|m\rangle \langle n|+|n\rangle \langle m|)$, $N(N-1)/2$ anti-symmetric matrices, ${\hat {T}}_{\beta _{nm}}=-i{\frac {\hbar }{2}}(|m\rangle \langle n|-|n\rangle \langle m|)$, and $N-1$ diagonal matrices, ${\hat {T}}_{\gamma _{n}}={\frac {\hbar }{\sqrt {2n(n-1)}}}{\Big (}\sum _{l=1}^{n-1}|l\rangle \langle l|+(1-n)|n\rangle \langle n|{\Big )}$. To distinguish these matrices we define the following indices: $\alpha _{nm}=n^{2}+2(m-n)-1$, $\beta _{nm}=n^{2}+2(m-n)$, $\gamma _{n}=n^{2}-1$, with the condition $1\leq m<n\leq N$. All the non-zero totally anti-symmetric structure constants are $f^{\alpha _{nm}\alpha _{kn}\beta _{km}}=f^{\alpha _{nm}\alpha _{nk}\beta _{km}}=f^{\alpha _{nm}\alpha _{km}\beta _{kn}}={\frac {1}{2}}$, $f^{\beta _{nm}\beta _{km}\beta _{kn}}={\frac {1}{2}}$, $f^{\alpha _{nm}\beta _{nm}\gamma _{m}}=-{\sqrt {\frac {m-1}{2m}}},~f^{\alpha _{nm}\beta _{nm}\gamma _{n}}={\sqrt {\frac {n}{2(n-1)}}}$, $f^{\alpha _{nm}\beta _{nm}\gamma _{k}}={\sqrt {\frac {1}{2k(k-1)}}},~m<k<n$.
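The construction above can be checked numerically. The following sketch (our own illustration, assuming NumPy, setting ℏ = 1 so that for N = 3 the generators reduce to $T^{a}=\lambda ^{a}/2$, and shifting the indices to start at 0) builds the generalized Gell-Mann generators and recovers the antisymmetric constants from $f^{abc}=-2i\,\mathrm {Tr} ([T^{a},T^{b}]T^{c})$, which follows from $\mathrm {Tr} (T^{a}T^{b})={\tfrac {1}{2}}\delta ^{ab}$:

```python
import numpy as np

def su_generators(N):
    """Generalized Gell-Mann generators (hbar = 1), indexed as in the text:
    alpha_nm = n^2 + 2(m-n) - 1, beta_nm = n^2 + 2(m-n), gamma_n = n^2 - 1,
    for 1 <= m < n <= N, shifted here to 0-based positions."""
    def E(i, j):  # matrix unit |i><j|, 0-based
        M = np.zeros((N, N), dtype=complex)
        M[i, j] = 1
        return M
    T = [None] * (N * N - 1)
    for n in range(2, N + 1):
        for m in range(1, n):
            sym  = 0.5 * (E(m - 1, n - 1) + E(n - 1, m - 1))
            asym = -0.5j * (E(m - 1, n - 1) - E(n - 1, m - 1))
            T[n * n + 2 * (m - n) - 2] = sym    # alpha_nm slot
            T[n * n + 2 * (m - n) - 1] = asym   # beta_nm slot
        diag = sum(E(l - 1, l - 1) for l in range(1, n)) + (1 - n) * E(n - 1, n - 1)
        T[n * n - 2] = diag / np.sqrt(2 * n * (n - 1))  # gamma_n slot
    return T

T = su_generators(3)

def f(a, b, c):
    """Totally antisymmetric structure constants via the trace formula."""
    comm = T[a] @ T[b] - T[b] @ T[a]
    return (-2j * np.trace(comm @ T[c])).real

print(round(f(0, 1, 2), 6))  # f^{123} = 1 in this normalization
```

For su(3) this reproduces $f^{123}=1$ together with the values 1/2 and $\sqrt {3}/2$ listed above.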
All the non-zero totally symmetric structure constants are $d^{\alpha _{nm}\alpha _{kn}\alpha _{km}}=d^{\alpha _{nm}\beta _{kn}\beta _{km}}=d^{\alpha _{nm}\beta _{mk}\beta _{nk}}={\frac {1}{2}}$, $d^{\alpha _{nm}\beta _{nk}\beta _{km}}=-{\frac {1}{2}}$, $d^{\alpha _{nm}\alpha _{nm}\gamma _{m}}=d^{\beta _{nm}\beta _{nm}\gamma _{m}}=-{\sqrt {\frac {m-1}{2m}}}$, $d^{\alpha _{nm}\alpha _{nm}\gamma _{k}}=d^{\beta _{nm}\beta _{nm}\gamma _{k}}={\sqrt {\frac {1}{2k(k-1)}}},~m<k<n$, $d^{\alpha _{nm}\alpha _{nm}\gamma _{n}}=d^{\beta _{nm}\beta _{nm}\gamma _{n}}={\frac {2-n}{\sqrt {2n(n-1)}}}$, $d^{\alpha _{nm}\alpha _{nm}\gamma _{k}}=d^{\beta _{nm}\beta _{nm}\gamma _{k}}={\sqrt {\frac {2}{k(k-1)}}},~n<k$, $d^{\gamma _{n}\gamma _{k}\gamma _{k}}={\sqrt {\frac {2}{n(n-1)}}},~k<n$, $d^{\gamma _{n}\gamma _{n}\gamma _{n}}=(2-n){\sqrt {\frac {2}{n(n-1)}}}$. For more details on the derivation see [3] and [4].

Examples from other algebras

Hall polynomials

The Hall polynomials are the structure constants of the Hall algebra.

Hopf algebras

In addition to the product, the coproduct and the antipode of a Hopf algebra can be expressed in terms of structure constants. The connecting axiom, which defines a consistency condition on the Hopf algebra, can be expressed as a relation between these various structure constants.

Applications

• A connected Lie group is abelian exactly when all structure constants are 0.
• A Lie group is real exactly when its structure constants are real.
• The structure constants are completely anti-symmetric in all indices if and only if the Lie algebra is a direct sum of simple compact Lie algebras.
• A nilpotent Lie group admits a lattice if and only if its Lie algebra admits a basis with rational structure constants: this is Malcev's criterion. Not all nilpotent Lie groups admit lattices; for more details, see also Raghunathan.[5]
• In quantum chromodynamics, the symbol $G_{\mu \nu }^{a}\,$ represents the gauge covariant gluon field strength tensor, analogous to the electromagnetic field strength tensor, $F_{\mu \nu }$, in quantum electrodynamics. It is given by:[6] $G_{\mu \nu }^{a}=\partial _{\mu }{\mathcal {A}}_{\nu }^{a}-\partial _{\nu }{\mathcal {A}}_{\mu }^{a}+gf^{abc}{\mathcal {A}}_{\mu }^{b}{\mathcal {A}}_{\nu }^{c}\,,$ where $f^{abc}$ are the structure constants of SU(3). Note that the rules to raise or lower the a, b, or c indices are trivial, (+,..., +), so that $f^{abc}=f_{abc}=f^{a}{}_{bc}$, whereas for the μ or ν indices one has the non-trivial relativistic rules, corresponding e.g. to the metric signature (+ − − −).

Choosing a basis for a Lie algebra

One conventional approach to providing a basis for a Lie algebra is by means of the so-called "ladder operators" appearing as eigenvectors of the Cartan subalgebra. The construction of this basis, using conventional notation, is quickly sketched here. An alternative construction (the Serre construction) can be found in the article semisimple Lie algebra. Given a Lie algebra ${\mathfrak {g}}$, the Cartan subalgebra ${\mathfrak {h}}\subset {\mathfrak {g}}$ is a maximal abelian subalgebra. By definition, it consists of those elements that commute with one another. An orthonormal basis can be freely chosen on ${\mathfrak {h}}$; write this basis as $H_{1},\cdots ,H_{r}$ with $\langle H_{i},H_{j}\rangle =\delta _{ij}$ where $\langle \cdot ,\cdot \rangle $ is the inner product on the vector space. The dimension $r$ of this subalgebra is called the rank of the algebra.
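For instance, ${\mathfrak {sl}}(2,\mathbb {C} )$, the complexification of ${\mathfrak {su}}(2)$, has rank 1: a Cartan subalgebra is spanned by $H={\tfrac {1}{2}}\sigma _{3}$, and the ladder operators are $E_{\pm }={\tfrac {1}{2}}(\sigma _{1}\pm i\sigma _{2})$, satisfying $[H,E_{\pm }]=\pm E_{\pm }$ and $[E_{+},E_{-}]=2H$, so the roots are ±1.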
In the adjoint representation, the matrices $\mathrm {ad} (H_{i})$ are mutually commuting, and can be simultaneously diagonalized. The matrices $\mathrm {ad} (H_{i})$ have (simultaneous) eigenvectors; those with a non-zero eigenvalue $\alpha $ are conventionally denoted by $E_{\alpha }$. Together with the $H_{i}$ these span the entire vector space ${\mathfrak {g}}$. The commutation relations are then $[H_{i},H_{j}]=0\quad {\mbox{and}}\quad [H_{i},E_{\alpha }]=\alpha _{i}E_{\alpha }$ The eigenvectors $E_{\alpha }$ are determined only up to overall scale; one conventional normalization is to set $\langle E_{\alpha },E_{-\alpha }\rangle =1$ This allows the remaining commutation relations to be written as $[E_{\alpha },E_{-\alpha }]=\alpha _{i}H_{i}$ and $[E_{\alpha },E_{\beta }]=N_{\alpha ,\beta }E_{\alpha +\beta }$ with this last subject to the condition that the roots (defined below) $\alpha ,\beta $ sum to a non-zero value: $\alpha +\beta \neq 0$. The $E_{\alpha }$ are sometimes called ladder operators, as they have this property of raising/lowering the value of $\beta $. For a given $\alpha $, there are as many $\alpha _{i}$ as there are $H_{i}$, and so one may define the vector $\alpha =\alpha _{i}H_{i}$; this vector is termed a root of the algebra. The roots of Lie algebras appear in regular structures (for example, in simple Lie algebras, the roots can have only two different lengths); see root system for details. The structure constants $N_{\alpha ,\beta }$ have the property that they are non-zero only when $\alpha +\beta $ is a root. In addition, they are antisymmetric: $N_{\alpha ,\beta }=-N_{\beta ,\alpha }$ and can always be chosen such that $N_{\alpha ,\beta }=-N_{-\alpha ,-\beta }$ They also obey cocycle conditions:[7] $N_{\alpha ,\beta }=N_{\beta ,\gamma }=N_{\gamma ,\alpha }$ whenever $\alpha +\beta +\gamma =0$, and also that $N_{\alpha ,\beta }N_{\gamma ,\delta }+N_{\beta ,\gamma }N_{\alpha ,\delta }+N_{\gamma ,\alpha }N_{\beta ,\delta }=0$ whenever $\alpha +\beta +\gamma +\delta =0$.

References

1. Fulton, William; Harris, Joe (1991). Representation Theory. A First Course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
2. Weinberg, Steven (1995). The Quantum Theory of Fields. Vol. 1: Foundations. Cambridge University Press. ISBN 0-521-55001-7.
3. Bossion, D.; Huo, P. (2021). "General Formulas of the Structure Constants in the 𝔰𝔲(N) Lie Algebra". arXiv:2108.07219 [math-ph].
4. Bossion, D.; Ying, W.; Chowdhury, S. N.; Huo, P. (2022). "Non-adiabatic mapping dynamics in the phase space of the SU(N) Lie group". J. Chem. Phys. 157 (8): 084105. Bibcode:2022JChPh.157h4105B. doi:10.1063/5.0094893. PMID 36049982. S2CID 251187368.
5. Raghunathan, Madabusi S. (2012) [1972]. "2. Lattices in Nilpotent Lie Groups". Discrete Subgroups of Lie Groups. Springer. ISBN 978-3-642-86428-5.
6. Eidemüller, M.; Dosch, H.G.; Jamin, M. (2000) [1999]. "The field strength correlator from QCD sum rules". Nucl. Phys. B Proc. Suppl. 86 (1–3): 421–425. arXiv:hep-ph/9908318. Bibcode:2000NuPhS..86..421E. doi:10.1016/S0920-5632(00)00598-3. S2CID 18237543.
7. Cornwell, J.F. (1984). Group Theory in Physics. Vol. 2: Lie Groups and Their Applications. Academic Press. ISBN 0121898040. OCLC 969857292.
Symmetric cone

In mathematics, symmetric cones, sometimes called domains of positivity, are open convex self-dual cones in Euclidean space which have a transitive group of symmetries, i.e. invertible operators that take the cone onto itself. By the Koecher–Vinberg theorem these correspond to the cone of squares in finite-dimensional real Euclidean Jordan algebras, originally studied and classified by Jordan, von Neumann & Wigner (1934). The tube domain associated with a symmetric cone is a noncompact Hermitian symmetric space of tube type. All the algebraic and geometric structures associated with the symmetric space can be expressed naturally in terms of the Jordan algebra. The other irreducible Hermitian symmetric spaces of noncompact type correspond to Siegel domains of the second kind. These can be described in terms of more complicated structures called Jordan triple systems, which generalize Jordan algebras without identity.[1]

Definitions

A convex cone C in a finite-dimensional real inner product space V is a convex set invariant under multiplication by positive scalars. It spans the subspace C − C and the largest subspace it contains is C ∩ (−C). It spans the whole space if and only if it contains a basis. Since the convex hull of the basis is a polytope with non-empty interior, this happens if and only if C has non-empty interior. The interior in this case is also a convex cone. Moreover, an open convex cone coincides with the interior of its closure, since any interior point in the closure must lie in the interior of some polytope in the original cone. A convex cone is said to be proper if its closure, also a cone, contains no subspaces.

Let C be an open convex cone. Its dual is defined as $\displaystyle {C^{*}=\{X:(X,Y)>0\,\,\mathrm {for} \,\,Y\in {\overline {C}}\setminus \{0\}\}.}$ It is also an open convex cone and C** = C.[2] An open convex cone C is said to be self-dual if C* = C. It is necessarily proper, since it does not contain 0, so cannot contain both X and −X.

The automorphism group of an open convex cone is defined by $\displaystyle {\mathrm {Aut} \,C=\{g\in \mathrm {GL} (V)|gC=C\}.}$ Clearly g lies in Aut C if and only if g takes the closure of C onto itself. So Aut C is a closed subgroup of GL(V) and hence a Lie group. Moreover, Aut C* = (Aut C)*, where g* is the adjoint of g. C is said to be homogeneous if Aut C acts transitively on C. The open convex cone C is called a symmetric cone if it is self-dual and homogeneous.

Group theoretic properties

• If C is a symmetric cone, then Aut C is closed under taking adjoints.
• The identity component Aut0 C acts transitively on C.
• The stabilizers of points are maximal compact subgroups, all conjugate, and exhaust the maximal compact subgroups of Aut C.
• In Aut0 C the stabilizers of points are maximal compact subgroups, all conjugate, and exhaust the maximal compact subgroups of Aut0 C.
• The maximal compact subgroups of Aut0 C are connected.
• The component group of Aut C is isomorphic to the component group of a maximal compact subgroup and therefore finite.
• Aut C ∩ O(V) and Aut0 C ∩ O(V) are maximal compact subgroups in Aut C and Aut0 C.
• C is naturally a Riemannian symmetric space isomorphic to G / K where G = Aut0 C. The Cartan involution is defined by σ(g) = (g*)−1, so that K = G ∩ O(V).
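A concrete illustration is the cone of positive-definite symmetric matrices, the positive cone of the Jordan algebra Hn(R) discussed below. The following sketch (our own, assuming NumPy; the helper names are ours) checks homogeneity, via the congruence action Y ↦ gYgᵀ with g the square root of a cone element, and the positivity of the trace pairing underlying self-duality:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pd(n):
    """A random symmetric positive-definite matrix (a point of the cone C)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

def in_cone(X, tol=1e-10):
    return np.allclose(X, X.T) and np.linalg.eigvalsh(X).min() > tol

n = 3
X = random_pd(n)

# Homogeneity: Y -> g Y g^T lies in Aut C for invertible g, and with
# g = X^{1/2} it carries the base point I to X.
w, V = np.linalg.eigh(X)
g = V @ np.diag(np.sqrt(w)) @ V.T            # positive square root of X
print(np.allclose(g @ np.eye(n) @ g.T, X))   # True: transitive action on C
print(in_cone(g @ random_pd(n) @ g.T))       # True: the cone is preserved

# Self-duality (trace inner product): (X, Y) > 0 for X, Y in C.
print(np.trace(X @ random_pd(n)) > 0)        # True
```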
Spectral decomposition in a Euclidean Jordan algebra

See also: Formally real Jordan algebra

In their classic paper, Jordan, von Neumann & Wigner (1934) studied and completely classified a class of finite-dimensional Jordan algebras, that are now called either Euclidean Jordan algebras or formally real Jordan algebras.

Definition

Let E be a finite-dimensional real vector space with a symmetric bilinear product operation $\displaystyle {E\times E\rightarrow E,\,\,\,a,b\mapsto ab=ba,}$ with an identity element 1 such that a1 = a for a in E, and a real inner product (a,b) for which the multiplication operators L(a), defined by L(a)b = ab on E, are self-adjoint and satisfy the Jordan relation $\displaystyle {L(a)L(a^{2})=L(a^{2})L(a).}$ As will turn out below, the condition on adjoints can be replaced by the equivalent condition that the trace form Tr L(ab) defines an inner product. The trace form has the advantage of being manifestly invariant under automorphisms of the Jordan algebra, so that the automorphism group is a closed subgroup of O(E) and thus a compact Lie group. In practical examples, however, it is often easier to produce an inner product for which the L(a) are self-adjoint than to verify directly the positive-definiteness of the trace form. (The equivalent original condition of Jordan, von Neumann and Wigner was that if a sum of squares of elements vanishes then each of those elements has to vanish.[3])

Power associativity

From the Jordan condition it follows that the Jordan algebra is power associative, i.e. the Jordan subalgebra generated by any single element a in E is actually an associative commutative algebra. Thus, defining $a^{n}$ inductively by $a^{n}=a(a^{n-1})$, the following associativity relation holds: $\displaystyle {a^{m}a^{n}=a^{m+n},}$ so the subalgebra can be identified with R[a], the polynomials in a. In fact polarizing the Jordan relation—replacing a by a + tb and taking the coefficient of t—yields $\displaystyle {2L(ab)L(a)+L(a^{2})L(b)=2L(a)L(b)L(a)+L(a^{2}b).}$ This identity implies that $L(a^{m})$ is a polynomial in L(a) and $L(a^{2})$ for all m. In fact, assuming the result for exponents lower than m, $\displaystyle {a^{2}a^{m-1}=a^{m-1}(a^{2})=L(a^{m-1})L(a)a=L(a)L(a^{m-1})a=L(a)a^{m}=a^{m+1}.}$ Setting $b=a^{m-1}$ in the polarized Jordan identity gives: $\displaystyle {L(a^{m+1})=2L(a^{m})L(a)+L(a^{2})L(a^{m-1})-2L(a)^{2}L(a^{m-1}),}$ a recurrence relation showing inductively that $L(a^{m+1})$ is a polynomial in L(a) and $L(a^{2})$. Consequently, if power associativity holds when the first exponent is ≤ m, then it also holds for m + 1 since $\displaystyle {L(a^{m+1})a^{n}=2L(a)L(a^{m})a^{n}+L(a^{2})L(a^{m-1})a^{n}-2L(a)^{2}L(a^{m-1})a^{n}=a^{m+n+1}.}$

Idempotents and rank

An element e in E is called an idempotent if e2 = e. Two idempotents are said to be orthogonal if ef = 0. This is equivalent to orthogonality with respect to the inner product, since (ef,ef) = (e,f). In this case g = e + f is also an idempotent. An idempotent g is called primitive or minimal if it cannot be written as a sum of non-zero orthogonal idempotents. If e1, ..., em are pairwise orthogonal idempotents then their sum is also an idempotent and the algebra they generate consists of all linear combinations of the ei. It is an associative algebra. If e is an idempotent, then 1 − e is an orthogonal idempotent. An orthogonal set of idempotents with sum 1 is said to be a complete set or a partition of 1. If each idempotent in the set is minimal it is called a Jordan frame.
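For a quick numerical illustration of this definition, take E = Hn(R), the symmetric n × n matrices with a ∘ b = 1/2(ab + ba) and the trace inner product (a sketch of our own, assuming NumPy; the helper jordan is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def sym():
    """A random element of H_n(R)."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

def jordan(a, b):
    """Jordan product a o b = (ab + ba)/2 on symmetric matrices."""
    return (a @ b + b @ a) / 2

a, b, c = sym(), sym(), sym()
a2 = jordan(a, a)

# Jordan relation L(a)L(a^2) = L(a^2)L(a), applied to b:
print(np.allclose(jordan(a, jordan(a2, b)), jordan(a2, jordan(a, b))))

# Self-adjointness of L(a) for the trace form: Tr((a o b)c) = Tr(b(a o c)):
print(np.isclose(np.trace(jordan(a, b) @ c), np.trace(b @ jordan(a, c))))
```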
Since the number of elements in any orthogonal set of idempotents is bounded by dim E, Jordan frames exist. The maximal number of elements in a Jordan frame is called the rank r of E.

Spectral decomposition

The spectral theorem states that any element a can be uniquely written as $\displaystyle {a=\sum \lambda _{i}e_{i},}$ where the idempotents ei form a partition of 1 and the λi, the eigenvalues of a, are real and distinct. In fact let E0 = R[a] and let T be the restriction of L(a) to E0. T is self-adjoint and has 1 as a cyclic vector. So the commutant of T consists of polynomials in T (or a). By the spectral theorem for self-adjoint operators, $\displaystyle {T=\sum \lambda _{i}P_{i}}$ where the Pi are orthogonal projections on E0 with sum I and the λi are the distinct real eigenvalues of T. Since the Pi commute with T and are self-adjoint, they are given by multiplication by elements ei of R[a] and thus form a partition of 1. Uniqueness follows because if fi is a partition of 1 and a = Σ μi fi, then with p(t) = Π (t − μj) and pi = p/(t − μi), fi = pi(a)/pi(μi). So the fi are polynomials in a and uniqueness follows from uniqueness of the spectral decomposition of T.

The spectral theorem implies that the rank is independent of the Jordan frame. Indeed, a Jordan frame with k minimal idempotents can be used to construct an element a with k distinct eigenvalues. As above the minimal polynomial p of a has degree k and R[a] has dimension k. Its dimension is also the largest k such that Fk(a) ≠ 0 where Fk(a) is the determinant of a Gram matrix: $\displaystyle {F_{k}(a)=\det _{0\leq m,n<k}(a^{m},a^{n}).}$ So the rank r is the largest integer k for which Fk is not identically zero on E. In this case, as a non-vanishing polynomial, Fr is non-zero on an open dense subset of E: the regular elements. Any other a is a limit of regular elements a(n). Since the operator norm of L(x) gives an equivalent norm on E, a standard compactness argument shows that, passing to a subsequence if necessary, the spectral idempotents of the a(n) and their corresponding eigenvalues are convergent. The limit of Jordan frames is a Jordan frame, since a limit of non-zero idempotents yields a non-zero idempotent by continuity of the operator norm. It follows that every Jordan frame is made up of r minimal idempotents.

If e and f are orthogonal idempotents, the spectral theorem shows that e and f are polynomials in a = e − f, so that L(e) and L(f) commute. This can be seen directly from the polarized Jordan identity, which implies L(e)L(f) = 2L(e)L(f)L(e). Commutativity follows by taking adjoints.

Spectral decomposition for an idempotent

If e is a non-zero idempotent then the eigenvalues of L(e) can only be 0, 1/2 and 1, since taking a = b = e in the polarized Jordan identity yields $\displaystyle {2L(e)^{3}-3L(e)^{2}+L(e)=0.}$ In particular the operator norm of L(e) is 1 and its trace is strictly positive. There is a corresponding orthogonal eigenspace decomposition of E $\displaystyle {E=E_{0}(e)\oplus E_{1/2}(e)\oplus E_{1}(e),}$ where, for a in E, Eλ(a) denotes the λ-eigenspace of L(a). In this decomposition E1(e) and E0(e) are Jordan algebras with identity elements e and 1 − e. Their sum E1(e) ⊕ E0(e) is a direct sum of Jordan algebras in that any product between them is zero. It is the centralizer subalgebra of e and consists of all a such that L(a) commutes with L(e).
The subspace E1/2(e) is a module for the centralizer of e, the centralizer module, and the product of any two elements in it lies in the centralizer subalgebra. On the other hand, if $\displaystyle {U=8L(e)^{2}-8L(e)+I,}$ then U is self-adjoint, equal to 1 on the centralizer algebra and −1 on the centralizer module. So U2 = I and the properties above show that $\displaystyle {\sigma (x)=Ux}$ defines an involutive Jordan algebra automorphism σ of E.

In fact the Jordan algebra and module properties follow by replacing a and b in the polarized Jordan identity by e and a. If ea = 0, this gives L(e)L(a) = 2L(e)L(a)L(e). Taking adjoints it follows that L(a) commutes with L(e). Similarly if (1 − e)a = 0, L(a) commutes with I − L(e) and hence L(e). This implies the Jordan algebra and module properties. To check that a product of elements in the module lies in the algebra, it is enough to check this for squares: but if L(e)a = 1/2 a, then ea = 1/2 a, so L(a)2 + L(a2)L(e) = 2L(a)L(e)L(a) + L(a2e). Taking adjoints it follows that L(a2) commutes with L(e), which implies the property for squares.

Trace form

The trace form is defined by $\displaystyle {\tau (a,b)=\mathrm {Tr} \,L(ab).}$ It is an inner product since, for non-zero a = Σ λi ei, $\displaystyle {\tau (a,a)=\sum \lambda _{i}^{2}\mathrm {Tr} \,L(e_{i})>0.}$ The polarized Jordan identity can be polarized again by replacing a by a + tc and taking the coefficient of t. A further antisymmetrization in a and c yields: $\displaystyle {L(a(bc)-(ab)c)=[[L(a),L(b)],L(c)].}$ Applying the trace to both sides gives $\displaystyle {\tau (a,bc)=\tau (ba,c),}$ so that L(b) is self-adjoint for the trace form.

Simple Euclidean Jordan algebras

The classification of simple Euclidean Jordan algebras was accomplished by Jordan, von Neumann & Wigner (1934), with details of the one exceptional algebra provided in the article immediately following theirs by Albert (1934). Using the Peirce decomposition, they reduced the problem to an algebraic problem involving multiplicative quadratic forms already solved by Hurwitz. The presentation here, following Faraut & Koranyi (1994), using composition algebras or Euclidean Hurwitz algebras, is a shorter version of the original derivation.

Central decomposition

If E is a Euclidean Jordan algebra, an ideal F in E is a linear subspace closed under multiplication by elements of E, i.e. F is invariant under the operators L(a) for a in E. If P is the orthogonal projection onto F, it commutes with the operators L(a). In particular F⊥ = (I − P)E is also an ideal and E = F ⊕ F⊥. Furthermore, if e = P(1), then P = L(e). In fact for a in E $\displaystyle {ea=ae=L(a)P(1)=P(L(a)1)=P(a),}$ so that ea = a for a in F and 0 for a in F⊥. In particular e and 1 − e are orthogonal idempotents with L(e) = P and L(1 − e) = I − P. e and 1 − e are the identities in the Euclidean Jordan algebras F and F⊥. The idempotent e is central in E, where the center of E is defined to be the set of all z such that L(z) commutes with L(a) for all a. It forms a commutative associative subalgebra.

Continuing in this way E can be written as a direct sum of minimal ideals $\displaystyle {E=\oplus E_{i}.}$ If Pi is the projection onto Ei and ei = Pi(1) then Pi = L(ei). The ei are orthogonal with sum 1 and are the identities in Ei. Minimality forces Ei to be simple, i.e. to have no non-trivial ideals. For since L(ei) commutes with all the L(a), any ideal F ⊂ Ei would be invariant under E since F = eiF.
Such a decomposition into a direct sum of simple Euclidean algebras is unique. If E = ⊕ Fj is another decomposition, then Fj = ⊕ eiFj. By minimality only one of the terms here is non-zero, so equals Fj. By minimality the corresponding Ei equals Fj, proving uniqueness. In this way the classification of Euclidean Jordan algebras is reduced to that of simple ones. For a simple algebra E all inner products for which the operators L(a) are self-adjoint are proportional. Indeed, any other product has the form (Ta, b) for some positive self-adjoint operator commuting with the L(a). Any non-zero eigenspace of T is an ideal in E and therefore by simplicity T must act on the whole of E as a positive scalar.

List of all simple Euclidean Jordan algebras

• Let Hn(R) be the space of real symmetric n by n matrices with inner product (a,b) = Tr ab and Jordan product a ∘ b = 1/2(ab + ba). Then Hn(R) is a simple Euclidean Jordan algebra of rank n for n ≥ 3.
• Let Hn(C) be the space of complex self-adjoint n by n matrices with inner product (a,b) = Re Tr ab* and Jordan product a ∘ b = 1/2(ab + ba). Then Hn(C) is a simple Euclidean Jordan algebra of rank n for n ≥ 3.
• Let Hn(H) be the space of self-adjoint n by n matrices with entries in the quaternions, inner product (a,b) = Re Tr ab* and Jordan product a ∘ b = 1/2(ab + ba). Then Hn(H) is a simple Euclidean Jordan algebra of rank n for n ≥ 3.
• Let V be a finite dimensional real inner product space and set E = V ⊕ R with inner product (u⊕λ,v⊕μ) = (u,v) + λμ and product (u⊕λ)∘(v⊕μ) = (μu + λv) ⊕ [(u,v) + λμ]. This is a Euclidean Jordan algebra of rank 2, called a spin factor.
• The above examples in fact give all the simple Euclidean Jordan algebras, except for one exceptional case H3(O), the self-adjoint matrices over the octonions or Cayley numbers, another rank 3 simple Euclidean Jordan algebra of dimension 27 (see below).

The Jordan algebras H2(R), H2(C), H2(H) and H2(O) are isomorphic to spin factors V ⊕ R where V has dimension 2, 3, 5 and 9, respectively: that is, one more than the dimension of the relevant division algebra.

Peirce decomposition

See also: Peirce decomposition

Let E be a simple Euclidean Jordan algebra with inner product given by the trace form τ(a,b) = Tr L(ab). The proof that E has the above form rests on constructing an analogue of matrix units for a Jordan frame in E. The following properties of idempotents hold in E.

• An idempotent e is minimal in E if and only if E1(e) has dimension one (so equals Re). Moreover E1/2(e) ≠ (0). In fact the spectral projections of any element of E1(e) lie in E so if non-zero must equal e. If the 1/2 eigenspace vanished then E1(e) = Re would be an ideal.
• If e and f are non-orthogonal minimal idempotents, then there is a period 2 automorphism σ of E such that σe = f, so that e and f have the same trace.
• If e and f are orthogonal minimal idempotents then E1/2(e) ∩ E1/2(f) ≠ (0). Moreover, there is a period 2 automorphism σ of E such that σe = f, so that e and f have the same trace, and for any a in this intersection, a2 = 1/2 τ(e) |a|2 (e + f).
• All minimal idempotents in E are in the same orbit of the automorphism group so have the same trace τ0.
• If e, f, g are three minimal orthogonal idempotents, then for a in E1/2(e) ∩ E1/2(f) and b in E1/2(f) ∩ E1/2(g), L(a)2 b = 1/8 τ0 |a|2 b and |ab|2 = 1/8 τ0 |a|2|b|2. Moreover, E1/2(e) ∩ E1/2(f) ∩ E1/2(g) = (0).
• If e1, ..., er and f1, ..., fr are Jordan frames in E, then there is an automorphism α such that αei = fi.
• If (ei) is a Jordan frame and Eii = E1(ei) and Eij = E1/2(ei) ∩ E1/2(ej), then E is the orthogonal direct sum of the Eii and the Eij. Since E is simple, the Eii are one-dimensional and the subspaces Eij are all non-zero for i ≠ j.
• If a = Σ αi ei for some Jordan frame (ei), then L(a) acts as αi on Eii and as (αi + αj)/2 on Eij.

Reduction to Euclidean Hurwitz algebras

Main article: Euclidean Hurwitz algebra

Let E be a simple Euclidean Jordan algebra. From the properties of the Peirce decomposition it follows that:

• If E has rank 2, then it has the form V ⊕ R for some inner product space V with Jordan product as described above.
• If E has rank r > 2, then there is a non-associative unital algebra A, associative if r > 3, equipped with an inner product satisfying (ab,ab) = (a,a)(b,b) and such that E = Hr(A). (Conjugation in A is defined by a* = −a + 2(a,1)1.)

Such an algebra A is called a Euclidean Hurwitz algebra. In A, if λ(a)b = ab and ρ(a)b = ba, then:

• the involution is an antiautomorphism, i.e. (a b)* = b* a*
• a a* = ‖ a ‖2 1 = a* a
• λ(a*) = λ(a)*, ρ(a*) = ρ(a)*, so that the involution on the algebra corresponds to taking adjoints
• Re(a b) = Re(b a), where Re x = (x + x*)/2 = (x, 1)1
• Re(a b) c = Re a(b c)
• λ(a2) = λ(a)2, ρ(a2) = ρ(a)2, so that A is an alternative algebra.

By Hurwitz's theorem A must be isomorphic to R, C, H or O. The first three are associative division algebras. The octonions do not form an associative algebra, so Hr(O) can only give a Jordan algebra for r = 3. Because A is associative when A = R, C or H, it is immediate that Hr(A) is a Jordan algebra for r ≥ 3. A separate argument, given originally by Albert (1934), is required to show that H3(O) with Jordan product a∘b = 1/2(ab + ba) satisfies the Jordan identity [L(a),L(a2)] = 0. There is a later more direct proof using the Freudenthal diagonalization theorem due to Freudenthal (1951): he proved that given any matrix in the algebra Hr(A) there is an algebra automorphism carrying the matrix onto a diagonal matrix with real entries; it is then straightforward to check that [L(a),L(b)] = 0 for real diagonal matrices.[4]

Exceptional and special Euclidean Jordan algebras

The exceptional Euclidean Jordan algebra E = H3(O) is called the Albert algebra. The Cohn–Shirshov theorem implies that it cannot be generated by two elements (and the identity). This can be seen directly. For by Freudenthal's diagonalization theorem one element X can be taken to be a diagonal matrix with real entries and the other Y to be orthogonal to the Jordan subalgebra generated by X. If all the diagonal entries of X are distinct, the Jordan subalgebra generated by X and Y is generated by the diagonal matrices and three elements $\displaystyle {Y_{1}={\begin{pmatrix}0&0&0\\0&0&y_{1}\\0&y_{1}^{*}&0\end{pmatrix}},\,\,\,Y_{2}={\begin{pmatrix}0&0&y_{2}^{*}\\0&0&0\\y_{2}&0&0\end{pmatrix}},\,\,\,Y_{3}={\begin{pmatrix}0&y_{3}&0\\y_{3}^{*}&0&0\\0&0&0\end{pmatrix}}.}$ It is straightforward to verify that the real linear span of the diagonal matrices, these matrices and similar matrices with real entries forms a unital Jordan subalgebra. If the diagonal entries of X are not distinct, X can be taken to be the primitive idempotent e1 with diagonal entries 1, 0 and 0. The analysis in Springer & Veldkamp (2000) then shows that the unital Jordan subalgebra generated by X and Y is proper.
Indeed, if 1 − e1 is the sum of two primitive idempotents in the subalgebra, then, after applying an automorphism of E if necessary, the subalgebra will be generated by the diagonal matrices and a matrix orthogonal to the diagonal matrices. By the previous argument it will be proper. If 1 − e1 is a primitive idempotent, the subalgebra must be proper, by the properties of the rank in E.

A Euclidean algebra is said to be special if its central decomposition contains no copies of the Albert algebra. Since the Albert algebra cannot be generated by two elements, it follows that a Euclidean Jordan algebra generated by two elements is special. This is the Shirshov–Cohn theorem for Euclidean Jordan algebras.[5]

The classification shows that each non-exceptional simple Euclidean Jordan algebra is a subalgebra of some Hn(R). The same is therefore true of any special algebra. On the other hand, as Albert (1934) showed, the Albert algebra H3(O) cannot be realized as a subalgebra of Hn(R) for any n.[6] Indeed, let π be a real-linear map of E = H3(O) into the self-adjoint operators on V = Rn with π(ab) = 1/2(π(a)π(b) + π(b)π(a)) and π(1) = I. If e1, e2, e3 are the diagonal minimal idempotents then Pi = π(ei) are mutually orthogonal projections on V onto orthogonal subspaces Vi. If i ≠ j, the elements eij of E with 1 in the (i,j) and (j,i) entries and 0 elsewhere satisfy eij2 = ei + ej. Moreover, eijejk = 1/2 eik if i, j and k are distinct. The operators Tij = π(eij) are zero on Vk (k ≠ i, j) and restrict to involutions on Vi ⊕ Vj interchanging Vi and Vj. Letting Pij = Pi Tij Pj and setting Pii = Pi, the (Pij) form a system of matrix units on V, i.e. Pij* = Pji, Σ Pii = I and PijPkm = δjk Pim. Let Ei and Eij be the subspaces of the Peirce decomposition of E. For x in O, set πij(x) = Pij π(xeij), regarded as an operator on Vi. This does not depend on j and for x, y in O $\displaystyle {\pi _{ij}(xy)=\pi _{ij}(x)\pi _{ij}(y),\,\,\,\pi _{ij}(1)=I.}$ Since every non-zero x in O has a right inverse y with xy = 1, the map πij is injective. On the other hand, it is an algebra homomorphism from the nonassociative algebra O into the associative algebra End Vi, a contradiction.[7]

Positive cone in a Euclidean Jordan algebra

Definition

When (ei) is a partition of 1 in a Euclidean Jordan algebra E, the self-adjoint operators L(ei) commute and there is a decomposition into simultaneous eigenspaces. If a = Σ λi ei the eigenvalues of L(a) have the form Σ εi λi, where each εi is 0, 1/2 or 1. The ei themselves give the eigenvalues λi. In particular an element a has non-negative spectrum if and only if L(a) has non-negative spectrum. Moreover, a has positive spectrum if and only if L(a) has positive spectrum. For if a has positive spectrum, a − ε1 has non-negative spectrum for some ε > 0.

The positive cone C in E is defined to be the set of elements a such that a has positive spectrum. This condition is equivalent to the operator L(a) being a positive self-adjoint operator on E.

• C is a convex cone in E because positivity of a self-adjoint operator T—the property that its eigenvalues be strictly positive—is equivalent to (Tv,v) > 0 for all v ≠ 0.
• C is open because the positive matrices are open in the self-adjoint matrices and L is a continuous map: in fact, if the lowest eigenvalue of T is ε > 0, then T + S is positive whenever ||S|| < ε.
• The closure of C consists of all a such that L(a) is non-negative or equivalently a has non-negative spectrum. From the elementary properties of convex cones, C is the interior of its closure and is a proper cone.
The elements in the closure of C are precisely the squares of elements in E.

• C is self-dual. In fact the elements of the closure of C are just the set of all squares x2 in E, while the dual cone is given by all a such that (a,x2) > 0. On the other hand, (a,x2) = (L(a)x,x), so this is equivalent to the positivity of L(a).[8]

Quadratic representation

To show that the positive cone C is homogeneous, i.e. has a transitive group of automorphisms, a generalization of the quadratic action of self-adjoint matrices on themselves given by X ↦ YXY has to be defined. If Y is invertible and self-adjoint, this map is invertible and carries positive operators onto positive operators. For a in E, define an endomorphism of E, called the quadratic representation, by[9] $\displaystyle {Q(a)=2L(a)^{2}-L\left(a^{2}\right).}$ Note that for self-adjoint matrices L(X)Y = 1/2(XY + YX), so that Q(X)Y = XYX.

An element a in E is called invertible if it is invertible in R[a]. If b denotes the inverse, then the spectral decomposition of a shows that L(a) and L(b) commute. In fact a is invertible if and only if Q(a) is invertible. In that case $\displaystyle {Q(a)^{-1}a=a^{-1},\,\,\,Q\left(a^{-1}\right)=Q(a)^{-1}.}$ Indeed, if Q(a) is invertible it carries R[a] onto itself. On the other hand, Q(a)1 = a2, so $\displaystyle {\left(Q(a)^{-1}a\right)a=aQ(a)^{-1}a=L(a)Q(a)^{-1}a=Q(a)^{-1}a^{2}=1.}$ Taking b = a−1 in the polarized Jordan identity yields $\displaystyle {Q(a)L\left(a^{-1}\right)=L(a).}$ Replacing a by its inverse, the relation follows if L(a) and L(a−1) are invertible. If not, it holds for a + ε1 with ε arbitrarily small and hence also in the limit.

• If a and b are invertible then so is Q(a)b and it satisfies the inverse identity: $\displaystyle {(Q(a)b)^{-1}=Q\left(a^{-1}\right)b^{-1}.}$
• The quadratic representation satisfies the following fundamental identity: $\displaystyle {Q(Q(a)b)=Q(a)Q(b)Q(a).}$
• In particular, taking b to be non-negative powers of a, it follows by induction that $\displaystyle {Q\left(a^{m}\right)=Q(a)^{m}.}$

These identities are easy to prove in a finite-dimensional (Euclidean) Jordan algebra (see below) or in a special Jordan algebra, i.e. the Jordan algebra defined by a unital associative algebra.[10] They are valid in any Jordan algebra. This was conjectured by Jacobson and proved in Macdonald (1960): Macdonald showed that if a polynomial identity in three variables, linear in the third, is valid in any special Jordan algebra, then it holds in all Jordan algebras.[11]

In fact for c in A and F(a) a function on A with values in End A, let DcF(a) be the derivative at t = 0 of F(a + tc). Then $\displaystyle {c=D_{c}\left(Q(a)a^{-1}\right)=2\left[(L(a)L(c)+L(c)L(a)-L(ac))a^{-1}\right]+Q(a)D_{c}\left(a^{-1}\right)=2c+Q(a)D_{c}\left(a^{-1}\right).}$ The expression in square brackets simplifies to c because L(a) commutes with L(a−1). Thus $\displaystyle {D_{c}\left(a^{-1}\right)=-Q(a)^{-1}c.}$ Applying Dc to L(a−1)Q(a) = L(a) and acting on b = c−1 yields $\displaystyle {(Q(a)b)\left(Q\left(a^{-1}\right)b^{-1}\right)=1.}$ On the other hand, L(Q(a)b) is invertible on an open dense set where Q(a)b must also be invertible with $\displaystyle {(Q(a)b)^{-1}=Q\left(a^{-1}\right)b^{-1}.}$ Taking the derivative Dc in the variable b in the expression above gives $\displaystyle {-Q(Q(a)b)^{-1}Q(a)c=-Q(a)^{-1}Q(b)^{-1}c.}$ This yields the fundamental identity for a dense set of invertible elements, so it follows in general by continuity.
The fundamental identity implies that c = Q(a)b is invertible if a and b are invertible and gives a formula for the inverse of Q(c). Applying it to c gives the inverse identity in full generality. Finally it can be verified immediately from the definitions that, if u = 1 − 2e for some idempotent e, then Q(u) is the period 2 automorphism constructed above for the centralizer algebra and module of e.

Homogeneity of positive cone

If a is an invertible operator and b is in the positive cone C, then so is Q(a)b. The proof of this relies on elementary continuity properties of eigenvalues of self-adjoint operators.[12]

Let T(t) (α ≤ t ≤ β) be a continuous family of self-adjoint operators on E with T(α) positive and T(β) having a negative eigenvalue. Set S(t) = −T(t) + M with M > 0 chosen so large that S(t) is positive for all t. The operator norm ||S(t)|| is continuous. It is less than M for t = α and greater than M for t = β. So for some α < s < β, ||S(s)|| = M and there is a vector v ≠ 0 such that S(s)v = Mv. In particular T(s)v = 0, so that T(s) is not invertible.

Suppose that x = Q(a)b does not lie in C. Let b(t) = (1 − t)1 + tb with 0 ≤ t ≤ 1. By convexity b(t) lies in C. Let x(t) = Q(a)b(t) and X(t) = L(x(t)). If X(t) is invertible for all t with 0 ≤ t ≤ 1, the eigenvalue argument gives a contradiction since it is positive at t = 0 and has negative eigenvalues at t = 1. So X(s) has a zero eigenvalue for some s with 0 < s ≤ 1: X(s)w = 0 with w ≠ 0. By the properties of the quadratic representation, x(t) is invertible for all t. Let Y(t) = L(x(t)2). This is a positive operator since x(t)2 lies in C. Let T(t) = Q(x(t)), an invertible self-adjoint operator by the invertibility of x(t). On the other hand, T(t) = 2X(t)2 − Y(t). So (T(s)w,w) < 0 since Y(s) is positive and X(s)w = 0. In particular T(s) has some negative eigenvalues. On the other hand, the operator T(0) = Q(a2) = Q(a)2 is positive. By the eigenvalue argument, T(t) has eigenvalue 0 for some t with 0 < t < s, a contradiction.

It follows that the linear operators Q(a) with a invertible, and their inverses, take the cone C onto itself. Indeed, the inverse of Q(a) is just Q(a−1). Since Q(a)1 = a2, there is thus a transitive group of symmetries: C is a symmetric cone.

Euclidean Jordan algebra of a symmetric cone

Construction

Let C be a symmetric cone in the Euclidean space E. As above, Aut C denotes the closed subgroup of GL(E) taking C (or equivalently its closure) onto itself. Let G = Aut0 C be its identity component and let K = G ∩ O(E). K is a maximal compact subgroup of G and the stabilizer of a point e in C. It is connected. The group G is invariant under taking adjoints. Let σ(g) = (g*)−1, a period 2 automorphism. Thus K is the fixed point subgroup of σ. Let ${\mathfrak {g}}$ be the Lie algebra of G. Thus σ induces an involution of ${\mathfrak {g}}$ and hence a ±1 eigenspace decomposition $\displaystyle {{\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {p}},}$ where ${\mathfrak {k}}$, the +1 eigenspace, is the Lie algebra of K and ${\mathfrak {p}}$ is the −1 eigenspace. Thus ${\mathfrak {p}}$⋅e is an affine subspace of dimension dim ${\mathfrak {p}}$. Since C = G/K is an open subspace of E, it follows that dim E = dim ${\mathfrak {p}}$ and hence ${\mathfrak {p}}$⋅e = E. For a in E let L(a) be the unique element of ${\mathfrak {p}}$ such that L(a)e = a. Define a ∘ b = L(a)b. Then E with its Euclidean structure and this bilinear product is a Euclidean Jordan algebra with identity 1 = e.
The convex cone C coincides with the positive cone of E.[13]

Since the elements of ${\mathfrak {p}}$ are self-adjoint, L(a)* = L(a). The product is commutative since [${\mathfrak {p}}$, ${\mathfrak {p}}$] ⊆ ${\mathfrak {k}}$ annihilates e, so that ab = L(a)L(b)e = L(b)L(a)e = ba. It remains to check the Jordan identity [L(a),L(a2)] = 0. The associator is given by [a,b,c] = [L(a),L(c)]b. Since [L(a),L(c)] lies in ${\mathfrak {k}}$ it follows that [[L(a),L(c)],L(b)] = L([a,b,c]). Making both sides act on c yields $\displaystyle {[a,b^{2},c]=2[a,b,c]b.}$ On the other hand, $\displaystyle {([b^{2},a,b],c)=(b^{2}(ba)-b(b^{2}a),c)=-(b^{2},[a,b,c])}$ and likewise $\displaystyle {([b^{2},a,b],c)=(b,[a,b^{2},c]).}$ Combining these expressions gives $\displaystyle {([b^{2},a,b],c)=0,}$ which implies the Jordan identity.

Finally the positive cone of E coincides with C. This depends on the fact that in any Euclidean Jordan algebra E $\displaystyle {Q(e^{a})=e^{2L(a)}.}$ In fact Q(ea) is a positive operator and Q(eta) is a one-parameter group of positive operators: this follows by continuity for rational t, where it is a consequence of the behaviour of powers. So it has the form exp tX for some self-adjoint operator X. Taking the derivative at 0 gives X = 2L(a). Hence the positive cone is given by all elements $\displaystyle {e^{2a}=Q(e^{a})1=e^{2L(a)}1=e^{X}\cdot 1,}$ with X in ${\mathfrak {p}}$. Thus the positive cone of E lies inside C. Since both are self-dual, they must coincide.

Automorphism groups and trace form

Let C be the positive cone in a simple Euclidean Jordan algebra E. Aut C is the closed subgroup of GL(E) taking C (or its closure) onto itself. Let G = Aut0 C be the identity component of Aut C and let K be the closed subgroup of G fixing 1. From the group theoretic properties of cones, K is a connected compact subgroup of G and equals the identity component of the compact Lie group Aut E. Let ${\mathfrak {g}}$ and ${\mathfrak {k}}$ be the Lie algebras of G and K. G is closed under taking adjoints and K is the fixed point subgroup of the period 2 automorphism σ(g) = (g*)−1. Thus K = G ∩ SO(E). Let ${\mathfrak {p}}$ be the −1 eigenspace of σ.

• ${\mathfrak {k}}$ consists of derivations of E that are skew-adjoint for the inner product defined by the trace form.
• [[L(a),L(c)],L(b)] = L([a,b,c]).
• If a and b are in E, then D = [L(a),L(b)] is a derivation of E, so lies in ${\mathfrak {k}}$. These derivations span ${\mathfrak {k}}$.
• If a is in C, then Q(a) lies in G.
• C is the connected component of the open set of invertible elements of E containing 1. It consists of exponentials of elements of E and the exponential map gives a diffeomorphism of E onto C.
• The map a ↦ L(a) gives an isomorphism of E onto ${\mathfrak {p}}$ and eL(a) = Q(ea/2). The space of such exponentials coincides with P, the positive self-adjoint elements in G.
• For g in G and a in E, Q(g(a)) = g Q(a) g*.

Cartan decomposition

• G = P ⋅ K = K ⋅ P and the decomposition g = pk corresponds to the polar decomposition in GL(E).
• If (ei) is a Jordan frame in E, then the subspace ${\mathfrak {a}}$ of ${\mathfrak {p}}$ spanned by the L(ei) is maximal abelian in ${\mathfrak {p}}$. A = exp ${\mathfrak {a}}$ is the abelian subgroup of operators Q(a) where a = Σ λi ei with λi > 0. A is closed in P and hence in G. If b = Σ μi ei with μi > 0, then Q(ab) = Q(a)Q(b).
• ${\mathfrak {p}}$ and P are the unions of the K translates of ${\mathfrak {a}}$ and A.
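In Hn(R) the quadratic representation and the fundamental identity above can be verified directly. A small sketch (our own, assuming NumPy; Q is implemented from the definition Q(a) = 2L(a)² − L(a²)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def sym():
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

def jordan(a, b):
    return (a @ b + b @ a) / 2

def Q(a):
    """Quadratic representation Q(a) = 2 L(a)^2 - L(a^2), as a map on H_n(R)."""
    a2 = a @ a
    return lambda y: 2 * jordan(a, jordan(a, y)) - jordan(a2, y)

a, b, y = sym(), sym(), sym()

# For symmetric matrices the quadratic representation is Q(a)y = a y a:
print(np.allclose(Q(a)(y), a @ y @ a))

# Fundamental identity Q(Q(a)b) = Q(a) Q(b) Q(a), tested on y:
print(np.allclose(Q(Q(a)(b))(y), Q(a)(Q(b)(Q(a)(y)))))
```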
Iwasawa decomposition for cone

If E has Peirce decomposition relative to the Jordan frame (ei) $\displaystyle {E=\bigoplus _{i\leq j}E_{ij},}$ then ${\mathfrak {a}}$ is diagonalized by this decomposition with L(a) acting as (αi + αj)/2 on Eij, where a = Σ αi ei. Define the closed subgroup S of G by $\displaystyle {S=\{g\in G|gE_{ij}\subseteq \bigoplus _{(p,q)\geq (i,j)}E_{pq}\},}$ where the ordering on pairs p ≤ q is lexicographic. S contains the group A, since it acts as scalars on Eij. If N is the closed subgroup of S such that nx = x modulo ⊕(p,q) > (i,j) Epq, then S = AN = NA, a semidirect product with A normalizing N. Moreover, G has the following Iwasawa decomposition: $\displaystyle {G=KAN.}$ For i ≠ j let $\displaystyle {{\mathfrak {g}}_{ij}=\{X\in {\mathfrak {g}}:[L(a),X]={1 \over 2}(\alpha _{i}-\alpha _{j})X,\,\,\,\mathrm {for} \,\,\,a=\sum \alpha _{i}e_{i}\}.}$ Then the Lie algebra of N is $\displaystyle {{\mathfrak {n}}=\bigoplus _{i<j}{\mathfrak {g}}_{ij},\,\,\,\,{\mathfrak {g}}_{ij}=\{L(a)+2[L(a),L(e_{i})]:a\in E_{ij}\}.}$ Taking ordered orthonormal bases of the Eij gives a basis of E, using the lexicographic order on pairs (i,j). The group N is lower unitriangular and its Lie algebra lower triangular. In particular the exponential map is a polynomial mapping of ${\mathfrak {n}}$ onto N, with polynomial inverse given by the logarithm.

Complexification of a Euclidean Jordan algebra

Definition of complexification

Let E be a Euclidean Jordan algebra. The complexification EC = E ⊕ iE has a natural conjugation operation (a + ib)* = a − ib and a natural complex inner product and norm. The Jordan product on E extends bilinearly to EC, so that (a + ib)(c + id) = (ac − bd) + i(ad + bc). If multiplication is defined by L(a)b = ab then the Jordan axiom $\displaystyle {[L(a),L(a^{2})]=0}$ still holds by analytic continuation. Indeed, the identity above holds when a is replaced by a + tb for t real; and since the left side is then a polynomial with values in End EC vanishing for real t, it also vanishes for complex t. Analytic continuation also shows that all the formulas involving power associativity of a single element a in E, including the recursion formulas for L(am), hold in EC. Since for b in E, L(b) is still self-adjoint on EC, the adjoint relation L(a*) = L(a)* holds for a in EC. Similarly the symmetric bilinear form β(a,b) = (a,b*) satisfies β(ab,c) = β(b,ac). If the inner product comes from the trace form, then β(a,b) = Tr L(ab).

For a in EC, the quadratic representation is defined as before by Q(a) = 2L(a)2 − L(a2). By analytic continuation the fundamental identity still holds: $\displaystyle {Q(Q(a)b)=Q(a)Q(b)Q(a),\,\,\,Q(a^{m})=Q(a)^{m}\,\,(m\geq 0).}$ An element a in EC is called invertible if it is invertible in C[a]. Power associativity shows that L(a) and L(a−1) commute. Moreover, a−1 is invertible with inverse a. As in E, a is invertible if and only if Q(a) is invertible. In that case $\displaystyle {Q(a)^{-1}a=a^{-1},\,\,\,Q(a^{-1})=Q(a)^{-1}.}$ Indeed, as for E, if Q(a) is invertible it carries C[a] onto itself, while Q(a)1 = a2, so $\displaystyle {(Q(a)^{-1}a)a=aQ(a)^{-1}a=L(a)Q(a)^{-1}a=Q(a)^{-1}a^{2}=1,}$ so a is invertible. Conversely if a is invertible, taking b = a−2 in the fundamental identity shows that Q(a) is invertible. Replacing a by a−1 and b by a then shows that its inverse is Q(a−1).
Finally if a and b are invertible then so is c = Q(a)b and it satisfies the inverse identity: $\displaystyle {(Q(a)b)^{-1}=Q(a^{-1})b^{-1}.}$ Invertibility of c follows from the fundamental formula which gives Q(c) = Q(a)Q(b)Q(a). Hence $\displaystyle {c^{-1}=Q(c)^{-1}c=Q(a)^{-1}Q(b)^{-1}b=Q(a)^{-1}b^{-1}.}$ The formula $\displaystyle {Q(e^{a})=e^{2L(a)}}$ also follows by analytic continuation.

Complexification of automorphism group

Aut EC is the complexification of the compact Lie group Aut E in GL(EC). This follows because the Lie algebras of Aut EC and Aut E consist of derivations of the complex and real Jordan algebras EC and E. Under the isomorphism identifying End EC with the complexification of End E, the complex derivations are identified with the complexification of the real derivations.[14]

Structure groups

The Jordan operators L(a) are symmetric with respect to the trace form, so that L(a)t = L(a) for a in EC. The automorphism groups of E and EC consist of invertible real and complex linear operators g such that L(ga) = gL(a)g−1 and g1 = 1. Aut EC is the complexification of Aut E. Since an automorphism g preserves the trace form, g−1 = gt.

The structure groups of E and EC consist of invertible real and complex linear operators g such that $\displaystyle {Q(ga)=gQ(a)g^{t}.}$ They form groups Γ(E) and Γ(EC) with Γ(E) ⊂ Γ(EC).

• The structure group is closed under taking transposes g ↦ gt and adjoints g ↦ g*.
• The structure group contains the automorphism group. The automorphism group can be identified with the stabilizer of 1 in the structure group.
• If a is invertible, Q(a) lies in the structure group.
• If g is in the structure group and a is invertible, ga is also invertible with (ga)−1 = (gt)−1a−1.
• If E is simple, Γ(E) = Aut C × {±1}, Γ(E) ∩ O(E) = Aut E × {±1}, and the identity component of Γ(E) acts transitively on C.
• Γ(EC) is the complexification of Γ(E), which has Lie algebra ${\mathfrak {k}}\oplus {\mathfrak {p}}$.
• The structure group Γ(EC) acts transitively on the set of invertible elements in EC.
• Every g in Γ(EC) has the form g = h Q(a) with h an automorphism and a invertible.

The unitary structure group Γu(EC) is the subgroup of Γ(EC) consisting of unitary operators, so that Γu(EC) = Γ(EC) ∩ U(EC).

• The stabilizer of 1 in Γu(EC) is Aut E.
• Every g in Γu(EC) has the form g = h Q(u) with h in Aut E and u invertible in EC with u* = u−1.
• Γ(EC) is the complexification of Γu(EC), which has Lie algebra ${\mathfrak {k}}\oplus i{\mathfrak {p}}$.
• The set S of invertible elements u such that u* = u−1 can be characterized equivalently either as those u for which L(u) is a normal operator with uu* = 1 or as those u of the form exp ia for some a in E. In particular S is connected.
• The identity component of Γu(EC) acts transitively on S.
• g in GL(EC) is in the unitary structure group if and only if gS = S.
• Given a Jordan frame (ei) and v in EC, there is an operator u in the identity component of Γu(EC) such that uv = Σ αi ei with αi ≥ 0. If v is invertible, then αi > 0.

Given a frame (ei) in a Euclidean Jordan algebra E, the restricted Weyl group can be identified with the group of operators on ⊕ R ei arising from elements in the identity component of Γu(EC) that leave ⊕ R ei invariant.

Spectral norm

Let E be a Euclidean Jordan algebra with the inner product given by the trace form. Let (ei) be a fixed Jordan frame in E. For given a in EC choose u in Γu(EC) such that ua = Σ αi ei with αi ≥ 0. Then the spectral norm ||a|| = max αi is independent of all choices.
It is a norm on EC with $\displaystyle {\|a^{*}\|=\|a\|,\,\,\,\|\{a,a^{*},a\}\|=\|a\|^{3}.}$ In addition ||a||2 is given by the operator norm of Q(a) on the inner product space EC. The fundamental identity for the quadratic representation implies that ||Q(a)b|| ≤ ||a||2||b||. The spectral norm of an element a is defined in terms of C[a] so depends only on a and not the particular Euclidean Jordan algebra in which it is calculated.[15] The compact set S is the set of extreme points of the closed unit ball ||x|| ≤ 1. Each u in S has norm one. Moreover, if u = eia and v = eib, then ||uv|| ≤ 1. Indeed, by the Shirshov–Cohn theorem the unital Jordan subalgebra of E generated by a and b is special. The inequality is easy to establish in non-exceptional simple Euclidean Jordan algebras, since each such Jordan algebra and its complexification can be realized as a subalgebra of some Hn(R) and its complexification Hn(C) ⊂ Mn(C). The spectral norm in Hn(C) is the usual operator norm. In that case, for unitary matrices U and V in Mn(C), clearly ||1/2(UV + VU)|| ≤ 1. The inequality therefore follows in any special Euclidean Jordan algebra and hence in general.[16] On the other hand, by the Krein–Milman theorem, the closed unit ball is the (closed) convex span of S.[17] It follows that ||L(u)|| = 1, in the operator norm corresponding to either the inner product norm or spectral norm. Hence ||L(a)|| ≤ ||a|| for all a, so that the spectral norm satisfies $\displaystyle {\|ab\|\leq \|a\|\cdot \|b\|.}$ It follows that EC is a Jordan C* algebra.[18] Complex simple Jordan algebras The complexification of a simple Euclidean Jordan algebra is a simple complex Jordan algebra which is also separable, i.e. its trace form is non-degenerate. Conversely, using the existence of a real form of the Lie algebra of the structure group, it can be shown that every complex separable simple Jordan algebra is the complexification of a simple Euclidean Jordan algebra.[19] To verify that the complexification of a simple Euclidean Jordan algebra E has no ideals, note that if F is an ideal in EC then so too is F⊥, the orthogonal complement for the trace norm. As in the real case, J = F⊥ ∩ F must equal (0). For the associativity property of the trace form shows that F⊥ is an ideal and that ab = 0 if a and b lie in J. Hence J is an ideal. But if z is in J, L(z) takes EC into J and J into (0). Hence Tr L(z) = 0. Since J is an ideal and the trace form non-degenerate, this forces z = 0. It follows that EC = F ⊕ F⊥. If P is the corresponding projection onto F, it commutes with the operators L(a), and F⊥ = (I − P)EC is also an ideal. Furthermore, if e = P(1), then P = L(e). In fact for a in E $\displaystyle {ea=ae=L(a)P(1)=P(L(a)1)=P(a),}$ so that ea = a for a in F and 0 for a in F⊥. In particular e and 1 − e are orthogonal central idempotents with L(e) = P and L(1 − e) = I − P. So simplicity follows from the fact that the center of EC is the complexification of the center of E. Symmetry groups of bounded domain and tube domain According to the "elementary approach" to bounded symmetric spaces of Koecher,[20] Hermitian symmetric spaces of noncompact type can be realized in the complexification of a Euclidean Jordan algebra E as either the open unit ball for the spectral norm, a bounded domain, or as the open tube domain T = E + iC, where C is the positive open cone in E.
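Before turning to these domains, note that the matrix inequality used in the spectral-norm argument above is easy to confirm numerically. A minimal sketch, assuming numpy, generating random unitaries via the QR decomposition and checking that their Jordan product is a contraction in the operator norm:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # normalize column phases

n = 5
U, V = random_unitary(n), random_unitary(n)

# The Jordan product of two unitary matrices has operator norm at most 1:
print(np.linalg.norm((U @ V + V @ U) / 2, 2) <= 1 + 1e-12)  # True
```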
In the simplest case where E = R, the complexification of E is just C, the bounded domain corresponds to the open unit disk and the tube domain to the upper half plane. Both these spaces have transitive groups of biholomorphisms given by Möbius transformations, corresponding to matrices in SU(1,1) or SL(2,R). They both lie in the Riemann sphere C ∪ {∞}, the standard one-point compactification of C. Moreover, the symmetry groups are all particular cases of Möbius transformations corresponding to matrices in SL(2,C). This complex Lie group and its maximal compact subgroup SU(2) act transitively on the Riemann sphere. The groups are also algebraic. They have distinguished generating subgroups and have an explicit description in terms of generators and relations. Moreover, the Cayley transform gives an explicit Möbius transformation from the open disk onto the upper half plane. All these features generalize to arbitrary Euclidean Jordan algebras.[21] The compactification and complex Lie group are described in the next section and correspond to the dual Hermitian symmetric space of compact type. In this section only the symmetries of and between the bounded domain and tube domain are described. Jordan frames provide one of the main Jordan algebraic techniques to describe the symmetry groups. Each Jordan frame gives rise to a product of copies of R and C. The symmetry groups of the corresponding open domains and the compactification—polydisks and polyspheres—can be deduced from the case of the unit disk, the upper halfplane and Riemann sphere. All these symmetries extend to the larger Jordan algebra and its compactification. The analysis can also be reduced to this case because all points in the complex algebra (or its compactification) lie in an image of the polydisk (or polysphere) under the unitary structure group. Definitions Let E be a Euclidean Jordan algebra with complexification A = EC = E + iE. The unit ball or disk D in A is just the convex bounded open set of elements a such that ||a|| < 1, i.e. the unit ball for the spectral norm. The tube domain T in A is the unbounded convex open set T = E + iC, where C is the open positive cone in E. Möbius transformations The group SL(2,C) acts by Möbius transformations on the Riemann sphere C ∪ {∞}, the one-point compactification of C. If g in SL(2,C) is given by the matrix $\displaystyle {g={\begin{pmatrix}\alpha &\beta \\\gamma &\delta \end{pmatrix}},}$ then $\displaystyle {g(z)=(\alpha z+\beta )(\gamma z+\delta )^{-1}.}$ Similarly the group SL(2,R) acts by Möbius transformations on the circle R ∪ {∞}, the one-point compactification of R. Let k = R or C. Then SL(2,k) is generated by the three subgroups of lower and upper unitriangular matrices, L and U, and the diagonal matrices D. It is also generated by the lower (or upper) unitriangular matrices, the diagonal matrices and the matrix $\displaystyle {J={\begin{pmatrix}0&1\\-1&0\end{pmatrix}}.}$ The matrix J corresponds to the Möbius transformation j(z) = −z−1 and can be written $\displaystyle {J={\begin{pmatrix}1&0\\-1&1\end{pmatrix}}{\begin{pmatrix}1&1\\0&1\end{pmatrix}}{\begin{pmatrix}1&0\\-1&1\end{pmatrix}}.}$ The Möbius transformations fixing ∞ are just the upper triangular matrices B = UD = DU. If g does not fix ∞, it sends ∞ to a finite point a. But then g can be composed with an upper unitriangular matrix to send a to 0 and then with J to send 0 to infinity.
This argument gives one of the simplest examples of the Bruhat decomposition: $\displaystyle {\mathbf {SL} (2,k)=\mathbf {B} \cup \mathbf {B} \cdot J\cdot \mathbf {B} ,}$ the double coset decomposition of SL(2,k). In fact the union is disjoint and can be written more precisely as $\displaystyle {\mathbf {SL} (2,k)=\mathbf {B} \cup \mathbf {B} \cdot J\cdot \mathbf {U} ,}$ where the product occurring in the second term is direct. Now let $\displaystyle {T(\beta )={\begin{pmatrix}1&\beta \\0&1\end{pmatrix}}.}$ Then $\displaystyle {{\begin{pmatrix}\alpha &0\\0&\alpha ^{-1}\end{pmatrix}}=JT(\alpha ^{-1})JT(\alpha )JT(\alpha ^{-1}).}$ It follows that SL(2,k) is generated by the operators T(β) and J subject to the following relations: • β ↦ T(β) is an additive homomorphism • α ↦ D(α) = JT(α−1)JT(α)JT(α−1) is a multiplicative homomorphism • D(−1) = J2 • D(α)T(β)D(α)−1 = T(α2β) • JD(α)J−1 = D(α)−1 The last relation follows from the definition of D(α). The generators and relations above in fact give a presentation of SL(2,k). Indeed, consider the free group Φ generated by J and T(β) with J of order 4 and its square central. This consists of all products T(β1)JT(β2)JT(β3)J ... T(βm)J for m ≥ 0. There is a natural homomorphism of Φ onto SL(2,k). Its kernel contains the normal subgroup Δ generated by the relations above. So there is a natural homomorphism of Φ/Δ onto SL(2,k). To show that it is injective it suffices to show that the Bruhat decomposition also holds in Φ/Δ. It is enough to prove the first version, since the more precise version follows from the commutation relations between J and D(α). The set B ∪ B J B is invariant under inversion and contains the operators T(β) and J, so it is enough to show it is invariant under multiplication. By construction it is invariant under multiplication by B. It is invariant under multiplication by J because of the defining equation for D(α).[22] In particular the center of SL(2,k) consists of the scalar matrices ±I and it is the only non-trivial normal subgroup of SL(2,k), so that PSL(2,k) = SL(2,k)/{±I} is simple.[23] In fact if K is a normal subgroup, then the Bruhat decomposition implies that B is a maximal subgroup, so that either K is contained in B or KB = SL(2,k). In the first case K fixes one point and hence every point of k ∪ {∞}, so lies in the center. In the second case, the commutator subgroup of SL(2,k) is the whole group, since it is the group generated by lower and upper unitriangular matrices and the fourth relation shows that all such matrices are commutators since [T(β),D(α)] = T(β − α2β). Writing J = kb with k in K and b in B, it follows that L = k U k−1. Since U and L generate the whole group, SL(2,k) = KU. But then SL(2,k)/K ≅ U/U ∩ K. The right hand side here is Abelian while the left hand side is its own commutator subgroup. Hence this must be the trivial group and K = SL(2,k). Given an element a in the complex Jordan algebra A = EC, the unital Jordan subalgebra C[a] is associative and commutative. Multiplication by a defines an operator on C[a] which has a spectrum, namely its set of complex eigenvalues. If p(t) is a complex polynomial, then p(a) is defined in C[a]. It is invertible in A if and only if it is invertible in C[a], which happens precisely when p does not vanish on the spectrum of a. This permits rational functions of a to be defined whenever the function is defined on the spectrum of a. If F and G are rational functions with G and F∘G defined on a, then F is defined on G(a) and F(G(a)) = (F∘G)(a).
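The generators and relations for SL(2,k) above are statements about 2 × 2 matrices and can be verified symbolically. A minimal sketch, not part of the original text, assuming sympy:

```python
import sympy as sp

alpha, beta = sp.symbols('alpha beta', nonzero=True)

T = lambda b: sp.Matrix([[1, b], [0, 1]])
J = sp.Matrix([[0, 1], [-1, 0]])

# D(alpha) = J T(1/alpha) J T(alpha) J T(1/alpha) is the diagonal matrix diag(alpha, 1/alpha):
D = sp.simplify(J * T(1/alpha) * J * T(alpha) * J * T(1/alpha))
print(D)  # Matrix([[alpha, 0], [0, 1/alpha]])

# D(alpha) T(beta) D(alpha)^{-1} = T(alpha^2 beta):
print(sp.simplify(D * T(beta) * D.inv() - T(alpha**2 * beta)))  # zero matrix

# J D(alpha) J^{-1} = D(alpha)^{-1}, and D(-1) = J^2 = -I:
print(sp.simplify(J * D * J.inv() - D.inv()))  # zero matrix
print(J**2)  # Matrix([[-1, 0], [0, -1]])
```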
The rational functional calculus just described applies in particular to complex Möbius transformations, which can be defined by g(a) = (αa + β1)(γa + δ1)−1. They leave C[a] invariant and, when defined, the group composition law holds. (In the next section complex Möbius transformations will be defined on the compactification of A.)[24] Given a primitive idempotent e in E with Peirce decomposition $\displaystyle {E=E_{1}(e)\oplus E_{1/2}(e)\oplus E_{0}(e),\,\,\,\,A=A_{1}(e)\oplus A_{1/2}(e)\oplus A_{0}(e),}$ the action of SL(2,C) by Möbius transformations on E1(e) = C e can be extended to an action on A so that the action leaves invariant the components Ai(e) and in particular acts trivially on E0(e).[25] If P0 is the projection onto A0(e), the action is given by the formula $\displaystyle {g(ze\oplus x_{1/2}\oplus x_{0})={\alpha z+\beta \over \gamma z+\delta }\cdot e\oplus (\gamma z+\delta )^{-1}x_{1/2}\oplus x_{0}-(\gamma z+\delta )^{-1}P_{0}(x_{1/2}^{2}).}$ For a Jordan frame of primitive idempotents e1, ..., em, the actions of SL(2,C) associated with different ei commute, thus giving an action of SL(2,C)m. The diagonal copy of SL(2,C) again gives the action by Möbius transformations on A. Cayley transform See also: Cayley transform The Möbius transformation defined by $\displaystyle {C(z)=i{1+z \over 1-z}=-i+{2i \over 1-z}}$ is called the Cayley transform. Its inverse is given by $\displaystyle {P(w)={w-i \over w+i}=1-{2i \over w+i}.}$ The inverse Cayley transform carries the real line onto the circle with the point 1 omitted. It carries the upper halfplane onto the unit disk and the lower halfplane onto the complement of the closed unit disk. In operator theory the mapping T ↦ P(T) takes self-adjoint operators T onto unitary operators U not containing 1 in their spectrum. For matrices this follows because unitary and self-adjoint matrices can be diagonalized and their eigenvalues lie on the unit circle or real line. In this finite-dimensional setting the Cayley transform and its inverse establish a bijection between the matrices of operator norm less than one and operators with imaginary part a positive operator. This is the special case for A = Mn(C) of the Jordan algebraic result, explained below, which asserts that the Cayley transform and its inverse establish a bijection between the bounded domain D and the tube domain T. In the case of matrices, the bijection follows from resolvent formulas.[26] In fact if the imaginary part of T is positive, then T + iI is invertible since $\displaystyle {\|(T+iI)x\|^{2}=\|(T-iI)x\|^{2}+4(\mathrm {Im} (T)x,x).}$ In particular, setting y = (T + iI)x, $\displaystyle {\|y\|^{2}=\|P(T)y\|^{2}+4(\mathrm {Im} (T)x,x).}$ Equivalently $\displaystyle {I-P(T)^{*}P(T)=4(T^{*}-iI)^{-1}[\mathrm {Im} \,T](T+iI)^{-1}}$ is a positive operator, so that ||P(T)|| < 1. Conversely if ||U|| < 1 then I − U is invertible and $\displaystyle {\mathrm {Im} \,C(U)=(2i)^{-1}[C(U)-C(U)^{*}]=(1-U^{*})^{-1}[I-U^{*}U](I-U)^{-1}.}$ Since the Cayley transform and its inverse commute with the transpose, they also establish a bijection for symmetric matrices. This corresponds to the Jordan algebra of symmetric complex matrices, the complexification of Hn(R). In A = EC the above resolvent identities take the following form:[27] $\displaystyle {Q(1-u^{*})Q(C(u)+C(u^{*}))Q(1-u)=-4B(u^{*},u)}$ and equivalently $\displaystyle {4Q(\mathrm {Im} \,a)=Q(a^{*}-i)B(P(a)^{*},P(a))Q(a+i),}$ where the Bergman operator B(x,y) is defined by B(x,y) = I − 2R(x,y) + Q(x)Q(y) with R(x,y) = [L(x),L(y)] + L(xy).
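Both directions of the matrix statement about the Cayley transform can be tested numerically. A minimal sketch, assuming numpy, with P(T) = (T − iI)(T + iI)−1 as above:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
I = np.eye(n)

z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = (z + z.conj().T) / 2                       # random self-adjoint matrix

P_T = (T - 1j * I) @ np.linalg.inv(T + 1j * I)
print(np.allclose(P_T.conj().T @ P_T, I))      # True: P maps self-adjoint to unitary

W = T + 1j * (z @ z.conj().T + I)              # Im W = z z* + I is positive definite
P_W = (W - 1j * I) @ np.linalg.inv(W + 1j * I)
print(np.linalg.norm(P_W, 2) < 1)              # True: positive imaginary part -> contraction
```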
The inverses occurring in these resolvent identities are well defined. In fact in one direction 1 − u is invertible for ||u|| < 1: this follows either using the fact that the norm satisfies ||ab|| ≤ ||a|| ||b||; or using the resolvent identity and the invertibility of B(u*,u) (see below). In the other direction if the imaginary part of a is in C then the imaginary part of L(a) is positive definite so that a is invertible. This argument can be applied to a + i, so it is also invertible. To establish the correspondence, it is enough to check it when E is simple. In that case it follows from the connectivity of T and D and because: • For x in E, Q(x) is a positive operator if and only if x or −x lies in C • B(a*,a) is a positive operator if and only if a or its inverse (if invertible) lies in D The first criterion follows from the fact that the eigenvalues of Q(x) are exactly λiλj if the eigenvalues of x are λi. So the λi are either all positive or all negative. The second criterion follows from the fact that if a = u Σ αi ei = ux with αi ≥ 0 and u in Γu(EC), then B(a*,a) = u*Q(1 − x2)u has eigenvalues (1 − αi2)(1 − αj2). So the αi are either all less than one or all greater than one. The resolvent identity is a consequence of the following identity for a and b invertible $\displaystyle {Q(a)Q(a^{-1}+b^{-1})Q(b)=Q(a+b).}$ In fact in this case the relations for a quadratic Jordan algebra imply $\displaystyle {R(a,b)=2Q(a)Q(a^{-1},b)=2Q(a,b^{-1})Q(b)}$ so that $\displaystyle {B(a,b)=Q(a)Q(a^{-1}-b)=Q(b^{-1}-a)Q(b).}$ The equality of the last two terms implies the identity, replacing b by −b−1. Now set a = 1 − x and b = 1 − y. The resolvent identity is a special case of the following more general identity: $\displaystyle {Q(1-x)Q(C(x)+C(y))Q(1-y)=-4B(x,y).}$ In fact $\displaystyle {C(x)+C(y)=-2i(1-a^{-1}-b^{-1}),}$ so the identity is equivalent to $\displaystyle {Q(a)Q(1-a^{-1}-b^{-1})Q(b)=B(1-a,1-b).}$ Using the identity above together with Q(c)L(c−1) = L(c), the left hand side equals Q(a)Q(b) + Q(a + b) − 2L(a)Q(b) − 2Q(a)L(b). The right hand side equals 2L(a)L(b) + 2L(b)L(a) − 2L(ab) − 2L(a)Q(b) − 2Q(a)L(b) + Q(a)Q(b) + Q(a) + Q(b). These are equal because of the formula 1/2[Q(a + b) − Q(a) − Q(b)] = L(a)L(b) + L(b)L(a) − L(ab). Automorphism group of bounded domain The Möbius transformations in SU(1,1) carry the bounded domain D onto itself. If a lies in the bounded domain D, then a − 1 is invertible. Since D is invariant under multiplication by scalars of modulus ≤ 1, it follows that a − λ is invertible for |λ| ≥ 1. Hence for ||a|| ≤ 1, a − λ is invertible for |λ| > 1. It follows that the Möbius transformation ga is defined for ||a|| ≤ 1 and g in SU(1,1). Where defined it is injective. It is holomorphic on D. By the maximum modulus principle, to show that g maps D onto D it suffices to show it maps S onto itself. For in that case g and its inverse preserve D so must be surjective. If u = eix with x = Σ ξiei in E, then gu lies in ⊕ C ei. This is a commutative associative algebra and the spectral norm is the supremum norm. Since u = Σ ςiei with |ςi| = 1, it follows that gu = Σ g(ςi)ei where |g(ςi)| = 1. So gu lies in S. The unitary structure group of EC carries D onto itself. This is a direct consequence of the definition of the spectral norm. The group of transformations SU(1,1)m corresponding to a Jordan frame carries D onto itself. This is already known for the Möbius transformations, i.e. the diagonal in SU(1,1)m.
It follows for diagonal matrices in a fixed component in SU(1,1)m because they correspond to transformations in the unitary structure group. Conjugating by a Möbius transformation is equivalent to conjugation by a matrix in that component. Since the only non-trivial normal subgroup of SU(1,1) is its center, every matrix in a fixed component carries D onto itself. D is a bounded symmetric domain. Given an element in D, a transformation in the identity component of the unitary structure group carries it to an element in ⊕ C ei with supremum norm less than 1. A transformation in SU(1,1)m then carries it to zero. Thus there is a transitive group of biholomorphic transformations of D. The symmetry z ↦ −z is a biholomorphic Möbius transformation fixing only 0. The biholomorphic mappings of D onto itself that fix the origin are given by the unitary structure group. If f is a biholomorphic self-mapping of D with f(0) = 0 and derivative I at 0, then f must be the identity.[28] If not, f has Taylor series expansion f(z) = z + fk(z) + fk+1(z) + ⋅⋅⋅ with fi homogeneous of degree i and fk ≠ 0. But then the iterates satisfy fn(z) = z + n fk(z) + ⋅⋅⋅. Let ψ be a functional in A* of norm one. Then for fixed z in D, the holomorphic functions of a complex variable w given by hn(w) = ψ(fn(wz)) must have modulus less than 1 for |w| < 1. By Cauchy's inequality, the coefficients of wk must be uniformly bounded independent of n, which is not possible if fk ≠ 0. If g is a biholomorphic mapping of D onto itself just fixing 0 then, setting h(z) = eiα z, the mapping f = g ∘ h ∘ g−1 ∘ h−1 fixes 0 and has derivative I there. It is therefore the identity map. So g(eiα z) = eiαg(z) for any α. This implies g is a linear mapping. Since it maps D onto itself it maps the closure onto itself. In particular it must map the Shilov boundary S onto itself. This forces g to be in the unitary structure group. The group GD of biholomorphic automorphisms of D is generated by the unitary structure group KD and the Möbius transformations associated to a Jordan frame. If AD denotes the subgroup of such Möbius transformations fixing ±1, then the Cartan decomposition formula holds: GD = KD AD KD. The orbit of 0 under AD is the set of all points Σ αi ei with −1 < αi < 1. The orbit of these points under the unitary structure group is the whole of D. The Cartan decomposition follows because KD is the stabilizer of 0 in GD. The center of GD is trivial. In fact the only point fixed by (the identity component of) KD in D is 0. Uniqueness implies that the center of GD must fix 0. It follows that the center of GD lies in KD. The center of KD is isomorphic to the circle group: a rotation through θ corresponds to multiplication by eiθ on D and so lies in SU(1,1)/{±1}. Since this group has trivial center, the center of GD is trivial.[29] KD is a maximal compact subgroup of GD. In fact any larger compact subgroup would intersect AD non-trivially, and AD has no non-trivial compact subgroups. Note that GD is a Lie group (see below), so that the above three statements hold with GD and KD replaced by their identity components, i.e. the subgroups generated by their one-parameter subgroups. Uniqueness of the maximal compact subgroup up to conjugacy follows from a general argument or can be deduced for classical domains directly using Sylvester's law of inertia following Sugiura (1982).[30] For the example of Hermitian matrices over C, this reduces to proving that U(n) × U(n) is up to conjugacy the unique maximal compact subgroup in U(n,n).
In fact if W = Cn ⊕ (0), then U(n) × U(n) is the subgroup of U(n,n) preserving W. The restriction of the Hermitian form is given by the inner product on W minus the inner product on (0) ⊕ Cn. On the other hand, if K is a compact subgroup of U(n,n), there is a K-invariant inner product on C2n obtained by averaging any inner product with respect to Haar measure on K. The Hermitian form corresponds to an orthogonal decomposition into two subspaces of dimension n both invariant under K with the form positive definite on one and negative definite on the other. By Sylvester's law of inertia, given two subspaces of dimension n on which the Hermitian form is positive definite, one is carried onto the other by an element of U(n,n). Hence there is an element g of U(n,n) such that the positive definite subspace is given by gW. So gKg−1 leaves W invariant and gKg−1 ⊆ U(n) × U(n). A similar argument, with quaternions replacing the complex numbers, shows uniqueness for the symplectic group, which corresponds to Hermitian matrices over R. This can also be seen more directly by using complex structures. A complex structure is an invertible operator J with J2 = −I preserving the symplectic form B and such that −B(Jx,y) is a real inner product. The symplectic group acts transitively on complex structures by conjugation. Moreover, the subgroup commuting with J is naturally identified with the unitary group for the corresponding complex inner product space. Uniqueness follows by showing that any compact subgroup K commutes with some complex structure J. In fact, averaging over Haar measure, there is a K-invariant inner product on the underlying space. The symplectic form yields an invertible skew-adjoint operator T commuting with K. The operator S = −T2 is positive, so has a unique positive square root, which commutes with K. So J = S−1/2T, the phase of T, has square −I and commutes with K. Automorphism group of tube domain There is a Cartan decomposition for GT corresponding to the action on the tube T = E + iC: $\displaystyle {G_{T}=K_{T}A_{T}K_{T}.}$ • KT is the stabilizer of i in iC ⊂ T, so a maximal compact subgroup of GT. Under the Cayley transform, KT corresponds to KD, the stabilizer of 0 in the bounded symmetric domain, where it acts linearly. Since GT is semisimple, every maximal compact subgroup is conjugate to KT. • The center of GT or GD is trivial. In fact the only point fixed by KD in D is 0. Uniqueness implies that the center of GD must fix 0. It follows that the center of GD lies in KD and hence that the center of GT lies in KT. The center of KD is isomorphic to the circle group: a rotation through θ corresponds to multiplication by eiθ on D. Under the Cayley transform it corresponds to the Möbius transformation z ↦ (cz + s)(−sz + c)−1 where c = cos θ/2 and s = sin θ/2. (In particular, when θ = π, this gives the symmetry j(z) = −z−1.) In fact all Möbius transformations z ↦ (αz + β)(−γz + δ)−1 with αδ − βγ = 1 lie in GT. Since PSL(2,R) has trivial center, the center of GT is trivial.[31] • AT is given by the linear operators Q(a) with a = Σ αi ei and αi > 0. In fact the Cartan decomposition for GT follows from the decomposition for GD. Given z in D, there is an element u in KD, the identity component of Γu(EC), such that z = u Σ αjej with αj ≥ 0. Since ||z|| < 1, it follows that αj < 1. Taking the Cayley transform of z, it follows that every w in T can be written w = k ∘ C(Σ αjej), with C the Cayley transform and k in KT.
Since C(Σ αjej) = i Σ βjej with βj = (1 + αj)(1 − αj)−1, the point w is of the form w = ka(i) with a in AT. Hence GT = KTATKT. Iwasawa decomposition There is an Iwasawa decomposition for GT corresponding to the action on the tube T = E + iC:[32] $\displaystyle {G_{T}=K_{T}A_{T}N_{T}.}$ • KT is the stabilizer of i in iC ⊂ T. • AT is given by the linear operators Q(a) where a = Σ αi ei with αi > 0. • NT is a lower unitriangular group on EC. It is the semidirect product of the unipotent triangular group N appearing in the Iwasawa decomposition of G (the symmetry group of C) and N0 = E, the group of translations x ↦ x + b. The group S = AN acts linearly on E, and conjugation on N0 reproduces this action. Since the group S acts simply transitively on C, it follows that ATNT = S⋅N0 acts simply transitively on T = E + iC. Let HT be the group of biholomorphisms of the tube T. The Cayley transform shows that it is isomorphic to the group HD of biholomorphisms of the bounded domain D. Since ATNT acts simply transitively on the tube T while KT fixes i, they have trivial intersection. Given g in HT, take s in ATNT such that g−1(i) = s−1(i). Then gs−1 fixes i and therefore lies in KT. Hence HT = KT⋅AT⋅NT, so the product is a group. Lie group structure By a result of Henri Cartan, HD is a Lie group. Cartan's original proof is presented in Narasimhan (1971). It can also be deduced from the fact that D is complete for the Bergman metric, for which the isometries form a Lie group; by Montel's theorem, the group of biholomorphisms is a closed subgroup.[33] That HT is a Lie group can be seen directly in this case. In fact there is a finite-dimensional 3-graded Lie algebra ${\mathfrak {g}}_{T}$ of vector fields with an involution σ. The Killing form is negative definite on the +1 eigenspace of σ and positive definite on the −1 eigenspace. As a group HT normalizes ${\mathfrak {g}}_{T}$ since the two subgroups KT and ATNT do. The +1 eigenspace corresponds to the Lie algebra of KT. Similarly the Lie algebras of the linear group AN and the affine group N0 lie in ${\mathfrak {g}}_{T}$. Since the group GT has trivial center, the map into GL(${\mathfrak {g}}_{T}$) is injective. Since KT is compact, its image in GL(${\mathfrak {g}}_{T}$) is compact. Since the Lie algebra ${\mathfrak {g}}_{T}$ is compatible with that of ATNT, the image of ATNT is closed. Hence the image of the product is closed, since the image of KT is compact. Since it is a closed subgroup, it follows that HT is a Lie group. Generalizations Euclidean Jordan algebras can be used to construct Hermitian symmetric spaces of tube type. The remaining Hermitian symmetric spaces are Siegel domains of the second kind. They can be constructed using Euclidean Jordan triple systems, a generalization of Euclidean Jordan algebras. In fact for a Euclidean Jordan algebra E let $\displaystyle {L(a,b)=2([L(a),L(b)]+L(ab)).}$ Then L(a,b) gives a bilinear map into End E such that $\displaystyle {L(a,b)^{*}=L(b,a),\,\,\,L(a,b)c=L(c,b)a}$ and $\displaystyle {[L(a,b),L(c,d)]=L(L(a,b)c,d)-L(c,L(b,a)d).}$ Any such bilinear system is called a Euclidean Jordan triple system. By definition the operators L(a,b) form a Lie subalgebra of End E.
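In the special Jordan algebra of symmetric matrices the triple product defined by L(a,b) works out, with associative matrix multiplication on the right-hand side, to L(a,b)c = abc + cba, and the commutator identity above can then be checked numerically. A minimal numpy sketch, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

def sym(n):
    x = rng.standard_normal((n, n))
    return (x + x.T) / 2

def L(a, b):
    """Triple-product operator: in this special Jordan algebra, L(a,b)c = abc + cba."""
    return lambda c: a @ b @ c + c @ b @ a

a, b, c, d, e = (sym(n) for _ in range(5))

# [L(a,b), L(c,d)] = L(L(a,b)c, d) - L(c, L(b,a)d), tested on the element e:
lhs = L(a, b)(L(c, d)(e)) - L(c, d)(L(a, b)(e))
rhs = L(L(a, b)(c), d)(e) - L(c, L(b, a)(d))(e)
print(np.allclose(lhs, rhs))  # True
```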
The Kantor–Koecher–Tits construction gives a one-one correspondence between Jordan triple systems and 3-graded Lie algebras $\displaystyle {{\mathfrak {g}}={\mathfrak {g}}_{-1}\oplus {\mathfrak {g}}_{0}\oplus {\mathfrak {g}}_{1},}$ satisfying $\displaystyle {[{\mathfrak {g}}_{p},{\mathfrak {g}}_{q}]\subseteq {\mathfrak {g}}_{p+q}}$ and equipped with an involutive automorphism σ reversing the grading. In this case $\displaystyle {L(a,b)=\mathrm {ad} \,[a,\sigma (b)]}$ defines a Jordan triple system on ${\mathfrak {g}}_{-1}$. In the case of Euclidean Jordan algebras or triple systems the Kantor–Koecher–Tits construction can be identified with the Lie algebra of the Lie group of all holomorphic automorphisms of the corresponding bounded symmetric domain. The Lie algebra is constructed by taking ${\mathfrak {g}}_{0}$ to be the Lie subalgebra ${\mathfrak {h}}$ of End E generated by the L(a,b) and ${\mathfrak {g}}_{\pm 1}$ to be copies of E. The Lie bracket is given by $\displaystyle {[(a_{1},T_{1},b_{1}),(a_{2},T_{2},b_{2})]=(T_{1}a_{2}-T_{2}a_{1},[T_{1},T_{2}]+L(a_{1},b_{2})-L(a_{2},b_{1}),T_{2}^{*}b_{1}-T_{1}^{*}b_{2})}$ and the involution by $\displaystyle {\sigma (a,T,b)=(b,-T^{*},a).}$ The Killing form is given by $\displaystyle {B((a_{1},T_{1},b_{1}),(a_{2},T_{2},b_{2}))=(a_{1},b_{2})+(b_{1},a_{2})+\beta (T_{1},T_{2}),}$ where β(T1,T2) is the symmetric bilinear form defined by $\displaystyle {\beta (L(a,b),L(c,d))=(L(a,b)c,d)=(L(c,d)a,b).}$ These formulas, originally derived for Jordan algebras, work equally well for Jordan triple systems.[34] The account in Koecher (1969) develops the theory of bounded symmetric domains starting from the standpoint of 3-graded Lie algebras. For a given finite-dimensional vector space E, Koecher considers finite-dimensional Lie algebras ${\mathfrak {g}}$ of vector fields on E with polynomial coefficients of degree ≤ 2. ${\mathfrak {g}}_{-1}$ consists of the constant vector fields ∂i and ${\mathfrak {g}}_{0}$ must contain the Euler operator H = Σ xi⋅∂i as a central element. Requiring the existence of an involution σ leads directly to a Jordan triple structure on E as above. As for all Jordan triple structures, fixing c in E, the operators Lc(a) = L(a,c) give E a Jordan algebra structure, determined by c. The operators L(a,b) themselves come from a Jordan algebra structure as above if and only if there are additional operators E± in ${\mathfrak {g}}_{\pm 1}$ so that H, E± give a copy of ${\mathfrak {sl}}_{2}$. The corresponding Weyl group element implements the involution σ. This case corresponds to that of Euclidean Jordan algebras. The remaining cases are constructed uniformly by Koecher using involutions of simple Euclidean Jordan algebras.[35] Let E be a simple Euclidean Jordan algebra and τ a Jordan algebra automorphism of E of period 2. Thus E = E+1 ⊕ E−1 has an eigenspace decomposition for τ with E+1 a Jordan subalgebra and E−1 a module. Moreover, a product of two elements in E−1 lies in E+1. For a, b, c in E−1, set $\displaystyle {L_{0}(a,b)c=L(a,b)c}$ and (a,b) = Tr L(ab). Then F = E−1 is a simple Euclidean Jordan triple system, obtained by restricting the triple system on E to F. Koecher exhibits explicit involutions of simple Euclidean Jordan algebras directly (see below). These Jordan triple systems correspond to irreducible Hermitian symmetric spaces given by Siegel domains of the second kind. In Cartan's listing, their compact duals are SU(p + q)/S(U(p) × U(q)) with p ≠ q (AIII), SO(2n)/U(n) with n odd (DIII) and E6/SO(10) × U(1) (EIII).
Examples • F is the space of p by q matrices over R with p ≠ q. In this case L(a,b)c = abtc + cbta with inner product (a,b) = Tr abt. This is Koecher's construction for the involution on E = Hp + q(R) given by conjugating by the diagonal matrix with p diagonal entries equal to 1 and q equal to −1. • F is the space of real skew-symmetric n by n matrices. In this case L(a,b)c = abc + cba with inner product (a,b) = −Tr ab. After removing a factor of √(-1), this is Koecher's construction applied to complex conjugation on E = Hn(C). • F is the direct sum of two copies of the Cayley numbers, regarded as 1 by 2 matrices. This triple system is obtained by Koecher's construction for the canonical involution defined by any minimal idempotent in E = H3(O). The classification of Euclidean Jordan triple systems has been achieved by generalizing the methods of Jordan, von Neumann and Wigner, but the proofs are more involved.[36] Earlier differential geometric methods of Kobayashi & Nagano (1964), invoking a 3-graded Lie algebra, and of Loos (1971), Loos (1985) led to a more rapid classification. Notes 1. This article uses as its main sources Jordan, von Neumann & Wigner (1934), Koecher (1999) and Faraut & Koranyi (1994), adopting the terminology and some simplifications from the latter. 2. Faraut & Koranyi 1994, pp. 2–4 3. For a proof of equivalence see: • Koecher 1999, p. 118, Theorem 12 • Faraut & Koranyi 1994, pp. 42, 153–154 4. See: • Freudenthal 1985 • Postnikov 1986 • Faraut & Koranyi 1994 • Springer & Veldkamp 2000 5. See: • Freudenthal 1985 • Jacobson 1968 • Zhevlakov et al. 1982 • Hanche-Olsen & Størmer 1984 • Springer & Veldkamp 2000, pp. 117–141 6. See: • Hanche-Olsen & Størmer 1984, pp. 58–59 • Faraut & Koranyi 1994, pp. 74–75 • Jacobson 1968 • Clerc 1992, pp. 49–52 7. Clerc 1992, pp. 49–52 8. Faraut & Koranyi 1994, pp. 46–49 9. Faraut & Koranyi 1994, pp. 32–35 10. See: • Koecher 1999, pp. 72–76 • Faraut & Koranyi 1994, pp. 32–34 11. See: • Jacobson 1968, pp. 40–47, 52 • Hanche-Olsen & Størmer 1984, pp. 36–44 12. See: • Koecher 1999, p. 111 • Hanche-Olsen & Størmer 1984, p. 83 • Faraut & Koranyi 1994, p. 48 13. Faraut & Koranyi 1994, pp. 49–50 14. Faraut & Koranyi 1994, pp. 145–146 15. Loos 1977, pp. 3.15–3.16 16. Wright 1977, pp. 296–297 17. See Faraut & Koranyi (1994, pp. 73, 202–203) and Rudin (1973, pp. 270–273). By finite-dimensionality, every point in the convex span of S is the convex combination of n + 1 points, where n = 2 dim E. So the convex span of S is already compact and equals the closed unit ball. 18. Wright 1977, pp. 296–297 19. Faraut & Koranyi 1994, pp. 154–158 20. See: • Koecher 1999 • Koecher 1969 21. See: • Loos 1977 • Faraut & Koranyi 1994 22. Lang 1985, pp. 209–210 23. Bourbaki 1981, pp. 30–32 24. See: • Koecher 1999 • Faraut & Koranyi 1994, pp. 150–153 25. Loos 1977, pp. 9.4–9.5 26. Folland 1989, pp. 203–204 27. See: • Koecher 1999 • Faraut & Koranyi 1994, pp. 200–201 28. Faraut & Koranyi 1994, pp. 204–205 29. Faraut & Koranyi 1994, p. 208 30. Note that the elementary argument in Igusa (1972, p. 23) cited in Folland (1989) is incomplete. 31. Faraut & Koranyi 1994, p. 208 32. Faraut & Koranyi 1994, p. 334 33. See: • Cartan 1935 • Helgason 1978 • Kobayashi & Nomizu 1963 • Faraut & Koranyi 1994 34. See: • Koecher 1967 • Koecher 1968 • Koecher 1969 • Faraut & Koranyi 1994, pp. 218–219 35. Koecher 1969, p. 85 36. See: • Loos 1977 • Neher 1979 • Neher 1980 • Neher 1981 • Neher 1987 References • Albert, A. A.
(1934), "On a certain algebra of quantum mechanics", Annals of Mathematics, 35 (1): 65–73, doi:10.2307/1968118, JSTOR 1968118 • Bourbaki, N. (1981), Groupes et Algèbres de Lie (Chapitres 4,5 et 6), Éléments de Mathématique, Masson, ISBN 978-2225760761 • Cartan, Henri (1935), Sur les groupes de transformations analytiques, Actualités scientifiques et industrielles, Hermann • Clerc, J. (1992), "Représentation d'une algèbre de Jordan, polynômes invariants et harmoniques de Stiefel", J. Reine Angew. Math., 1992 (423): 47–71, doi:10.1515/crll.1992.423.47 • Faraut, J.; Koranyi, A. (1994), Analysis on symmetric cones, Oxford Mathematical Monographs, Oxford University Press, ISBN 978-0198534778 • Folland, G. B. (1989), Harmonic analysis in phase space, Annals of Mathematics Studies, vol. 122, Princeton University Press, ISBN 9780691085289 • Freudenthal, Hans (1951), Oktaven, Ausnahmegruppen und Oktavengeometrie, Mathematisch Instituut der Rijksuniversiteit te Utrecht • Freudenthal, Hans (1985), "Oktaven, Ausnahmegruppen und Oktavengeometrie", Geom. Dedicata, 19: 7–63, doi:10.1007/bf00233101 (reprint of 1951 article) • Hanche-Olsen, Harald; Størmer, Erling (1984), Jordan operator algebras, Monographs and Studies in Mathematics, vol. 21, Pitman, ISBN 978-0273086192 • Helgason, Sigurdur (1978), Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York, ISBN 978-0-12-338460-7 • Igusa, J. (1972), Theta functions, Die Grundlehren der mathematischen Wissenschaften, vol. 194, Springer-Verlag • Jacobson, N. (1968), Structure and representations of Jordan algebras, American Mathematical Society Colloquium Publications, vol. 39, American Mathematical Society • Jordan, P.; von Neumann, J.; Wigner, E. (1934), "On an algebraic generalization of the quantum mechanical formalism", Annals of Mathematics, 35 (1): 29–64, doi:10.2307/1968117, JSTOR 1968117 • Kobayashi, Shoshichi; Nomizu, Katsumi (1963), Foundations of Differential Geometry, Vol. I, Wiley Interscience, ISBN 978-0-470-49648-0 • Kobayashi, Shoshichi; Nagano, Tadashi (1964), "On filtered Lie algebras and geometric structures. I.", J. Math. Mech., 13: 875–907 • Koecher, M. (1967), "Imbedding of Jordan algebras into Lie algebras. I", Amer. J. Math., 89 (3): 787–816, doi:10.2307/2373242, JSTOR 2373242 • Koecher, M. (1968), "Imbedding of Jordan algebras into Lie algebras. II", Amer. J. Math., 90 (2): 476–510, doi:10.2307/2373540, JSTOR 2373540 • Koecher, M. (1969), An elementary approach to bounded symmetric domains, Lecture Notes, Rice University • Koecher, M. (1999), The Minnesota Notes on Jordan Algebras and Their Applications, Lecture Notes in Mathematics, vol. 1710, Springer, ISBN 978-3540663607 • Koecher, M. (1971), "Jordan algebras and differential geometry" (PDF), Actes du Congrès International des Mathématiciens (Nice, 1970), Tome I, Gauthier-Villars, pp. 279–283 • Lang, S. (1985), SL2(R), Graduate Texts in Mathematics, vol. 105, Springer-Verlag, ISBN 978-0-387-96198-9 • Loos, Ottmar (1975), Jordan pairs, Lecture Notes in Mathematics, vol. 460, Springer-Verlag • Loos, Ottmar (1971), "A structure theory of Jordan pairs", Bull. Amer. Math. Soc., 80: 67–71, doi:10.1090/s0002-9904-1974-13355-0 • Loos, Ottmar (1977), Bounded symmetric domains and Jordan pairs (PDF), Mathematical lectures, University of California, Irvine, archived from the original (PDF) on 2016-03-03 • Loos, Ottmar (1985), "Charakterisierung symmetrischer R-Räume durch ihre Einheitsgitter", Math. Z., 189 (2): 211–226, doi:10.1007/bf01175045 • Macdonald, I. G. 
(1960), "Jordan algebras with three generators", Proc. London Math. Soc., 10: 395–408, doi:10.1112/plms/s3-10.1.395 • Narasimhan, Raghavan (1971), Several complex variables, Chicago Lectures in Mathematics, University of Chicago Press, ISBN 978-0-226-56817-1 • Neher, Erhard (1979), "Cartan-Involutionen von halbeinfachen reellen Jordan-Tripelsystemen", Math. Z., 169 (3): 271–292, doi:10.1007/bf01214841 • Neher, Erhard (1980), "Klassifikation der einfachen reellen speziellen Jordan-Tripelsysteme", Manuscripta Math., 31 (1–3): 197–215, doi:10.1007/bf01303274 • Neher, Erhard (1981), "Klassifikation der einfachen reellen Ausnahme-Jordan-Tripelsysteme", J. Reine Angew. Math., 1981 (322): 145–169, doi:10.1515/crll.1981.322.145 • Neher, Erhard (1987), Jordan triple systems by the grid approach, Lecture Notes in Mathematics, vol. 1280, Springer-Verlag, ISBN 978-3540183624 • Postnikov, M. (1986), Lie groups and Lie algebras. Lectures in geometry. Semester V, Mir • Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (First ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 9780070542259. • Springer, T. A.; Veldkamp, F. D. (2000), Octonions, Jordan Algebras and Exceptional Groups, Springer-Verlag, ISBN 978-3540663379, originally lecture notes from a course given in the University of Göttingen in 1962 • Sugiura, Mitsuo (1982), "The conjugacy of maximal compact subgroups for orthogonal, unitary and unitary symplectic groups", Sci. Papers College Gen. Ed. Univ. Tokyo, 32: 101–108 • Wright, J. D. M. (1977), "Jordan C∗-algebras", Michigan Math. J., 24 (3): 291–302, doi:10.1307/mmj/1029001946 • Zhevlakov, K. A.; Slinko, A. M.; Shestakov, I. P.; Shirshov, A. I. (1982), Rings that are nearly associative, Pure and Applied Mathematics, vol. 104, Academic Press, ISBN 978-0127798509
Canonical map In mathematics, a canonical map, also called a natural map, is a map or morphism between objects that arises naturally from the definition or the construction of the objects. Often, it is a map which preserves the widest amount of structure. A choice of a canonical map sometimes depends on a convention (e.g., a sign convention). See also Natural transformation, a related concept in category theory. For the canonical map of an algebraic variety into projective space, see Canonical bundle § Canonical maps. A closely related notion is a structure map or structure morphism; the map or morphism that comes with the given structure on the object. These are also sometimes called canonical maps. A canonical isomorphism is a canonical map that is also an isomorphism (i.e., invertible). In some contexts, it might be necessary to address an issue of choices of canonical maps or canonical isomorphisms; for a typical example, see prestack. For a discussion of the problem of defining a canonical map see Kevin Buzzard's talk at the 2022 Grothendieck conference.[1] Examples • If N is a normal subgroup of a group G, then there is a canonical surjective group homomorphism from G to the quotient group G/N, that sends an element g to the coset determined by g. • If I is an ideal of a ring R, then there is a canonical surjective ring homomorphism from R onto the quotient ring R/I, that sends an element r to its coset I+r. • If V is a vector space, then there is a canonical map from V to the second dual space of V, that sends a vector v to the linear functional fv defined by fv(λ) = λ(v). • If f: R → S is a homomorphism between commutative rings, then S can be viewed as an algebra over R. The ring homomorphism f is then called the structure map (for the algebra structure). The corresponding map on the prime spectra f*: Spec(S) → Spec(R) is also called the structure map. • If E is a vector bundle over a topological space X, then the projection map from E to X is the structure map. • In topology, a canonical map is a function f mapping a set X → X/R (X modulo R), where R is an equivalence relation on X, that takes each x in X to the equivalence class [x] modulo R.[2] References 1. Buzzard, Kevin. "Grothendieck Conference Talk". 2. Vialar, Thierry (2016-12-07). Handbook of Mathematics. BoD - Books on Demand. p. 274. ISBN 9782955199008.
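For the quotient constructions in the list above, the canonical map is simply "send an element to its equivalence class". A toy sketch for the canonical ring homomorphism Z → Z/nZ (illustrative names, standard library only):

```python
def canonical_map(n: int):
    """The canonical surjection Z -> Z/nZ, sending r to its coset r + nZ (represented by r % n)."""
    return lambda r: r % n

pi = canonical_map(12)
print(pi(7), pi(19), pi(-5))              # 7 7 7: all three lie in the coset 7 + 12Z
print(pi(8 + 9) == (pi(8) + pi(9)) % 12)  # True: pi is additive
print(pi(8 * 9) == (pi(8) * pi(9)) % 12)  # True: pi is multiplicative
```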
Regular grid A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes (e.g. bricks).[1] Its opposite is irregular grid. Grids of this type appear on graph paper and may be used in finite element analysis, finite volume methods, finite difference methods, and in general for discretization of parameter spaces. Since the derivatives of field variables can be conveniently expressed as finite differences,[2] structured grids mainly appear in finite difference methods. Unstructured grids offer more flexibility than structured grids and hence are very useful in finite element and finite volume methods. Each cell in the grid can be addressed by index (i, j) in two dimensions or (i, j, k) in three dimensions, and each vertex has coordinates $(i\cdot dx,j\cdot dy)$ in 2D or $(i\cdot dx,j\cdot dy,k\cdot dz)$ in 3D for some real numbers dx, dy, and dz representing the grid spacing. Related grids A Cartesian grid is a special case where the elements are unit squares or unit cubes, and the vertices are points on the integer lattice. A rectilinear grid is a tessellation by rectangles or rectangular cuboids (also known as rectangular parallelepipeds) that are not, in general, all congruent to each other. The cells may still be indexed by integers as above, but the mapping from indexes to vertex coordinates is less uniform than in a regular grid. An example of a rectilinear grid that is not regular appears on logarithmic scale graph paper. A skewed grid is a tessellation of parallelograms or parallelepipeds. (If the unit lengths are all equal, it is a tessellation of rhombi or rhombohedra.) A curvilinear grid or structured grid is a grid with the same combinatorial structure as a regular grid, in which the cells are quadrilaterals or [general] cuboids, rather than rectangles or rectangular cuboids. Examples of various grids • 3-D Cartesian grid • 3-D rectilinear grid • 2-D curvilinear grid • Non-curvilinear combination of different 2-D curvilinear grids • 2-D triangular grid. See also • Cartesian coordinate system – Most common coordinate system (geometry) • Integer lattice – Lattice group in Euclidean space whose points are integer n-tuples • Unstructured grid – Unstructured (or irregular) grid is a tessellation of a part of the Euclidean plane • Discretization – Process of transferring continuous functions into discrete counterparts References 1. Uznanski, Dan. "Grid". From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. Retrieved 25 March 2012. 2. J.F. Thompson, B. K . Soni & N.P. Weatherill (1998). Handbook of Grid Generation. CRC-Press. ISBN 978-0-8493-2687-5. 
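The index-to-coordinate mapping described above is straightforward to implement. A minimal sketch (the class and method names are illustrative, not a standard API; standard library only):

```python
from dataclasses import dataclass

@dataclass
class RegularGrid2D:
    """A 2-D regular grid with congruent rectangular cells of size dx by dy."""
    dx: float
    dy: float

    def vertex(self, i: int, j: int) -> tuple[float, float]:
        # Vertex (i, j) has coordinates (i*dx, j*dy).
        return (i * self.dx, j * self.dy)

    def cell_corners(self, i: int, j: int) -> list[tuple[float, float]]:
        # Cell (i, j) is the rectangle spanned by vertices (i, j) and (i+1, j+1).
        return [self.vertex(i + di, j + dj) for dj in (0, 1) for di in (0, 1)]

grid = RegularGrid2D(dx=0.5, dy=0.25)
print(grid.vertex(3, 4))        # (1.5, 1.0)
print(grid.cell_corners(0, 0))  # [(0.0, 0.0), (0.5, 0.0), (0.0, 0.25), (0.5, 0.25)]
```

A Cartesian grid is the special case dx = dy = 1; a rectilinear grid would replace the uniform spacing by arbitrary monotone coordinate arrays, keeping the same integer indexing.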
Struve function In mathematics, the Struve functions Hα(x) are solutions y(x) of the non-homogeneous Bessel's differential equation: $x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y={\frac {4\left({\frac {x}{2}}\right)^{\alpha +1}}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}$ introduced by Hermann Struve (1882). The complex number α is the order of the Struve function, and is often an integer. The second-kind version $\mathbf {K} _{\alpha }(x)$ is defined by $\mathbf {K} _{\alpha }(x)=\mathbf {H} _{\alpha }(x)-Y_{\alpha }(x)$. The modified Struve functions Lα(x), equal to −ie−iαπ/2Hα(ix), are solutions y(x) of the non-homogeneous Bessel's differential equation: $x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y={\frac {4\left({\frac {x}{2}}\right)^{\alpha +1}}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}$ Their second-kind version $\mathbf {M} _{\alpha }(x)$ is defined by $\mathbf {M} _{\alpha }(x)=\mathbf {L} _{\alpha }(x)-I_{\alpha }(x)$. Definitions Since this is a non-homogeneous equation, solutions can be constructed from a single particular solution by adding the solutions of the homogeneous problem. In this case, the homogeneous solutions are the Bessel functions, and the particular solution may be chosen as the corresponding Struve function. Power series expansion Struve functions, denoted Hα(z), have the power series form $\mathbf {H} _{\alpha }(z)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{\Gamma \left(m+{\frac {3}{2}}\right)\Gamma \left(m+\alpha +{\frac {3}{2}}\right)}}\left({\frac {z}{2}}\right)^{2m+\alpha +1},$ where Γ(z) is the gamma function. The modified Struve functions, denoted Lα(z), have the following power series form $\mathbf {L} _{\alpha }(z)=\sum _{m=0}^{\infty }{\frac {1}{\Gamma \left(m+{\frac {3}{2}}\right)\Gamma \left(m+\alpha +{\frac {3}{2}}\right)}}\left({\frac {z}{2}}\right)^{2m+\alpha +1}.$ Integral form Another definition of the Struve function, valid for values of α satisfying Re(α) > −1/2, is possible in terms of the Poisson integral representation: $\mathbf {H} _{\alpha }(x)={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{1}(1-t^{2})^{\alpha -{\frac {1}{2}}}\sin xt~dt={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\frac {\pi }{2}}\sin(x\cos \tau )\sin ^{2\alpha }\tau ~d\tau ={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\frac {\pi }{2}}\sin(x\sin \tau )\cos ^{2\alpha }\tau ~d\tau $ $\mathbf {K} _{\alpha }(x)={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\infty }(1+t^{2})^{\alpha -{\frac {1}{2}}}e^{-xt}~dt={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\infty }e^{-x\sinh \tau }\cosh ^{2\alpha }\tau ~d\tau $ $\mathbf {L} _{\alpha }(x)={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{1}(1-t^{2})^{\alpha -{\frac {1}{2}}}\sinh xt~dt={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\frac {\pi }{2}}\sinh(x\cos \tau )\sin ^{2\alpha }\tau ~d\tau ={\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\frac {\pi }{2}}\sinh(x\sin \tau )\cos ^{2\alpha }\tau ~d\tau $ $\mathbf {M} _{\alpha }(x)=-{\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{1}(1-t^{2})^{\alpha -{\frac {1}{2}}}e^{-xt}~dt=-{\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\frac {\pi }{2}}e^{-x\cos \tau }\sin ^{2\alpha }\tau ~d\tau =-{\frac {2\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}\int _{0}^{\frac {\pi }{2}}e^{-x\sin \tau }\cos ^{2\alpha }\tau ~d\tau $ Asymptotic forms For small x, the power series expansion is given above. For large x, one obtains: $\mathbf {H} _{\alpha }(x)-Y_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha -1}}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {1}{2}}\right)}}+O\left(\left({\tfrac {x}{2}}\right)^{\alpha -3}\right),$ where Yα(x) is the Neumann function. Properties The Struve functions satisfy the following recurrence relations: ${\begin{aligned}\mathbf {H} _{\alpha -1}(x)+\mathbf {H} _{\alpha +1}(x)&={\frac {2\alpha }{x}}\mathbf {H} _{\alpha }(x)+{\frac {\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {3}{2}}\right)}},\\\mathbf {H} _{\alpha -1}(x)-\mathbf {H} _{\alpha +1}(x)&=2{\frac {d}{dx}}\left(\mathbf {H} _{\alpha }(x)\right)-{\frac {\left({\frac {x}{2}}\right)^{\alpha }}{{\sqrt {\pi }}\Gamma \left(\alpha +{\frac {3}{2}}\right)}}.\end{aligned}}$ Relation to other functions Struve functions of integer order can be expressed in terms of Weber functions En and vice versa: if n is a non-negative integer then ${\begin{aligned}\mathbf {E} _{n}(z)&={\frac {1}{\pi }}\sum _{k=0}^{\left\lfloor {\frac {n-1}{2}}\right\rfloor }{\frac {\Gamma \left(k+{\frac {1}{2}}\right)\left({\frac {z}{2}}\right)^{n-2k-1}}{\Gamma \left(n-k+{\frac {1}{2}}\right)}}-\mathbf {H} _{n}(z),\\\mathbf {E} _{-n}(z)&={\frac {(-1)^{n+1}}{\pi }}\sum _{k=0}^{\left\lfloor {\frac {n-1}{2}}\right\rfloor }{\frac {\Gamma (n-k-{\frac {1}{2}})\left({\frac {z}{2}}\right)^{-n+2k+1}}{\Gamma \left(k+{\frac {3}{2}}\right)}}-\mathbf {H} _{-n}(z).\end{aligned}}$ Struve functions of order n + 1/2 where n is an integer can be expressed in terms of elementary functions. In particular if n is a non-negative integer then $\mathbf {H} _{-n-{\frac {1}{2}}}(z)=(-1)^{n}J_{n+{\frac {1}{2}}}(z),$ where the right hand side is a spherical Bessel function. Struve functions (of any order) can be expressed in terms of the generalized hypergeometric function 1F2: $\mathbf {H} _{\alpha }(z)={\frac {z^{\alpha +1}}{2^{\alpha }{\sqrt {\pi }}\Gamma \left(\alpha +{\tfrac {3}{2}}\right)}}{}_{1}F_{2}\left(1;{\tfrac {3}{2}},\alpha +{\tfrac {3}{2}};-{\tfrac {z^{2}}{4}}\right).$ Applications The Struve and Weber functions have been shown to have applications to beamforming[1] and to describing the effect of a confining interface on the Brownian motion of colloidal particles at low Reynolds numbers.[2] References 1. K. Buchanan, C. Flores, S. Wheeland, J. Jensen, D. Grayson and G. Huff, "Transmit beamforming for radar applications using circularly tapered random arrays," 2017 IEEE Radar Conference (RadarConf), 2017, pp. 0112-0117, doi: 10.1109/RADAR.2017.7944181 2. B. U. Felderhof, "Effect of the wall on the velocity autocorrelation function and long-time tail of Brownian motion." The Journal of Physical Chemistry B 109.45, 2005, pp. 21406-21412 • R. M. Aarts and Augustus J. E. M. Janssen (2003).
"Approximation of the Struve function H1 occurring in impedance calculations". J. Acoust. Soc. Am. 113 (5): 2635–2637. Bibcode:2003ASAJ..113.2635A. doi:10.1121/1.1564019. PMID 12765381. • R. M. Aarts and Augustus J. E. M. Janssen (2016). "Efficient approximation of the Struve functions Hn occurring in the calculation of sound radiation quantities". J. Acoust. Soc. Am. 140 (6): 4154–4160. Bibcode:2016ASAJ..140.4154A. doi:10.1121/1.4968792. PMID 28040027. • Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 12". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 496. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. • Ivanov, A. B. (2001) [1994], "Struve function", Encyclopedia of Mathematics, EMS Press • Paris, R. B. (2010), "Struve function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. • Struve, H. (1882). "Beitrag zur Theorie der Diffraction an Fernröhren". Annalen der Physik und Chemie. 17 (13): 1008–1016. Bibcode:1882AnP...253.1008S. doi:10.1002/andp.18822531319. External links • Struve functions at the Wolfram functions site.
Stuck unknot In mathematics, a stuck unknot is a closed polygonal chain in three-dimensional space (a skew polygon) that is topologically equivalent to the unknot but cannot be deformed to a simple polygon when interpreted as a mechanical linkage, by rigid length-preserving and non-self-intersecting motions of its segments.[1][2] Similarly a stuck open chain is an open polygonal chain whose segments cannot be aligned by rigid motions. Topologically such a chain can be unknotted, but the restriction to rigid motions of the segments can leave it stuck in a nontrivially knotted configuration. Consideration of such "stuck" configurations arises in the study of molecular chains in biochemistry. References 1. G. Aloupis, G. Ewald, and G. T. Toussaint, "More classes of stuck unknotted hexagons," Contributions to Algebra and Geometry, Vol. 45, No. 2, 2004, pp. 429–434. 2. G. T. Toussaint, "A new class of stuck unknots in Pol-6," Contributions to Algebra and Geometry, Vol. 42, No. 2, 2001, pp. 301–306.
Stufe (algebra) In field theory, a branch of mathematics, the Stufe (/ʃtuːfə/; German: level) s(F) of a field F is the least number of squares that sum to −1. If −1 cannot be written as a sum of squares, s(F) = $\infty $. In this case, F is a formally real field. Albrecht Pfister proved that the Stufe, if finite, is always a power of 2, and that conversely every power of 2 occurs.[1] Powers of 2 If $s(F)\neq \infty $ then $s(F)=2^{k}$ for some natural number $k$.[1][2] Proof: Let $k\in \mathbb {N} $ be chosen such that $2^{k}\leq s(F)<2^{k+1}$. Let $n=2^{k}$. Then there are $s=s(F)$ elements $e_{1},\ldots ,e_{s}\in F\setminus \{0\}$ such that $0=\underbrace {1+e_{1}^{2}+\cdots +e_{n-1}^{2}} _{=:\,a}+\underbrace {e_{n}^{2}+\cdots +e_{s}^{2}} _{=:\,b}\;.$ Both $a$ and $b$ are sums of $n$ squares, and $a\neq 0$, since otherwise $s(F)<2^{k}$, contrary to the assumption on $k$. According to the theory of Pfister forms, the product $ab$ is itself a sum of $n$ squares, that is, $ab=c_{1}^{2}+\cdots +c_{n}^{2}$ for some $c_{i}\in F$. But since $a+b=0$, we also have $-a^{2}=ab$, and hence $-1={\frac {ab}{a^{2}}}=\left({\frac {c_{1}}{a}}\right)^{2}+\cdots +\left({\frac {c_{n}}{a}}\right)^{2},$ and thus $s(F)=n=2^{k}$. Positive characteristic Any field $F$ with positive characteristic has $s(F)\leq 2$.[3] Proof: Let $p=\operatorname {char} (F)$. It suffices to prove the claim for $\mathbb {F} _{p}$. If $p=2$ then $-1=1=1^{2}$, so $s(F)=1$. If $p>2$ consider the set $S=\{x^{2}:x\in \mathbb {F} _{p}\}$ of squares. $S\setminus \{0\}$ is a subgroup of index $2$ in the cyclic group $\mathbb {F} _{p}^{\times }$ with $p-1$ elements. Thus $S$ contains exactly ${\tfrac {p+1}{2}}$ elements, and so does $-1-S$. Since $\mathbb {F} _{p}$ only has $p$ elements in total, $S$ and $-1-S$ cannot be disjoint, that is, there are $x,y\in \mathbb {F} _{p}$ with $S\ni x^{2}=-1-y^{2}\in -1-S$ and thus $-1=x^{2}+y^{2}$. Properties The Stufe s(F) is related to the Pythagoras number p(F) by p(F) ≤ s(F) + 1.[4] If F is not formally real then s(F) ≤ p(F) ≤ s(F) + 1.[5][6] The additive order of the form (1), and hence the exponent of the Witt group of F is equal to 2s(F).[7][8] Examples • The Stufe of a quadratically closed field is 1.[8] • The Stufe of an algebraic number field is ∞, 1, 2 or 4 (Siegel's theorem).[9] Examples are Q, Q(√−1), Q(√−2) and Q(√−7).[7] • The Stufe of a finite field GF(q) is 1 if q ≡ 1 mod 4 and 2 if q ≡ 3 mod 4.[3][8][10] • The Stufe of a local field of odd residue characteristic is equal to that of its residue field. The Stufe of the 2-adic field Q2 is 4.[9] Notes 1. Rajwade (1993) p.13 2. Lam (2005) p.379 3. Rajwade (1993) p.33 4. Rajwade (1993) p.44 5. Rajwade (1993) p.228 6. Lam (2005) p.395 7. Milnor & Husemoller (1973) p.75 8. Lam (2005) p.380 9. Lam (2005) p.381 10. Singh, Sahib (1974). "Stufe of a finite field". Fibonacci Quarterly. 12: 81–82. ISSN 0015-0517. Zbl 0278.12008. References • Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. Zbl 1068.11023. • Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 73. Springer-Verlag. ISBN 3-540-06009-X. Zbl 0292.10016. • Rajwade, A. R. (1993). Squares. London Mathematical Society Lecture Note Series. Vol. 171. Cambridge University Press. ISBN 0-521-42668-5. Zbl 0785.11022. Further reading • Knebusch, Manfred; Scharlau, Winfried (1980). Algebraic theory of quadratic forms. 
Generic methods and Pfister forms. DMV Seminar. Vol. 1. Notes taken by Heisook Lee. Boston - Basel - Stuttgart: Birkhäuser Verlag. ISBN 3-7643-1206-8. Zbl 0439.10011.
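The finite-field examples above lend themselves to a direct computation. The following minimal Python sketch (the helper name stufe_mod_p is ours, not standard terminology) finds the least number of squares summing to −1 in $\mathbb {F} _{p}$ by brute force, confirming that the Stufe is 1 when p = 2 or p ≡ 1 mod 4, and 2 otherwise:

```python
# Brute-force Stufe of the prime field F_p: the least s such that
# -1 is a sum of s squares (a sketch; practical for small p only).
from itertools import combinations_with_replacement

def stufe_mod_p(p):
    squares = sorted({x * x % p for x in range(p)})
    s = 1
    while True:
        for combo in combinations_with_replacement(squares, s):
            if sum(combo) % p == p - 1:   # sum of s squares equal to -1 mod p
                return s
        s += 1

for p in (2, 3, 5, 7, 13, 19):
    print(p, stufe_mod_p(p))   # 1 if p = 2 or p % 4 == 1, else 2
```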
Stunted projective space In mathematics, a stunted projective space is a construction on a projective space of importance in homotopy theory, introduced by James (1959). Part of a conventional projective space is collapsed down to a point. More concretely, in a real projective space, complex projective space or quaternionic projective space KPn, where K stands for the real numbers, complex numbers or quaternions, one can find (in many ways) copies of KPm, where m < n. The corresponding stunted projective space is then KPn,m = KPn/KPm, where the notation implies that KPm has been collapsed to a point. The result is a topological space that is no longer a manifold. The importance of this construction was realised when it was shown that real stunted projective spaces arise as Spanier–Whitehead duals of the so-called quasi-projective spaces of Ioan James, which are constructed from Stiefel manifolds. Their properties were therefore linked to the construction of frame fields on spheres. In this way the vector fields on spheres question was reduced to a question on stunted projective spaces: for RPn,m, is there a degree one mapping on the 'next cell up' (of the first dimension not collapsed in the 'stunting') that extends to the whole space? Frank Adams showed that this could not happen, completing the proof of the vector fields on spheres theorem. In later developments the spaces KP∞,m and stunted lens spaces have also been used. References • James, I. M. (1959), "Spaces associated with Stiefel manifolds", Proceedings of the London Mathematical Society, Third Series, 9: 115–140, doi:10.1112/plms/s3-9.1.115, ISSN 0024-6115, MR 0102810
Sturm's theorem In mathematics, the Sturm sequence of a univariate polynomial p is a sequence of polynomials associated with p and its derivative by a variant of Euclid's algorithm for polynomials. Sturm's theorem expresses the number of distinct real roots of p located in an interval in terms of the number of changes of signs of the values of the Sturm sequence at the bounds of the interval. Applied to the interval of all the real numbers, it gives the total number of real roots of p.[1] Whereas the fundamental theorem of algebra readily yields the overall number of complex roots, counted with multiplicity, it does not provide a procedure for calculating them. Sturm's theorem counts the number of distinct real roots and locates them in intervals. By subdividing the intervals containing some roots, it can isolate the roots into arbitrarily small intervals, each containing exactly one root. This yields the oldest real-root isolation algorithm and the oldest arbitrary-precision root-finding algorithm for univariate polynomials. For computing over the reals, Sturm's theorem is less efficient than other methods based on Descartes' rule of signs. However, it works on every real closed field, and, therefore, remains fundamental for the theoretical study of the computational complexity of decidability and quantifier elimination in the first-order theory of real numbers. The Sturm sequence and Sturm's theorem are named after Jacques Charles François Sturm, who discovered the theorem in 1829.[2] The theorem The Sturm chain or Sturm sequence of a univariate polynomial P(x) with real coefficients is the sequence of polynomials $P_{0},P_{1},\ldots ,$ such that ${\begin{aligned}P_{0}&=P,\\P_{1}&=P',\\P_{i+1}&=-\operatorname {rem} (P_{i-1},P_{i}),\end{aligned}}$ for i ≥ 1, where P' is the derivative of P, and $\operatorname {rem} (P_{i-1},P_{i})$ is the remainder of the Euclidean division of $P_{i-1}$ by $P_{i}.$ The length of the Sturm sequence is at most the degree of P. The number of sign variations at ξ of the Sturm sequence of P is the number of sign changes (ignoring zeros) in the sequence of real numbers $P_{0}(\xi ),P_{1}(\xi ),P_{2}(\xi ),\ldots .$ This number of sign variations is denoted here V(ξ). Sturm's theorem states that, if P is a square-free polynomial, the number of distinct real roots of P in the half-open interval (a, b] is V(a) − V(b) (here, a and b are real numbers such that a < b).[1] The theorem extends to unbounded intervals by defining the sign at +∞ of a polynomial as the sign of its leading coefficient (that is, the coefficient of the term of highest degree). At −∞ the sign of a polynomial is the sign of its leading coefficient for a polynomial of even degree, and the opposite sign for a polynomial of odd degree. In the case of a non-square-free polynomial, if neither a nor b is a multiple root of P, then V(a) − V(b) is the number of distinct real roots of P. The proof of the theorem is as follows: when the value of x increases from a to b, it may pass through a zero of some $P_{i}$ (i > 0); when this occurs, the number of sign variations of $(P_{i-1},P_{i},P_{i+1})$ does not change. When x passes through a root of $P_{0}=P,$ the number of sign variations of $(P_{0},P_{1})$ decreases from 1 to 0. These are the only values of x where some sign may change. Example Suppose we wish to find the number of roots in some range for the polynomial $p(x)=x^{4}+x^{3}-x-1$. 
So ${\begin{aligned}p_{0}(x)&=p(x)=x^{4}+x^{3}-x-1\\p_{1}(x)&=p'(x)=4x^{3}+3x^{2}-1\end{aligned}}$ The remainder of the Euclidean division of p0 by p1 is $-{\tfrac {3}{16}}x^{2}-{\tfrac {3}{4}}x-{\tfrac {15}{16}};$ multiplying it by −1 we obtain $p_{2}(x)={\tfrac {3}{16}}x^{2}+{\tfrac {3}{4}}x+{\tfrac {15}{16}}$. Next dividing p1 by p2 and multiplying the remainder by −1, we obtain $p_{3}(x)=-32x-64$. Now dividing p2 by p3 and multiplying the remainder by −1, we obtain $p_{4}(x)=-{\tfrac {3}{16}}$. As this is a constant, this finishes the computation of the Sturm sequence. To find the number of real roots of $p_{0}$ one has to evaluate the sequences of the signs of these polynomials at −∞ and ∞, which are respectively (+, −, +, +, −) and (+, +, +, −, −). Thus $V(-\infty )-V(+\infty )=3-1=2,$ where V denotes the number of sign changes in the sequence, which shows that p has two real roots. This can be verified by noting that p(x) can be factored as $(x^{2}-1)(x^{2}+x+1)$, where the first factor has the roots −1 and 1, and the second factor has no real roots. This last assertion results from the quadratic formula, and also from Sturm's theorem, which gives the sign sequences (+, −, −) at −∞ and (+, +, −) at +∞. Generalization Sturm sequences have been generalized in two directions. To define each polynomial in the sequence, Sturm used the negative of the remainder of the Euclidean division of the two preceding ones. The theorem remains true if one replaces the negative of the remainder by its product or quotient by a positive constant or the square of a polynomial. It is also useful (see below) to consider sequences where the second polynomial is not the derivative of the first one. A generalized Sturm sequence is a finite sequence of polynomials with real coefficients $P_{0},P_{1},\dots ,P_{m}$ such that • the degrees are decreasing after the first one: $\deg P_{i}<\deg P_{i-1}$ for i = 2, ..., m; • $P_{m}$ does not have any real root or has no sign changes near its real roots; • if $P_{i}(\xi )=0$ for 0 < i < m and ξ a real number, then $P_{i-1}(\xi )P_{i+1}(\xi )<0$. The last condition implies that two consecutive polynomials do not have any common real root. In particular the original Sturm sequence is a generalized Sturm sequence, if (and only if) the polynomial has no multiple real root (otherwise the first two polynomials of its Sturm sequence have a common root). When computing the original Sturm sequence by Euclidean division, it may happen that one encounters a polynomial that has a factor that is never negative, such as $x^{2}$ or $x^{2}+1$. In this case, if one continues the computation with the polynomial replaced by its quotient by the nonnegative factor, one gets a generalized Sturm sequence, which may also be used for computing the number of real roots, since the proof of Sturm's theorem still applies (because of the third condition). This may sometimes simplify the computation, although it is generally difficult to find such nonnegative factors, except for even powers of x. Use of pseudo-remainder sequences In computer algebra, the polynomials that are considered have integer coefficients or may be transformed to have integer coefficients. The Sturm sequence of a polynomial with integer coefficients generally contains polynomials whose coefficients are not integers (see above example). To avoid computation with rational numbers, a common method is to replace Euclidean division by pseudo-division for computing polynomial greatest common divisors. 
This amounts to replacing the remainder sequence of the Euclidean algorithm by a pseudo-remainder sequence, a pseudo-remainder sequence being a sequence $p_{0},\ldots ,p_{k}$ of polynomials such that there are constants $a_{i}$ and $b_{i}$ such that $b_{i}p_{i+1}$ is the remainder of the Euclidean division of $a_{i}p_{i-1}$ by $p_{i}.$ (The different kinds of pseudo-remainder sequences are defined by the choice of $a_{i}$ and $b_{i};$ typically, $a_{i}$ is chosen for not introducing denominators during Euclidean division, and $b_{i}$ is a common divisor of the coefficients of the resulting remainder; see Pseudo-remainder sequence for details.) For example, the remainder sequence of the Euclidean algorithm is a pseudo-remainder sequence with $a_{i}=b_{i}=1$ for every i, and the Sturm sequence of a polynomial is a pseudo-remainder sequence with $a_{i}=1$ and $b_{i}=-1$ for every i. Various pseudo-remainder sequences have been designed for computing greatest common divisors of polynomials with integer coefficients without introducing denominators (see Pseudo-remainder sequence). They can all be made generalized Sturm sequences by choosing the sign of the $b_{i}$ to be the opposite of the sign of the $a_{i}.$ This allows the use of Sturm's theorem with pseudo-remainder sequences. Root isolation For a polynomial with real coefficients, root isolation consists of finding, for each real root, an interval that contains this root, and no other roots. This is useful for root finding, allowing the selection of the root to be found and providing a good starting point for fast numerical algorithms such as Newton's method; it is also useful for certifying the result, since, if Newton's method converges outside the interval, one may immediately deduce that it has converged to the wrong root. Root isolation is also useful for computing with algebraic numbers: a common method is to represent them as a pair consisting of a polynomial of which the algebraic number is a root, together with an isolation interval. For example ${\sqrt {2}}$ may be unambiguously represented by $(x^{2}-2,[0,2]).$ Sturm's theorem provides a way for isolating real roots that is less efficient (for polynomials with integer coefficients) than other methods involving Descartes' rule of signs. However, it remains useful in some circumstances, mainly for theoretical purposes, for example for algorithms of real algebraic geometry that involve infinitesimals.[3] For isolating the real roots, one starts from an interval $(a,b]$ containing all the real roots, or the roots of interest (often, typically in physical problems, only positive roots are of interest), and one computes $V(a)$ and $V(b).$ For defining this starting interval, one may use bounds on the size of the roots (see Properties of polynomial roots § Bounds on (complex) polynomial roots). Then, one divides this interval in two, by choosing c in the middle of $(a,b].$ The computation of $V(c)$ provides the number of real roots in $(a,c]$ and $(c,b],$ and one may repeat the same operation on each subinterval. When one encounters, during this process, an interval that does not contain any root, it may be suppressed from the list of intervals to consider. When one encounters an interval containing exactly one root, one may stop dividing it, as it is an isolation interval. The process eventually stops, when only isolating intervals remain. This isolating process may be used with any method for computing the number of real roots in an interval. 
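As an illustration of the chain construction and of the bisection-based isolation just described, here is a minimal exact-arithmetic sketch in Python (all helper names are ours); it rebuilds the Sturm sequence of the worked example above and counts and isolates its real roots:

```python
# A minimal sketch of Sturm's theorem with exact rational arithmetic.
# Polynomials are lists of Fraction coefficients, highest degree first.
from fractions import Fraction

def polydiv(num, den):
    """Quotient and remainder of polynomial division."""
    num = num[:]
    q = []
    while len(num) >= len(den):
        c = num[0] / den[0]
        q.append(c)
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)
    while num and num[0] == 0:    # strip leading zeros of the remainder
        num.pop(0)
    return q, num

def sturm_chain(p):
    """P0 = P, P1 = P', P_{i+1} = -rem(P_{i-1}, P_i), until a constant."""
    deriv = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]
    chain = [p, deriv]
    while len(chain[-1]) > 1:
        _, r = polydiv(chain[-2], chain[-1])
        if not r:                 # P was not square-free; stop
            break
        chain.append([-c for c in r])
    return chain

def variations(chain, x):
    """Number of sign changes, ignoring zeros, of the chain evaluated at x."""
    values = []
    for poly in chain:
        v = Fraction(0)
        for c in poly:            # Horner evaluation
            v = v * x + c
        if v != 0:
            values.append(v)
    return sum(1 for s, t in zip(values, values[1:]) if s * t < 0)

def count_roots(p, a, b):
    """Distinct real roots of square-free p in (a, b], by Sturm's theorem."""
    chain = sturm_chain(p)
    return variations(chain, Fraction(a)) - variations(chain, Fraction(b))

def isolate(p, a, b):
    """Bisect (a, b] into subintervals containing exactly one root each."""
    n = count_roots(p, a, b)
    if n == 0:
        return []
    if n == 1:
        return [(a, b)]
    m = (Fraction(a) + Fraction(b)) / 2
    return isolate(p, a, m) + isolate(p, m, b)

# The worked example above: p(x) = x^4 + x^3 - x - 1 has two real roots.
p = [Fraction(c) for c in (1, 1, 0, -1, -1)]
print(count_roots(p, -10, 10))   # 2
print(isolate(p, -10, 10))       # two intervals, one around -1, one around 1
```

Exact rational arithmetic is used because, as noted above, the Sturm sequence of an integer polynomial generally acquires non-integer coefficients.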
Theoretical complexity analysis and practical experience show that methods based on Descartes' rule of signs are more efficient. It follows that, nowadays, Sturm sequences are rarely used for root isolation. Application Generalized Sturm sequences allow counting the roots of a polynomial where another polynomial is positive (or negative), without computing these roots explicitly. If one knows an isolating interval for a root of the first polynomial, this also allows finding the sign of the second polynomial at this particular root of the first polynomial, without computing a better approximation of the root. Let P(x) and Q(x) be two polynomials with real coefficients such that P and Q have no common root and P has no multiple roots. In other words, P and P' Q are coprime polynomials. This restriction does not really affect the generality of what follows, since GCD computations allow reducing the general case to this case, and the cost of the computation of a Sturm sequence is the same as that of a GCD. Let W(a) denote the number of sign variations at a of a generalized Sturm sequence starting from P and P' Q. If a < b are two real numbers, then W(a) − W(b) is the number of roots ξ of P in the interval $(a,b]$ such that Q(ξ) > 0 minus the number of roots in the same interval such that Q(ξ) < 0. Combined with the total number of roots of P in the same interval given by Sturm's theorem, this gives the number of roots of P at which Q is positive and the number of roots at which Q is negative.[1] See also • Routh–Hurwitz theorem • Hurwitz's theorem (complex analysis) • Descartes' rule of signs • Rouché's theorem • Properties of polynomial roots • Gauss–Lucas theorem • Turán's inequalities References 1. (Basu, Pollack & Roy 2006) 2. O'Connor, John J.; Robertson, Edmund F. "Sturm's theorem". MacTutor History of Mathematics Archive. University of St Andrews. 3. (de Moura & Passmore 2013) • Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise (2006). "Section 2.2.2". Algorithms in real algebraic geometry (2nd ed.). Springer. pp. 52–57. ISBN 978-3-540-33098-1. • Sturm, Jacques Charles François (1829). "Mémoire sur la résolution des équations numériques". Bulletin des Sciences de Férussac. 11: 419–425. • Sylvester, J. J. (1853). "On a theory of the syzygetic relations of two rational integral functions, comprising an application to the theory of Sturm's functions, and that of the greatest algebraical common measure". Phil. Trans. R. Soc. Lond. 143: 407–548. doi:10.1098/rstl.1853.0018. JSTOR 108572. • Thomas, Joseph Miller (1941). "Sturm's theorem for multiple roots". National Mathematics Magazine. 15 (8): 391–394. doi:10.2307/3028551. JSTOR 3028551. MR 0005945. • Heindel, Lee E. (1971). Integer arithmetic algorithms for polynomial real zero determination. Proc. SYMSAC '71. p. 415. doi:10.1145/800204.806312. MR 0300434. S2CID 9971778. • de Moura, Leonardo; Passmore, Grant Olney (2013). "Computation in Real Closed Infinitesimal and Transcendental Extensions of the Rationals". Proc. CADE 2013. Lecture Notes in Computer Science. 7898: 178–192. doi:10.1007/978-3-642-38574-2_12. ISBN 978-3-642-38573-5. S2CID 9308312. • Panton, Don B.; Verdini, William A. (1981). "A fortran program for applying Sturm's theorem in counting internal rates of return". J. Financ. Quant. Anal. 16 (3): 381–388. doi:10.2307/2330245. JSTOR 2330245. S2CID 154334522. • Akritas, Alkiviadis G. (1982). "Reflections on a pair of theorems by Budan and Fourier". Math. Mag. 55 (5): 292–298. doi:10.2307/2690097. JSTOR 2690097. MR 0678195. 
• Pedersen, Paul (1991). "Multivariate Sturm theory". In Mattson, Harold F.; Mora, Teo; Rao, T. R. N. (eds.). Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, 9th International Symposium, AAECC-9, New Orleans, LA, USA, October 7–11, 1991, Proceedings. Lecture Notes in Computer Science. Vol. 539. Berlin: Springer. pp. 318–332. doi:10.1007/3-540-54522-0_120. ISBN 978-3-540-54522-4. MR 1229329. • Yap, Chee (2000). Fundamental Problems in Algorithmic Algebra. Oxford University Press. ISBN 0-19-512516-9. • Rahman, Q. I.; Schmeisser, G. (2002). Analytic theory of polynomials. London Mathematical Society Monographs. New Series. Vol. 26. Oxford: Oxford University Press. ISBN 0-19-853493-0. Zbl 1072.30006. • Baumol, William. Economic Dynamics, chapter 12, Section 3, "Qualitative information on real roots" • D. G. Hook and P. R. McAree, "Using Sturm Sequences to Bracket Real Roots of Polynomial Equations" in Graphics Gems I (A. Glassner ed.), Academic Press, pp. 416–422, 1990.
Sturm–Picone comparison theorem In mathematics, in the field of ordinary differential equations, the Sturm–Picone comparison theorem, named after Jacques Charles François Sturm and Mauro Picone, is a classical theorem which provides criteria for the oscillation and non-oscillation of solutions of certain linear differential equations in the real domain. Let pi, qi for i = 1, 2 be real-valued continuous functions on the interval [a, b] and let 1. $(p_{1}(x)y^{\prime })^{\prime }+q_{1}(x)y=0$ 2. $(p_{2}(x)y^{\prime })^{\prime }+q_{2}(x)y=0$ be two homogeneous linear second order differential equations in self-adjoint form with $0<p_{2}(x)\leq p_{1}(x)$ and $q_{1}(x)\leq q_{2}(x).$ Let u be a non-trivial solution of (1) with successive roots at z1 and z2 and let v be a non-trivial solution of (2). Then one of the following properties holds. • There exists an x in (z1, z2) such that v(x) = 0; or • there exists a λ in R such that v(x) = λ u(x). The first part of the conclusion is due to Sturm (1836),[1] while the second (alternative) part of the theorem is due to Picone (1910)[2][3] whose simple proof was given using his now famous Picone identity. In the special case where both equations are identical one obtains the Sturm separation theorem.[4] Notes 1. C. Sturm, Mémoire sur les équations différentielles linéaires du second ordre, J. Math. Pures Appl. 1 (1836), 106–186. 2. M. Picone, Sui valori eccezionali di un parametro da cui dipende un'equazione differenziale lineare ordinaria del second'ordine, Ann. Scuola Norm. Pisa 11 (1909), 1–141. 3. Hinton, D. (2005). "Sturm's 1836 Oscillation Results Evolution of the Theory". Sturm-Liouville Theory. pp. 1–27. doi:10.1007/3-7643-7359-8_1. ISBN 3-7643-7066-1. 4. For an extension of this important theorem to a comparison theorem involving three or more real second order equations see the Hartman–Mingarelli comparison theorem, where a simple proof was given using the Mingarelli identity. References • Diaz, J. B.; McLaughlin, Joyce R. (1969). "Sturm comparison theorems for ordinary and partial differential equations". Bull. Amer. Math. Soc. 75: 335–339. • Guggenheimer, Heinrich (1977). Applicable Geometry. Huntington: Krieger. p. 79. ISBN 0-88275-368-1. • Teschl, G. (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
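A quick numerical illustration of the first alternative of the theorem, with p1 = p2 = 1 and q1 = 1 ≤ q2 = 4 (a sketch; the phase 0.3 in v is an arbitrary choice): u(x) = sin x solves (1), v(x) = sin(2x + 0.3) solves (2), and v vanishes between any two successive zeros of u.

```python
# Sturm-Picone with p1 = p2 = 1, q1 = 1 <= q2 = 4 (math module only).
import math

u_zeros = [k * math.pi for k in range(1, 5)]                # zeros of sin x
v_zeros = [(k * math.pi - 0.3) / 2 for k in range(-2, 12)]  # zeros of sin(2x + 0.3)

for z1, z2 in zip(u_zeros, u_zeros[1:]):
    assert any(z1 < z < z2 for z in v_zeros)  # first alternative of the theorem
```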
Sturm separation theorem In mathematics, in the field of ordinary differential equations, the Sturm separation theorem, named after Jacques Charles François Sturm, describes the location of roots of solutions of homogeneous second order linear differential equations. Basically the theorem states that, given two linearly independent solutions of such an equation, the zeros of the two solutions alternate. Sturm separation theorem If u(x) and v(x) are two non-trivial continuous linearly independent solutions to a homogeneous second order linear differential equation with x0 and x1 being successive roots of u(x), then v(x) has exactly one root in the open interval (x0, x1). It is a special case of the Sturm–Picone comparison theorem. Proof Since $u$ and $v$ are linearly independent, the Wronskian $W[u,v]$ must satisfy $W[u,v](x)\equiv W(x)\neq 0$ for all $x$ where the differential equation is defined, say $I$. Without loss of generality, suppose that $W(x)<0$ for all $x\in I$. Then $u(x)v'(x)-u'(x)v(x)\neq 0.$ At $x=x_{0}$ we have $W(x_{0})=-u'\left(x_{0}\right)v\left(x_{0}\right)$, so either $u'\left(x_{0}\right)$ and $v\left(x_{0}\right)$ are both positive or both negative. Without loss of generality, suppose that they are both positive. Now, at $x=x_{1}$, $W(x_{1})=-u'\left(x_{1}\right)v\left(x_{1}\right)$. Since $x=x_{0}$ and $x=x_{1}$ are successive zeros of $u(x)$ and $u'(x_{0})>0$, the function $u$ is positive on $\left(x_{0},x_{1}\right)$; for $u$ to return to zero at $x=x_{1}$ we must have $u'\left(x_{1}\right)\leq 0$, and since $W(x_{1})\neq 0$ while $u(x_{1})=0$, in fact $u'\left(x_{1}\right)<0$. Thus, to keep $W(x)<0$, we must have $v\left(x_{1}\right)<0$. So the sign of $v(x)$ changed somewhere in the interval $\left(x_{0},x_{1}\right)$: by the intermediate value theorem there exists $x^{*}\in \left(x_{0},x_{1}\right)$ such that $v\left(x^{*}\right)=0$. On the other hand, there can be only one zero in $\left(x_{0},x_{1}\right)$, because otherwise v would have two zeros with no zero of u in between, and the same argument with the roles of u and v interchanged shows that this is impossible. References • Teschl, G. (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
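The standard illustration is the equation y′′ + y = 0 with the linearly independent solutions u = sin and v = cos, whose zeros alternate; the short Python check below (ours, not part of the standard treatment) verifies that each interval between successive zeros of sin contains exactly one zero of cos.

```python
# Zeros of sin and cos, two independent solutions of y'' + y = 0, alternate.
import math

u_zeros = [k * math.pi for k in range(1, 5)]                # zeros of sin
v_zeros = [math.pi / 2 + k * math.pi for k in range(0, 5)]  # zeros of cos

for x0, x1 in zip(u_zeros, u_zeros[1:]):
    inside = [z for z in v_zeros if x0 < z < x1]
    assert len(inside) == 1   # exactly one zero of cos in each interval
```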
Sturm series In mathematics, the Sturm series[1] associated with a pair of polynomials is named after Jacques Charles François Sturm. Definition Further information: Sturm chain Let $p_{0}$ and $p_{1}$ be two univariate polynomials. Suppose that they do not have a common root and that the degree of $p_{0}$ is greater than the degree of $p_{1}$. The Sturm series is constructed by: $p_{i}:=p_{i+1}q_{i+1}-p_{i+2}{\text{ for }}i\geq 0.$ This is almost the same algorithm as Euclid's, but the remainder $p_{i+2}$ has a negative sign. Sturm series associated to a characteristic polynomial We now consider the Sturm series $p_{0},p_{1},\dots ,p_{k}$ associated to a characteristic polynomial $P$ in the variable $\lambda $: $P(\lambda )=a_{0}\lambda ^{k}+a_{1}\lambda ^{k-1}+\cdots +a_{k-1}\lambda +a_{k}$ where $a_{i}$ for $i$ in $\{1,\dots ,k\}$ are rational functions in $\mathbb {R} (Z)$ with the coordinate set $Z$. The series begins with two polynomials obtained by dividing $P(\imath \mu )$ by $\imath ^{k}$, where $\imath $ represents the imaginary unit equal to ${\sqrt {-1}}$, and separating the real and imaginary parts: ${\begin{aligned}p_{0}(\mu )&:=\Re \left({\frac {P(\imath \mu )}{\imath ^{k}}}\right)=a_{0}\mu ^{k}-a_{2}\mu ^{k-2}+a_{4}\mu ^{k-4}\pm \cdots \\p_{1}(\mu )&:=-\Im \left({\frac {P(\imath \mu )}{\imath ^{k}}}\right)=a_{1}\mu ^{k-1}-a_{3}\mu ^{k-3}+a_{5}\mu ^{k-5}\pm \cdots \end{aligned}}$ The remaining terms are defined by the above relation. Due to the special structure of these polynomials, they can be written in the form: $p_{i}(\mu )=c_{i,0}\mu ^{k-i}+c_{i,1}\mu ^{k-i-2}+c_{i,2}\mu ^{k-i-4}+\cdots $ In these notations, the quotient $q_{i}$ is equal to $(c_{i-1,0}/c_{i,0})\mu $, which requires the condition $c_{i,0}\neq 0$. Moreover, substituting the polynomial $p_{i}$ in the above relation gives the following recursive formulas for the computation of the coefficients $c_{i,j}$: $c_{i+1,j}=c_{i,j+1}{\frac {c_{i-1,0}}{c_{i,0}}}-c_{i-1,j+1}={\frac {1}{c_{i,0}}}\det {\begin{pmatrix}c_{i-1,0}&c_{i-1,j+1}\\c_{i,0}&c_{i,j+1}\end{pmatrix}}.$ If $c_{i,0}=0$ for some $i$, the quotient $q_{i}$ is a polynomial of higher degree and the sequence $p_{i}$ stops at $p_{h}$ with $h<k$. References 1. (in French) C. F. Sturm. Résolution des équations algébriques. Bulletin de Férussac. 11:419–425. 1829.
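The recursive formula above is easy to put into code. The following Python sketch (helper name ours; it assumes numeric coefficients and the generic case $c_{i,0}\neq 0$, stopping early otherwise) builds the rows of coefficients $c_{i,j}$ from the coefficients $a_{i}$ of the characteristic polynomial:

```python
# Rows c[i][j] of the Sturm series, where p_i(mu) = sum_j c[i][j] mu^{k-i-2j}.
from fractions import Fraction

def sturm_series_rows(a):
    k = len(a) - 1
    # p0 and p1 come from the real and (negated) imaginary parts of P(i*mu)/i^k:
    rows = [[Fraction((-1) ** j * a[2 * j]) for j in range(k // 2 + 1)],
            [Fraction((-1) ** j * a[2 * j + 1]) for j in range((k + 1) // 2)]]
    get = lambda row, j: row[j] if j < len(row) else Fraction(0)
    for i in range(1, k):
        prev, cur = rows[i - 1], rows[i]
        if not cur or cur[0] == 0:    # degenerate case: the series stops early
            break
        rows.append([get(cur, j + 1) * prev[0] / cur[0] - get(prev, j + 1)
                     for j in range((k - i - 1) // 2 + 1)])
    return rows

# P(λ) = λ^2 + 3λ + 2 gives p0 = μ^2 - 2, p1 = 3μ, p2 = 2.
print(sturm_series_rows([1, 3, 2]))   # rows [1, -2], [3], [2] as Fractions
```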
Sturmian word In mathematics, a Sturmian word (Sturmian sequence or billiard sequence[1]), named after Jacques Charles François Sturm, is a certain kind of infinitely long sequence of characters. Such a sequence can be generated by considering a game of English billiards on a square table. The struck ball will successively hit the vertical and horizontal edges, labelled 0 and 1, generating a sequence of letters.[2] This sequence is a Sturmian word. Definition Sturmian sequences can be defined strictly in terms of their combinatorial properties or geometrically as cutting sequences for lines of irrational slope or codings for irrational rotations. They are traditionally taken to be infinite sequences on the alphabet of the two symbols 0 and 1. Sequences of low complexity For an infinite sequence of symbols w, let σ(n) be the complexity function of w; i.e., σ(n) = the number of distinct contiguous subwords (factors) in w of length n. Then w is Sturmian if σ(n) = n + 1 for all n. Balanced sequences A set X of binary strings is called balanced if the Hamming weight of elements of X takes at most two distinct values. That is, there are k and k′ such that for any $s\in X$, $|s|_{1}=k$ or $|s|_{1}=k'$, where $|s|_{1}$ is the number of 1s in $s$. Let w be an infinite sequence of 0s and 1s and let ${\mathcal {L}}_{n}(w)$ denote the set of all length-n subwords of w. The sequence w is Sturmian if ${\mathcal {L}}_{n}(w)$ is balanced for all n and w is not eventually periodic. Cutting sequence of irrational Let w be an infinite sequence of 0s and 1s. The sequence w is Sturmian if for some $x\in [0,1)$ and some irrational $\theta \in (0,\infty )$, w is realized as the cutting sequence of the line $f(t)=\theta t+x$. Difference of Beatty sequences Let w = (wn) be an infinite sequence of 0s and 1s. The sequence w is Sturmian if it is the difference of non-homogeneous Beatty sequences, that is, for some $x\in [0,1)$ and some irrational $\theta \in (0,1)$ $w_{n}=\lfloor n\theta +x\rfloor -\lfloor (n-1)\theta +x\rfloor $ for all $n$ or $w_{n}=\lceil n\theta +x\rceil -\lceil (n-1)\theta +x\rceil $ for all $n$. Coding of irrational rotation For $\theta \in [0,1)$, define $T_{\theta }:[0,1)\to [0,1)$ by $t\mapsto t+\theta {\bmod {1}}$. For $x\in [0,1)$ define the θ-coding of x to be the sequence (xn) where $x_{n}={\begin{cases}1&{\text{if }}T_{\theta }^{n}(x)\in [0,\theta ),\\0&{\text{else}}.\end{cases}}$ Let w be an infinite sequence of 0s and 1s. The sequence w is Sturmian if for some $x\in [0,1)$ and some irrational $\theta \in (0,1)$, w is the θ-coding of x. Discussion Example A famous example of a (standard) Sturmian word is the Fibonacci word;[3] its slope is $1/\phi $, where $\phi $ is the golden ratio. Balanced aperiodic sequences A set S of finite binary words is balanced if for each n the subset Sn of words of length n has the property that the Hamming weight of the words in Sn takes at most two distinct values. A balanced sequence is one for which the set of factors is balanced. A balanced sequence has at most n+1 distinct factors of length n.[4]: 43  An aperiodic sequence is one which does not consist of a finite sequence followed by a finite cycle. 
An aperiodic sequence has at least n + 1 distinct factors of length n.[4]: 43  A sequence is Sturmian if and only if it is balanced and aperiodic.[4]: 43  Slope and intercept A sequence $(a_{n})_{n\in \mathbb {N} }$ over {0,1} is a Sturmian word if and only if there exist two real numbers, the slope $\alpha $ and the intercept $\rho $, with $\alpha $ irrational, such that $a_{n}=\lfloor \alpha (n+1)+\rho \rfloor -\lfloor \alpha n+\rho \rfloor -\lfloor \alpha \rfloor $ for all $n$.[5]: 284 [6]: 152  Thus a Sturmian word provides a discretization of the straight line with slope $\alpha $ and intercept ρ. Without loss of generality, we can always assume $0<\alpha <1$, because for any integer k we have $\lfloor (\alpha +k)(n+1)+\rho \rfloor -\lfloor (\alpha +k)n+\rho \rfloor -\lfloor \alpha +k\rfloor =a_{n}.$ All the Sturmian words corresponding to the same slope $\alpha $ have the same set of factors; the word $c_{\alpha }$ corresponding to the intercept $\rho =0$ is the standard word or characteristic word of slope $\alpha $.[5]: 283  Hence, if $0<\alpha <1$, the characteristic word $c_{\alpha }$ is the first difference of the Beatty sequence corresponding to the irrational number $\alpha $. The standard word $c_{\alpha }$ is also the limit of a sequence of words $(s_{n})_{n\geq 0}$ defined recursively as follows: Let $[0;d_{1}+1,d_{2},\ldots ,d_{n},\ldots ]$ be the continued fraction expansion of $\alpha $, and define • $s_{0}=1$ • $s_{1}=0$ • $s_{n+1}=s_{n}^{d_{n}}s_{n-1}{\text{ for }}n>0$ where the product between words is just their concatenation. Every word in the sequence $(s_{n})_{n>0}$ is a prefix of the next ones, so that the sequence itself converges to an infinite word, which is $c_{\alpha }$. The infinite sequence of words $(s_{n})_{n\geq 0}$ defined by the above recursion is called the standard sequence for the standard word $c_{\alpha }$, and the infinite sequence d = (d1, d2, d3, ...) of nonnegative integers, with d1 ≥ 0 and dn > 0 (n ≥ 2), is called its directive sequence. A Sturmian word w over {0,1} is characteristic if and only if both 0w and 1w are Sturmian.[7] Frequencies If s is an infinite word and w is a finite word, let μN(w) denote the number of occurrences of w as a factor in the prefix of s of length N + |w| − 1. If μN(w)/N has a limit as N→∞, we call this the frequency of w, denoted by μ(w).[4]: 73  For a Sturmian word s, every finite factor has a frequency. The three-gap theorem implies that the factors of fixed length n have at most three distinct frequencies, and if there are three values then one is the sum of the other two.[4]: 73  Non-binary words For words over an alphabet of size k greater than 2, we define a Sturmian word to be one with complexity function n + k − 1.[6]: 6  They can be described in terms of cutting sequences for k-dimensional space.[6]: 84  An alternative definition is as words of minimal complexity subject to not being ultimately periodic.[6]: 85  Associated real numbers A real number for which the digits with respect to some fixed base form a Sturmian word is a transcendental number.[6]: 64, 85  Sturmian endomorphisms An endomorphism of the free monoid B∗ on a 2-letter alphabet B is Sturmian if it maps every Sturmian word to a Sturmian word[8][9] and locally Sturmian if it maps some Sturmian word to a Sturmian word.[10] The Sturmian endomorphisms form a submonoid of the monoid of endomorphisms of B∗.[8] Define endomorphisms φ and ψ of B∗, where B = {0,1}, by φ(0) = 01, φ(1) = 0 and ψ(0) = 10, ψ(1) = 0. 
Then I, φ and ψ are Sturmian,[11] and the Sturmian endomorphisms of B∗ are precisely those endomorphisms in the submonoid of the endomorphism monoid generated by {I,φ,ψ}.[9][10][7] A morphism is Sturmian if and only if the image of the word 10010010100101 is a balanced sequence; that is, for each n, the Hamming weights of the subwords of length n take at most two distinct values.[9][12] History Although the study of Sturmian words dates back to Johann III Bernoulli (1772),[13][5]: 295  it was Gustav A. Hedlund and Marston Morse in 1940 who coined the term Sturmian to refer to such sequences,[5]: 295 [14] in honor of the mathematician Jacques Charles François Sturm due to the relation with the Sturm comparison theorem.[6]: 114  See also • Cutting sequence • Word (group theory) • Morphic word • Lyndon word References 1. Hordijk, A.; Laan, D. A. (2001). "Bounds for Deterministic Periodic Routing sequences". Integer Programming and Combinatorial Optimization. Lecture Notes in Computer Science. Vol. 2081. p. 236. doi:10.1007/3-540-45535-3_19. ISBN 978-3-540-42225-9. 2. Győri, Ervin; Sós, Vera (2009). Recent Trends in Combinatorics: The Legacy of Paul Erdős. Cambridge University Press. p. 117. ISBN 978-0-521-12004-3. 3. de Luca, Aldo (1995). "A division property of the Fibonacci word". Information Processing Letters. 54 (6): 307–312. doi:10.1016/0020-0190(95)00067-M. 4. Lothaire, M. (2002). "Sturmian Words". Algebraic Combinatorics on Words. Cambridge: Cambridge University Press. ISBN 0-521-81220-8. Zbl 1001.68093. Retrieved 2007-02-25. 5. Allouche, Jean-Paul; Shallit, Jeffrey (2003). Automatic Sequences: Theory, Applications, Generalizations. Cambridge University Press. ISBN 978-0-521-82332-6. Zbl 1086.11015. 6. Pytheas Fogg, N. (2002). Berthé, Valérie; Ferenczi, Sébastien; Mauduit, Christian; Siegel, A. (eds.). Substitutions in dynamics, arithmetics and combinatorics. Lecture Notes in Mathematics. Vol. 1794. Berlin: Springer-Verlag. ISBN 3-540-44141-7. Zbl 1014.11015. 7. Berstel, J.; Séébold, P. (1994). "A remark on morphic Sturmian words". RAIRO, Inform. Théor. Appl. 2. 8 (3–4): 255–263. doi:10.1051/ita/1994283-402551. ISSN 0988-3754. Zbl 0883.68104. 8. Lothaire (2011, p. 83) 9. Pytheas Fogg (2002, p. 197) 10. Lothaire (2011, p. 85) 11. Lothaire (2011, p. 84) 12. Berstel, Jean; Séébold, Patrice (1993), "A characterization of Sturmian morphisms", in Borzyszkowski, Andrzej M.; Sokołowski, Stefan (eds.), Mathematical Foundations of Computer Science 1993. 18th International Symposium, MFCS'93 Gdańsk, Poland, August 30–September 3, 1993 Proceedings, Lecture Notes in Computer Science, vol. 711, pp. 281–290, doi:10.1007/3-540-57182-5_20, ISBN 978-3-540-57182-7, Zbl 0925.11026 13. J. Bernoulli III, Sur une nouvelle espece de calcul, Recueil pour les Astronomes, vol. 1, Berlin, 1772, pp. 255–284 14. Morse, M.; Hedlund, G. A. (1940). "Symbolic Dynamics II. Sturmian Trajectories". American Journal of Mathematics. 62 (1): 1–42. doi:10.2307/2371431. JSTOR 2371431. Further reading • Bugeaud, Yann (2012). Distribution modulo one and Diophantine approximation. Cambridge Tracts in Mathematics. Vol. 193. Cambridge: Cambridge University Press. ISBN 978-0-521-11169-0. Zbl 1260.11001. • Lothaire, M. (2011). Algebraic combinatorics on words. Encyclopedia of Mathematics and Its Applications. Vol. 90. With preface by Jean Berstel and Dominique Perrin (Reprint of the 2002 hardback ed.). Cambridge University Press. ISBN 978-0-521-18071-9. Zbl 1221.68183.
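As a computational illustration of the slope and intercept characterization above, the following Python sketch generates a prefix of the word with the (arbitrarily chosen) irrational slope α = 1/φ and intercept ρ = 0, and checks that the factor complexity comes out as σ(n) = n + 1; since 0 < α < 1, the term ⌊α⌋ in the formula vanishes:

```python
# Generate a Sturmian word from the slope/intercept formula and check
# that it has n + 1 distinct factors of each length n (floats suffice here).
from math import floor, sqrt

alpha = 2 / (1 + sqrt(5))   # 1/φ, an irrational slope in (0, 1)
rho = 0.0

word = [floor(alpha * (n + 1) + rho) - floor(alpha * n + rho)
        for n in range(200)]

for n in range(1, 6):
    factors = {tuple(word[i:i + n]) for i in range(len(word) - n + 1)}
    print(n, len(factors))  # prints: 1 2, 2 3, 3 4, 4 5, 5 6
```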
Icelandic Junior College Mathematics Competition The Icelandic Junior College Mathematics Competition (Stærðfræðikeppni framhaldsskólanema) is an annual mathematical olympiad first held in the winter of 1984–1985. It is hosted by the Icelandic Mathematical Society (Íslenska stærðfræðafélagið) and the Natural Sciences Teachers' Association, and is the largest competition of its kind in the country. Its goals include increasing the interest of Icelandic secondary school students in mathematics and in other fields built on a mathematical foundation. The contest is held in two parts every winter. First, a qualifier is held in October of every year at two difficulty levels: an upper level and a lower level. The lower level is intended for first-year secondary school students, and the upper level for older students. Those who do well in the qualifier are invited to the final competition, held in March. Alongside honours and awards, the top students are selected to compete in various mathematical olympiads, including the Baltic Way, the Nordic Mathematical Contest, and the International Mathematical Olympiad.[1] References 1. "Um keppnina". stae.is (in Icelandic). Iceland: Stærðfræðifélag Íslands. Archived from the original on 14 September 2017. Retrieved 14 September 2017.
Størmer number In mathematics, a Størmer number or arc-cotangent irreducible number is a positive integer $n$ for which the greatest prime factor of $n^{2}+1$ is greater than or equal to $2n$. They are named after Carl Størmer. Sequence The first few Størmer numbers are: 1, 2, 4, 5, 6, 9, 10, 11, 12, 14, 15, 16, 19, 20, ... (sequence A005528 in the OEIS). Density John Todd proved that this sequence is neither finite nor cofinite.[1] The natural density of the Størmer numbers is an unsolved problem: it is known to lie between 0.5324 and 0.905, and it has been conjectured to be the natural logarithm of 2, approximately 0.693, but this remains unproven.[2] Because the Størmer numbers have positive density, they form a large set. Application The Størmer numbers arise in connection with the problem of representing the Gregory numbers (arctangents of rational numbers) $G_{a/b}=\arctan {\frac {b}{a}}$ as sums of Gregory numbers for integers (arctangents of unit fractions). The Gregory number $G_{a/b}$ may be decomposed by repeatedly multiplying the Gaussian integer $a+bi$ by numbers of the form $n\pm i$, in order to cancel prime factors $p$ from the imaginary part; here $n$ is chosen to be a Størmer number such that $n^{2}+1$ is divisible by $p$.[3] References 1. Todd, John (1949), "A problem on arc tangent relations", American Mathematical Monthly, 56 (8): 517–528, doi:10.2307/2305526, JSTOR 2305526, MR 0031496. 2. Everest, Graham; Harman, Glyn (2008), "On primitive divisors of $n^{2}+b$", Number theory and polynomials, London Math. Soc. Lecture Note Ser., vol. 352, Cambridge Univ. Press, Cambridge, pp. 142–154, arXiv:math/0701234, doi:10.1017/CBO9780511721274.011, MR 2428520. See in particular Theorem 1.4 and Conjecture 1.5. 3. Conway, John H.; Guy, R. K. (1996), The Book of Numbers, New York: Copernicus Press, pp. 245–248. See in particular p. 245, para. 3. 
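The definition is easy to check by brute force. The following Python sketch (helper name ours; naive trial division, so small n only) reproduces the beginning of the sequence above:

```python
# n is a Størmer number when the greatest prime factor of n^2 + 1 is >= 2n.
def greatest_prime_factor(m):
    p, best = 2, 1
    while p * p <= m:
        while m % p == 0:
            best, m = p, m // p
        p += 1
    return max(best, m) if m > 1 else best

stormer = [n for n in range(1, 21)
           if greatest_prime_factor(n * n + 1) >= 2 * n]
print(stormer)  # [1, 2, 4, 5, 6, 9, 10, 11, 12, 14, 15, 16, 19, 20]
```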
Special unitary group In mathematics, the special unitary group of degree n, denoted SU(n), is the Lie group of n × n unitary matrices with determinant 1. The matrices of the more general unitary group may have complex determinants with absolute value 1, rather than real 1 in the special case. The group operation is matrix multiplication. The special unitary group is a normal subgroup of the unitary group U(n), consisting of all n×n unitary matrices. As a compact classical group, U(n) is the group that preserves the standard inner product on $\mathbb {C} ^{n}$.[lower-alpha 1] It is itself a subgroup of the general linear group, $\operatorname {SU} (n)\subset \operatorname {U} (n)\subset \operatorname {GL} (n,\mathbb {C} )$. 
The SU(n) groups find wide application in the Standard Model of particle physics, especially SU(2) in the electroweak interaction and SU(3) in quantum chromodynamics.[1] The groups SU(2n) are important in quantum computing, as they represent the possible quantum logic gate operations in a quantum circuit with $n$ qubits and thus $2^{n}$ basis states. (Alternatively, the more general unitary group $U(2^{n})$ can be used, since multiplying by a global phase factor $e^{i\varphi }$ does not change the expectation values of a quantum operator.) The simplest case, SU(1), is the trivial group, having only a single element. The group SU(2) is isomorphic to the group of quaternions of norm 1, and is thus diffeomorphic to the 3-sphere. Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign), there is a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is {+I, −I}.[lower-alpha 2] SU(2) is also identical to one of the symmetry groups of spinors, Spin(3), that enables a spinor presentation of rotations. Properties The special unitary group SU(n) is a strictly real Lie group (vs. a more general complex Lie group). Its dimension as a real manifold is $n^{2}-1$. Topologically, it is compact and simply connected.[2] Algebraically, it is a simple Lie group (meaning its Lie algebra is simple; see below).[3] The center of SU(n) is isomorphic to the cyclic group $\mathbb {Z} /n\mathbb {Z} $, and is composed of the diagonal matrices ζ I for ζ an n‑th root of unity and I the n×n identity matrix. Its outer automorphism group for n ≥ 3 is $\,\mathbb {Z} /2\mathbb {Z} \,,$ while the outer automorphism group of SU(2) is the trivial group. A maximal torus of rank n − 1 is given by the set of diagonal matrices with determinant 1. The Weyl group of SU(n) is the symmetric group Sn, which is represented by signed permutation matrices (the signs being necessary to ensure the determinant is 1). The Lie algebra of SU(n), denoted by ${\mathfrak {su}}(n)$, can be identified with the set of traceless anti‑Hermitian n×n complex matrices, with the regular commutator as a Lie bracket. Particle physicists often use a different, equivalent representation: the set of traceless Hermitian n×n complex matrices with Lie bracket given by −i times the commutator. Lie algebra Main article: Classical group § U(p, q) and U(n) – the unitary groups The Lie algebra ${\mathfrak {su}}(n)$ of $\operatorname {SU} (n)$ consists of $n\times n$ skew-Hermitian matrices with trace zero.[4] This (real) Lie algebra has dimension $n^{2}-1$. More information about the structure of this Lie algebra can be found below in the section "Lie algebra structure." Fundamental representation In the physics literature, it is common to identify the Lie algebra with the space of trace-zero Hermitian (rather than the skew-Hermitian) matrices. That is to say, the physicists' Lie algebra differs by a factor of $i$ from the mathematicians'. With this convention, one can then choose generators Ta that are traceless Hermitian complex n×n matrices, where: $T_{a}\,T_{b}={\tfrac {1}{\,2n\,}}\,\delta _{ab}\,I_{n}+{\tfrac {1}{2}}\,\sum _{c=1}^{n^{2}-1}\left(if_{abc}+d_{abc}\right)\,T_{c}$ where the f are the structure constants and are antisymmetric in all indices, while the d-coefficients are symmetric in all indices. 
As a consequence, the commutator is: $~\left[T_{a},\,T_{b}\right]~=~i\sum _{c=1}^{n^{2}-1}\,f_{abc}\,T_{c}\;,$ and the corresponding anticommutator is: $\left\{T_{a},\,T_{b}\right\}~=~{\tfrac {1}{n}}\,\delta _{ab}\,I_{n}+\sum _{c=1}^{n^{2}-1}{d_{abc}\,T_{c}}~.$ The factor of $i$ in the commutation relation arises from the physics convention and is not present when using the mathematicians' convention. The conventional normalization condition is $\sum _{c,e=1}^{n^{2}-1}d_{ace}\,d_{bce}={\tfrac {\,n^{2}-4\,}{n}}\,\delta _{ab}~.$ Adjoint representation In the $(n^{2}-1)$-dimensional adjoint representation, the generators are represented by $(n^{2}-1)\times (n^{2}-1)$ matrices, whose elements are defined by the structure constants themselves: $\left(T_{a}\right)_{jk}=-if_{ajk}.$ The group SU(2) See also: Versor, Pauli matrices, 3D rotation group § A note on Lie algebras, and Representation theory of SU(2) Using matrix multiplication for the binary operation, SU(2) forms a group,[5] $\operatorname {SU} (2)=\left\{{\begin{pmatrix}\alpha &-{\overline {\beta }}\\\beta &{\overline {\alpha }}\end{pmatrix}}:\ \ \alpha ,\beta \in \mathbb {C} ,|\alpha |^{2}+|\beta |^{2}=1\right\}~,$ where the overline denotes complex conjugation. Diffeomorphism with the 3-sphere $S^{3}$ If we consider $\alpha ,\beta $ as a pair in $\mathbb {C} ^{2}$ where $\alpha =a+bi$ and $\beta =c+di$, then the equation $|\alpha |^{2}+|\beta |^{2}=1$ becomes $a^{2}+b^{2}+c^{2}+d^{2}=1$ This is the equation of the 3-sphere $S^{3}$. This can also be seen using an embedding: the map ${\begin{aligned}\varphi \colon \mathbb {C} ^{2}\to {}&\operatorname {M} (2,\mathbb {C} )\\[5pt]\varphi (\alpha ,\beta )={}&{\begin{pmatrix}\alpha &-{\overline {\beta }}\\\beta &{\overline {\alpha }}\end{pmatrix}},\end{aligned}}$ where $\operatorname {M} (2,\mathbb {C} )$ denotes the set of 2 by 2 complex matrices, is an injective real linear map (by considering $\mathbb {C} ^{2}$ diffeomorphic to $\mathbb {R} ^{4}$ and $\operatorname {M} (2,\mathbb {C} )$ diffeomorphic to $\mathbb {R} ^{8}$). Hence, the restriction of φ to the 3-sphere (since modulus is 1), denoted $S^{3}$, is an embedding of the 3-sphere onto a compact submanifold of $\operatorname {M} (2,\mathbb {C} )$, namely $\varphi (S^{3})=\operatorname {SU} (2)$. Therefore, as a manifold, $S^{3}$ is diffeomorphic to SU(2), which shows that SU(2) is simply connected and that $S^{3}$ can be endowed with the structure of a compact, connected Lie group. Isomorphism with group of versors Quaternions of norm 1 are called versors since they generate the rotation group SO(3): The SU(2) matrix: ${\begin{pmatrix}a+bi&c+di\\-c+di&a-bi\end{pmatrix}}\quad (a,b,c,d\in \mathbb {R} )$ can be mapped to the quaternion $a\,{\hat {1}}+b\,{\hat {i}}+c\,{\hat {j}}+d\,{\hat {k}}$ This map is in fact a group isomorphism. Additionally, the determinant of the matrix is the squared norm of the corresponding quaternion. Clearly any matrix in SU(2) is of this form and, since it has determinant 1, the corresponding quaternion has norm 1. Thus SU(2) is isomorphic to the group of versors.[6] Relation to spatial rotations Main articles: 3D rotation group § Connection between SO(3) and SU(2), and Quaternions and spatial rotation Every versor is naturally associated to a spatial rotation in 3 dimensions, and the product of versors is associated to the composition of the associated rotations. Furthermore, every rotation arises from exactly two versors in this fashion. 
In short: there is a 2:1 surjective homomorphism from SU(2) to SO(3); consequently SO(3) is isomorphic to the quotient group SU(2)/{±I}, the manifold underlying SO(3) is obtained by identifying antipodal points of the 3-sphere $S^{3}$, and SU(2) is the universal cover of SO(3). Lie algebra The Lie algebra of SU(2) consists of $2\times 2$ skew-Hermitian matrices with trace zero.[7] Explicitly, this means ${\mathfrak {su}}(2)=\left\{{\begin{pmatrix}i\ a&-{\overline {z}}\\z&-i\ a\end{pmatrix}}:\ a\in \mathbb {R} ,z\in \mathbb {C} \right\}~.$ The Lie algebra is then generated by the following matrices, $u_{1}={\begin{pmatrix}0&i\\i&0\end{pmatrix}},\quad u_{2}={\begin{pmatrix}0&-1\\1&0\end{pmatrix}},\quad u_{3}={\begin{pmatrix}i&0\\0&-i\end{pmatrix}}~,$ which have the form of the general element specified above. This can also be written as ${\mathfrak {su}}(2)=\operatorname {span} \left\{i\sigma _{1},i\sigma _{2},i\sigma _{3}\right\}$ using the Pauli matrices. These satisfy the quaternion relationships $u_{2}\ u_{3}=-u_{3}\ u_{2}=u_{1}~,$ $u_{3}\ u_{1}=-u_{1}\ u_{3}=u_{2}~,$ and $u_{1}u_{2}=-u_{2}\ u_{1}=u_{3}~.$ The commutator bracket is therefore specified by $\left[u_{3},u_{1}\right]=2\ u_{2},\quad \left[u_{1},u_{2}\right]=2\ u_{3},\quad \left[u_{2},u_{3}\right]=2\ u_{1}~.$ The above generators are related to the Pauli matrices by $u_{1}=i\ \sigma _{1}~,\,u_{2}=-i\ \sigma _{2}$ and $u_{3}=+i\ \sigma _{3}~.$ This representation is routinely used in quantum mechanics to represent the spin of fundamental particles such as electrons. They also serve as unit vectors for the description of our 3 spatial dimensions in loop quantum gravity. They also correspond to the Pauli X, Y, and Z gates, which are standard generators for the single-qubit gates, corresponding to 3D rotations about the axes of the Bloch sphere. The Lie algebra serves to work out the representations of SU(2). The group SU(3) $SU(3)$ is an 8-dimensional simple Lie group consisting of all 3 × 3 unitary matrices with determinant 1. Topology The group $SU(3)$ is a simply-connected, compact Lie group.[8] Its topological structure can be understood by noting that SU(3) acts transitively on the unit sphere $S^{5}$ in $\mathbb {C} ^{3}\cong \mathbb {R} ^{6}$. The stabilizer of an arbitrary point in the sphere is isomorphic to SU(2), which topologically is a 3-sphere. It then follows that SU(3) is a fiber bundle over the base $S^{5}$ with fiber $S^{3}$. Since the fibers and the base are simply connected, the simple connectedness of SU(3) then follows by means of a standard topological result (the long exact sequence of homotopy groups for fiber bundles).[9] The $SU(2)$-bundles over $S^{5}$ are classified by $\pi _{4}{\mathord {\left(S^{3}\right)}}=\mathbb {Z} _{2}$ since any such bundle can be constructed by looking at trivial bundles on the two hemispheres $S_{N}^{5},S_{S}^{5}$ and looking at the transition function on their intersection, which is homotopy equivalent to $S^{4}$, so $S_{N}^{5}\cap S_{S}^{5}\simeq S^{4}$ Then, all such transition functions are classified by homotopy classes of maps $\left[S^{4},SU(2)\right]\cong \left[S^{4},S^{3}\right]=\pi _{4}{\mathord {\left(S^{3}\right)}}\cong \mathbb {Z} /2$ and as $\pi _{4}(SU(3))=\{0\}$ rather than $\mathbb {Z} /2$, $SU(3)$ cannot be the trivial bundle $SU(2)\times S^{5}\cong S^{3}\times S^{5}$, and therefore must be the unique nontrivial (twisted) bundle. This can be shown by looking at the induced long exact sequence on homotopy groups. 
Representation theory The representation theory of $SU(3)$ is well-understood.[10] Descriptions of these representations, from the point of view of its complexified Lie algebra ${\mathfrak {sl}}(3;\mathbb {C} )$, may be found in the articles on Lie algebra representations or the Clebsch–Gordan coefficients for SU(3). Lie algebra The generators, T, of the Lie algebra ${\mathfrak {su}}(3)$ of $SU(3)$ in the defining (particle physics, Hermitian) representation, are $T_{a}={\frac {\lambda _{a}}{2}}~,$ where λa, the Gell-Mann matrices, are the SU(3) analog of the Pauli matrices for SU(2): ${\begin{aligned}\lambda _{1}={}&{\begin{pmatrix}0&1&0\\1&0&0\\0&0&0\end{pmatrix}},&\lambda _{2}={}&{\begin{pmatrix}0&-i&0\\i&0&0\\0&0&0\end{pmatrix}},&\lambda _{3}={}&{\begin{pmatrix}1&0&0\\0&-1&0\\0&0&0\end{pmatrix}},\\[6pt]\lambda _{4}={}&{\begin{pmatrix}0&0&1\\0&0&0\\1&0&0\end{pmatrix}},&\lambda _{5}={}&{\begin{pmatrix}0&0&-i\\0&0&0\\i&0&0\end{pmatrix}},\\[6pt]\lambda _{6}={}&{\begin{pmatrix}0&0&0\\0&0&1\\0&1&0\end{pmatrix}},&\lambda _{7}={}&{\begin{pmatrix}0&0&0\\0&0&-i\\0&i&0\end{pmatrix}},&\lambda _{8}={\frac {1}{\sqrt {3}}}&{\begin{pmatrix}1&0&0\\0&1&0\\0&0&-2\end{pmatrix}}.\end{aligned}}$ These λa span all traceless Hermitian matrices H of the Lie algebra, as required. Note that λ2, λ5, λ7 are antisymmetric. They obey the relations ${\begin{aligned}\left[T_{a},T_{b}\right]&=i\sum _{c=1}^{8}f_{abc}T_{c},\\\left\{T_{a},T_{b}\right\}&={\frac {1}{3}}\delta _{ab}I_{3}+\sum _{c=1}^{8}d_{abc}T_{c},\end{aligned}}$ or, equivalently, $\{\lambda _{a},\lambda _{b}\}={\frac {4}{3}}\delta _{ab}I_{3}+2\sum _{c=1}^{8}{d_{abc}\lambda _{c}}$. The f are the structure constants of the Lie algebra, given by ${\begin{aligned}f_{123}&=1,\\f_{147}=-f_{156}=f_{246}=f_{257}=f_{345}=-f_{367}&={\frac {1}{2}},\\f_{458}=f_{678}&={\frac {\sqrt {3}}{2}},\end{aligned}}$ while all other fabc not related to these by permutation are zero. In general, they vanish unless they contain an odd number of indices from the set {2, 5, 7}.[lower-alpha 3] The symmetric coefficients d take the values ${\begin{aligned}d_{118}=d_{228}=d_{338}=-d_{888}&={\frac {1}{\sqrt {3}}}\\d_{448}=d_{558}=d_{668}=d_{778}&=-{\frac {1}{2{\sqrt {3}}}}\\d_{344}=d_{355}=-d_{366}=-d_{377}=-d_{247}=d_{146}=d_{157}=d_{256}&={\frac {1}{2}}~.\end{aligned}}$ They vanish if the number of indices from the set {2, 5, 7} is odd. 
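These relations are straightforward to verify numerically. The following short check (assuming NumPy is available; not part of the standard treatment) confirms two of the values above, $f_{123}=1$ and $d_{118}=1/{\sqrt {3}}$, directly from the Gell-Mann matrices:

```python
# Spot-check of the su(3) commutator and anticommutator relations.
import numpy as np

l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l8 = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)
T1, T2, T3, T8 = (m / 2 for m in (l1, l2, l3, l8))

assert np.allclose(T1 @ T2 - T2 @ T1, 1j * T3)       # [T1, T2] = i f_123 T3
assert np.allclose(T1 @ T1 + T1 @ T1,
                   np.eye(3) / 3 + T8 / np.sqrt(3))  # {T1, T1} = I/3 + d_118 T8
```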
A generic SU(3) group element generated by a traceless 3×3 Hermitian matrix H, normalized as tr($H^{2}$) = 2, can be expressed as a second order matrix polynomial in H:[11] ${\begin{aligned}\exp(i\theta H)={}&\left[-{\frac {1}{3}}I\sin \left(\varphi +{\frac {2\pi }{3}}\right)\sin \left(\varphi -{\frac {2\pi }{3}}\right)-{\frac {1}{2{\sqrt {3}}}}~H\sin(\varphi )-{\frac {1}{4}}~H^{2}\right]{\frac {\exp \left({\frac {2}{\sqrt {3}}}~i\theta \sin(\varphi )\right)}{\cos \left(\varphi +{\frac {2\pi }{3}}\right)\cos \left(\varphi -{\frac {2\pi }{3}}\right)}}\\[6pt]&{}+\left[-{\frac {1}{3}}~I\sin(\varphi )\sin \left(\varphi -{\frac {2\pi }{3}}\right)-{\frac {1}{2{\sqrt {3}}}}~H\sin \left(\varphi +{\frac {2\pi }{3}}\right)-{\frac {1}{4}}~H^{2}\right]{\frac {\exp \left({\frac {2}{\sqrt {3}}}~i\theta \sin \left(\varphi +{\frac {2\pi }{3}}\right)\right)}{\cos(\varphi )\cos \left(\varphi -{\frac {2\pi }{3}}\right)}}\\[6pt]&{}+\left[-{\frac {1}{3}}~I\sin(\varphi )\sin \left(\varphi +{\frac {2\pi }{3}}\right)-{\frac {1}{2{\sqrt {3}}}}~H\sin \left(\varphi -{\frac {2\pi }{3}}\right)-{\frac {1}{4}}~H^{2}\right]{\frac {\exp \left({\frac {2}{\sqrt {3}}}~i\theta \sin \left(\varphi -{\frac {2\pi }{3}}\right)\right)}{\cos(\varphi )\cos \left(\varphi +{\frac {2\pi }{3}}\right)}}\end{aligned}}$ where $\varphi \equiv {\frac {1}{3}}\left[\arccos \left({\frac {3{\sqrt {3}}}{2}}\det H\right)-{\frac {\pi }{2}}\right].$ Lie algebra structure As noted above, the Lie algebra ${\mathfrak {su}}(n)$ of $\operatorname {SU} (n)$ consists of $n\times n$ skew-Hermitian matrices with trace zero.[12] The complexification of the Lie algebra ${\mathfrak {su}}(n)$ is ${\mathfrak {sl}}(n;\mathbb {C} )$, the space of all $n\times n$ complex matrices with trace zero.[13] A Cartan subalgebra then consists of the diagonal matrices with trace zero,[14] which we identify with vectors in $\mathbb {C} ^{n}$ whose entries sum to zero. The roots then consist of all the n(n − 1) permutations of (1, −1, 0, ..., 0). A choice of simple roots is ${\begin{aligned}(&1,-1,0,\dots ,0,0),\\(&0,1,-1,\dots ,0,0),\\&\vdots \\(&0,0,0,\dots ,1,-1).\end{aligned}}$ So, SU(n) is of rank n − 1 and its Dynkin diagram is given by $A_{n-1}$, a chain of n − 1 nodes.[15] Its Cartan matrix is ${\begin{pmatrix}2&-1&0&\dots &0\\-1&2&-1&\dots &0\\0&-1&2&\dots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\dots &2\end{pmatrix}}.$ Its Weyl group or Coxeter group is the symmetric group $S_{n}$, the symmetry group of the (n − 1)-simplex. Generalized special unitary group For a field F, the generalized special unitary group over F, SU(p, q; F), is the group of all linear transformations of determinant 1 of a vector space of rank n = p + q over F which leave invariant a nondegenerate, Hermitian form of signature (p, q). This group is often referred to as the special unitary group of signature (p, q) over F. The field F can be replaced by a commutative ring R, in which case the vector space is replaced by a free module. Specifically, fix a Hermitian matrix A of signature (p, q) in $\operatorname {GL} (n,R)$; then all $M\in \operatorname {SU} (p,q;R)$ satisfy ${\begin{aligned}M^{*}AM&=A\\\det M&=1.\end{aligned}}$ Often one will see the notation SU(p, q) without reference to a ring or field; in this case, the ring or field being referred to is $\mathbb {C} $ and this gives one of the classical Lie groups.
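As a concrete sketch of the defining conditions, the following Python/numpy check (my own illustrative example) verifies $M^{*}AM=A$ and $\det M=1$ for a one-parameter family in SU(1,1), taking the diagonal Hermitian matrix $A=\operatorname {diag} (1,-1)$ of signature (1, 1):

```python
import numpy as np

A = np.diag([1.0, -1.0])        # Hermitian form of signature (1, 1)
t = 0.7                         # arbitrary real parameter
M = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])   # a hyperbolic rotation

print(np.allclose(M.conj().T @ A @ M, A))  # True: M preserves the form
print(np.isclose(np.linalg.det(M), 1.0))   # True: cosh²t − sinh²t = 1
```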
The standard choice for A when $\operatorname {F} =\mathbb {C} $ is $A={\begin{bmatrix}0&0&i\\0&I_{n-2}&0\\-i&0&0\end{bmatrix}}.$ However, there may be better choices for A for certain dimensions which exhibit more behaviour under restriction to subrings of $\mathbb {C} $. Example An important example of this type of group is the Picard modular group $\operatorname {SU} (2,1;\mathbb {Z} [i])$, which acts (projectively) on complex hyperbolic space of degree two, in the same way that $\operatorname {SL} (2,\mathbb {Z} )$ acts (projectively) on real hyperbolic space of dimension two. In 2005 Gábor Francsics and Peter Lax computed an explicit fundamental domain for the action of this group on $H_{\mathbb {C} }^{2}$.[16] A further example is $\operatorname {SU} (1,1;\mathbb {C} )$, which is isomorphic to $\operatorname {SL} (2,\mathbb {R} )$. Important subgroups In physics the special unitary group is used to represent bosonic symmetries. In theories of symmetry breaking it is important to be able to find the subgroups of the special unitary group. Subgroups of SU(n) that are important in GUT physics are, for p > 1, n − p > 1, $\operatorname {SU} (n)\supset \operatorname {SU} (p)\times \operatorname {SU} (n-p)\times \operatorname {U} (1),$ where × denotes the direct product and U(1), known as the circle group, is the multiplicative group of all complex numbers with absolute value 1. For completeness, there are also the orthogonal and symplectic subgroups, ${\begin{aligned}\operatorname {SU} (n)&\supset \operatorname {SO} (n),\\\operatorname {SU} (2n)&\supset \operatorname {Sp} (n).\end{aligned}}$ Since the rank of SU(n) is n − 1 and of U(1) is 1, a useful check is that the sum of the ranks of the subgroups is less than or equal to the rank of the original group. SU(n) is a subgroup of various other Lie groups, ${\begin{aligned}\operatorname {SO} (2n)&\supset \operatorname {SU} (n)\\\operatorname {Sp} (n)&\supset \operatorname {SU} (n)\\\operatorname {Spin} (4)&=\operatorname {SU} (2)\times \operatorname {SU} (2)\\\operatorname {E} _{6}&\supset \operatorname {SU} (6)\\\operatorname {E} _{7}&\supset \operatorname {SU} (8)\\\operatorname {G} _{2}&\supset \operatorname {SU} (3)\end{aligned}}$ See spin group, and simple Lie groups for E6, E7, and G2. There are also the accidental isomorphisms: SU(4) = Spin(6), SU(2) = Spin(3) = Sp(1),[lower-alpha 4] and U(1) = Spin(2) = SO(2). One may finally mention that SU(2) is the double covering group of SO(3), a relation that plays an important role in the theory of rotations of 2-spinors in non-relativistic quantum mechanics. The group SU(1,1) $SU(1,1)=\left\{{\begin{pmatrix}u&v\\v^{*}&u^{*}\end{pmatrix}}\in M(2,\mathbb {C} ):uu^{*}-vv^{*}=1\right\}~,~$ where $~u^{*}~$ denotes the complex conjugate of the complex number u. This group is isomorphic to SL(2,ℝ) and Spin(2,1),[17] where the numbers separated by a comma refer to the signature of the quadratic form preserved by the group. The expression $~uu^{*}-vv^{*}~$ in the definition of SU(1,1) is an Hermitian form which becomes an isotropic quadratic form when u and v are expanded with their real components. An early appearance of this group was as the "unit sphere" of coquaternions, introduced by James Cockle in 1852.
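The isomorphism with SL(2,ℝ) can be seen concretely: conjugating by a Cayley-type matrix carries real matrices of determinant 1 into matrices of the SU(1,1) shape above. A numerical sketch (the particular matrix $C$ is one standard choice, and $g$ is an arbitrary illustrative element):

```python
import numpy as np

C = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)   # Cayley-type conjugating matrix
g = np.array([[2.0, 3.0], [1.0, 2.0]])           # an element of SL(2,R): det g = 1

m = C @ g @ np.linalg.inv(C)
u, v = m[0, 0], m[0, 1]

# m has the form [[u, v], [v*, u*]] with uu* - vv* = 1, i.e. m lies in SU(1,1)
print(np.allclose(m[1, 0], np.conj(v)), np.allclose(m[1, 1], np.conj(u)))  # True True
print(np.isclose(abs(u)**2 - abs(v)**2, 1.0))                              # True
```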
Let $j={\begin{bmatrix}0&1\\1&0\end{bmatrix}}\,,\quad k={\begin{bmatrix}1&\;~0\\0&-1\end{bmatrix}}\,,\quad i={\begin{bmatrix}\;~0&1\\-1&0\end{bmatrix}}~.$ Then $~j\,k={\begin{bmatrix}0&-1\\1&\;~0\end{bmatrix}}=-i~,~$ $~i\,j\,k=I_{2}\equiv {\begin{bmatrix}1&0\\0&1\end{bmatrix}}~,~$ the 2×2 identity matrix, $~k\,i=j~,$ and $\;i\,j=k\;,$ and the elements i, j, and k all anticommute, as in quaternions. Also $i$ is still a square root of $-I_{2}$ (the negative of the identity matrix), whereas $j$ and $k$ are not, since $~j^{2}=k^{2}=+I_{2}~$, unlike in quaternions. For both quaternions and coquaternions, all scalar quantities are treated as implicit multiples of $I_{2}$ and notated as 1. The coquaternion $~q=w+x\,i+y\,j+z\,k~$ with scalar part w has conjugate $~q^{*}=w-x\,i-y\,j-z\,k~$ similar to Hamilton's quaternions. The quadratic form is $~q\,q^{*}=w^{2}+x^{2}-y^{2}-z^{2}~.$ Note that the 2-sheet hyperboloid $\left\{xi+yj+zk:x^{2}-y^{2}-z^{2}=1\right\}$ corresponds to the imaginary units in the algebra, so that any point p on this hyperboloid can be used as a pole of a sinusoidal wave according to Euler's formula. The hyperboloid is stable under SU(1,1), illustrating the isomorphism with Spin(2,1). The variability of the pole of a wave, as noted in studies of polarization, might view elliptical polarization as an exhibit of the elliptical shape of a wave with pole $~p\neq \pm i~$. The Poincaré sphere model used since 1892 has been compared to a 2-sheet hyperboloid model,[18] and the practice of SU(1,1) interferometry has been introduced. When an element of SU(1,1) is interpreted as a Möbius transformation, it leaves the unit disk stable, so this group represents the motions of the Poincaré disk model of hyperbolic plane geometry. Indeed, for a point [z, 1] in the complex projective line, the action of SU(1,1) is given by ${\bigl [}\;z,\;1\;{\bigr ]}\,{\begin{pmatrix}u&v\\v^{*}&u^{*}\end{pmatrix}}=[\;u\,z+v^{*},\,v\,z+u^{*}\;]\,=\,\left[\;{\frac {uz+v^{*}}{vz+u^{*}}},\,1\;\right]$ since in projective coordinates $(\;u\,z+v^{*},\;v\,z+u^{*}\;)\thicksim \left(\;{\frac {\,u\,z+v^{*}\,}{v\,z+u^{*}}},\;1\;\right)~.$ Writing $\;suv+{\overline {suv}}=2\,\Re {\mathord {\bigl (}}\,suv\,{\bigr )}\;,$ complex number arithmetic shows ${\bigl |}u\,z+v^{*}{\bigr |}^{2}=S+z\,z^{*}\quad {\text{ and }}\quad {\bigl |}v\,z+u^{*}{\bigr |}^{2}=S+1~,$ where $~S=v\,v^{*}\left(z\,z^{*}+1\right)+2\,\Re {\mathord {\bigl (}}\,uvz\,{\bigr )}~.$ Therefore, $~z\,z^{*}<1\implies {\bigl |}uz+v^{*}{\bigr |}<{\bigl |}\,v\,z+u^{*}\,{\bigr |}~$ so that their ratio lies in the open disk.[19] See also • Unitary group • Projective special unitary group, PSU(n) • Orthogonal group • Generalizations of Pauli matrices • Representation theory of SU(2) Footnotes 1. For a characterization of U(n) and hence SU(n) in terms of preservation of the standard inner product on $\mathbb {C} ^{n}$, see Classical group. 2. For an explicit description of the homomorphism SU(2) → SO(3), see Connection between SO(3) and SU(2). 3. So fewer than 1⁄6 of all the $f_{abc}$ are non-vanishing. 4. Sp(n) is the compact real form of $\operatorname {Sp} (2n,\mathbb {C} )$. It is sometimes denoted USp(2n). The dimension of the Sp(n)-matrices is 2n × 2n. Citations 1. Halzen, Francis; Martin, Alan (1984). Quarks & Leptons: An Introductory Course in Modern Particle Physics. John Wiley & Sons. ISBN 0-471-88741-2. 2. Hall 2015, Proposition 13.11 3. Wybourne, B.G. (1974). Classical Groups for Physicists. Wiley-Interscience. ISBN 0471965057. 4. Hall 2015 Proposition 3.24 5. Hall 2015 Exercise 1.5 6.
Savage, Alistair. "LieGroups" (PDF). MATH 4144 notes. 7. Hall 2015 Proposition 3.24 8. Hall 2015 Proposition 13.11 9. Hall 2015 Section 13.2 10. Hall 2015 Chapter 6 11. Rosen, S P (1971). "Finite Transformations in Various Representations of SU(3)". Journal of Mathematical Physics. 12 (4): 673–681. Bibcode:1971JMP....12..673R. doi:10.1063/1.1665634.; Curtright, T L; Zachos, C K (2015). "Elementary results for the fundamental representation of SU(3)". Reports on Mathematical Physics. 76 (3): 401–404. arXiv:1508.00868. Bibcode:2015RpMP...76..401C. doi:10.1016/S0034-4877(15)30040-9. S2CID 119679825. 12. Hall 2015 Proposition 3.24 13. Hall 2015 Section 3.6 14. Hall 2015 Section 7.7.1 15. Hall 2015 Section 8.10.1 16. Francsics, Gabor; Lax, Peter D. (September 2005). "An explicit fundamental domain for the Picard modular group in two complex dimensions". arXiv:math/0509708. 17. Gilmore, Robert (1974). Lie Groups, Lie Algebras and some of their Applications. John Wiley & Sons. pp. 52, 201–205. MR 1275599. 18. Mota, R.D.; Ojeda-Guillén, D.; Salazar-Ramírez, M.; Granados, V.D. (2016). "SU(1,1) approach to Stokes parameters and the theory of light polarization". Journal of the Optical Society of America B. 33 (8): 1696–1701. arXiv:1602.03223. Bibcode:2016JOSAB..33.1696M. doi:10.1364/JOSAB.33.001696. S2CID 119146980. 19. Siegel, C.L. (1971). Topics in Complex Function Theory. Vol. 2. Translated by Shenitzer, A.; Tretkoff, M. Wiley-Interscience. pp. 13–15. ISBN 0-471-79080-X. References • Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666 • Iachello, Francesco (2006), Lie Algebras and Applications, Lecture Notes in Physics, vol. 708, Springer, ISBN 3540362363
Three utilities problem The classical mathematical puzzle known as the three utilities problem or sometimes water, gas and electricity asks for non-crossing connections to be drawn between three houses and three utility companies in the plane. When posing it in the early 20th century, Henry Dudeney wrote that it was already an old problem. It is an impossible puzzle: it is not possible to connect all nine lines without crossing. Versions of the problem on nonplanar surfaces such as a torus or Möbius strip, or that allow connections to pass through other houses or utilities, can be solved.
[Figure: two views of the utility graph, also known as the Thomsen graph or $K_{3,3}$.]
This puzzle can be formalized as a problem in topological graph theory by asking whether the complete bipartite graph $K_{3,3}$, with vertices representing the houses and utilities and edges representing their connections, has a graph embedding in the plane. The impossibility of the puzzle corresponds to the fact that $K_{3,3}$ is not a planar graph. Multiple proofs of this impossibility are known, and form part of the proof of Kuratowski's theorem characterizing planar graphs by two forbidden subgraphs, one of which is $K_{3,3}$. The question of minimizing the number of crossings in drawings of complete bipartite graphs is known as Turán's brick factory problem, and for $K_{3,3}$ the minimum number of crossings is one. $K_{3,3}$ is a graph with six vertices and nine edges, often referred to as the utility graph in reference to the problem.[1] It has also been called the Thomsen graph after 19th-century chemist Julius Thomsen. It is a well-covered graph, the smallest triangle-free cubic graph, and the smallest non-planar minimally rigid graph. History A review of the history of the three utilities problem is given by Kullman (1979). He states that most published references to the problem characterize it as "very ancient".[2] In the earliest publication found by Kullman, Henry Dudeney (1917) names it "water, gas, and electricity".
However, Dudeney states that the problem is "as old as the hills...much older than electric lighting, or even gas".[3] Dudeney also published the same puzzle previously, in The Strand Magazine in 1913.[4] A competing claim of priority goes to Sam Loyd, who was quoted by his son in a posthumous biography as having published the problem in 1900.[5] Another early version of the problem involves connecting three houses to three wells.[6] It is stated similarly to a different (and solvable) puzzle that also involves three houses and three fountains, with all three fountains and one house touching a rectangular wall; the puzzle again involves making non-crossing connections, but only between three designated pairs of houses and wells or fountains, as in modern numberlink puzzles.[7] Loyd's puzzle "The Quarrelsome Neighbors" similarly involves connecting three houses to three gates by three non-crossing paths (rather than nine as in the utilities problem); one house and the three gates are on the wall of a rectangular yard, which contains the other two houses within it.[8] As well as in the three utilities problem, the graph $K_{3,3}$ appears in late 19th-century and early 20th-century publications both in early studies of structural rigidity[9][10] and in chemical graph theory, where Julius Thomsen proposed it in 1886 for the then-uncertain structure of benzene.[11] In honor of Thomsen's work, $K_{3,3}$ is sometimes called the Thomsen graph.[12] Statement The three utilities problem can be stated as follows: Suppose three houses each need to be connected to the water, gas, and electricity companies, with a separate line from each house to each company. Is there a way to make all nine connections without any of the lines crossing each other? The problem is an abstract mathematical puzzle which imposes constraints that would not exist in a practical engineering situation. Its mathematical formalization is part of the field of topological graph theory which studies the embedding of graphs on surfaces. An important part of the puzzle, but one that is often not stated explicitly in informal wordings of the puzzle, is that the houses, companies, and lines must all be placed on a two-dimensional surface with the topology of a plane, and that the lines are not allowed to pass through other buildings; sometimes this is enforced by showing a drawing of the houses and companies, and asking for the connections to be drawn as lines on the same drawing.[13][14] In more formal graph-theoretic terms, the problem asks whether the complete bipartite graph $K_{3,3}$ is a planar graph. This graph has six vertices in two subsets of three: one vertex for each house, and one for each utility. It has nine edges, one edge for each of the pairings of a house with a utility, or more abstractly one edge for each pair of a vertex in one subset and a vertex in the other subset. Planar graphs are the graphs that can be drawn without crossings in the plane, and if such a drawing could be found, it would solve the three utilities puzzle.[13][14] Puzzle solutions Unsolvability As it is usually presented (on a flat two-dimensional plane), the solution to the utility puzzle is "no": there is no way to make all nine connections without any of the lines crossing each other. In other words, the graph $K_{3,3}$ is not planar. Kazimierz Kuratowski stated in 1930 that $K_{3,3}$ is nonplanar,[15] from which it follows that the problem has no solution. 
Kullman (1979), however, states that "Interestingly enough, Kuratowski did not publish a detailed proof that [ $K_{3,3}$ ] is non-planar".[2] One proof of the impossibility of finding a planar embedding of $K_{3,3}$ uses a case analysis involving the Jordan curve theorem.[16] In this solution, one examines different possibilities for the locations of the vertices with respect to the 4-cycles of the graph and shows that they are all inconsistent with a planar embedding.[17] Alternatively, it is possible to show that any bridgeless bipartite planar graph with $V$ vertices and $E$ edges has $E\leq 2V-4$ by combining the Euler formula $V-E+F=2$ (where $F$ is the number of faces of a planar embedding) with the observation that the number of faces is at most half the number of edges (the vertices around each face must alternate between houses and utilities, so each face has at least four edges, and each edge belongs to exactly two faces). In the utility graph, $E=9$ and $2V-4=8$, so the inequality $E\leq 2V-4$ fails; since the utility graph does not satisfy it, the utility graph cannot be planar.[18] Changing the rules
[Figures: a solution on a Möbius strip; a solution on a torus; a torus allows up to 4 utilities and 4 houses.]
$K_{3,3}$ is a toroidal graph, which means that it can be embedded without crossings on a torus, a surface of genus one.[19] These embeddings solve versions of the puzzle in which the houses and companies are drawn on a coffee mug or other such surface instead of a flat plane.[20] There is even enough additional freedom on the torus to solve a version of the puzzle with four houses and four utilities.[21][5] Similarly, if the three utilities puzzle is presented on a sheet of a transparent material, it may be solved after twisting and gluing the sheet to form a Möbius strip.[22] Another way of changing the rules of the puzzle that would make it solvable, suggested by Henry Dudeney, is to allow utility lines to pass through other houses or utilities than the ones they connect.[3] Properties of the utility graph Beyond the utility puzzle, the same graph $K_{3,3}$ comes up in several other mathematical contexts, including rigidity theory, the classification of cages and well-covered graphs, the study of graph crossing numbers, and the theory of graph minors. Rigidity The utility graph $K_{3,3}$ is a Laman graph, meaning that for almost all placements of its vertices in the plane, there is no way to continuously move its vertices while preserving all edge lengths, other than by a rigid motion of the whole plane, and that none of its spanning subgraphs have the same rigidity property. It is the smallest example of a nonplanar Laman graph.[23] Despite being a minimally rigid graph, it has non-rigid embeddings with special placements for its vertices.[9][24] For general-position embeddings, a polynomial equation describing all possible placements with the same edge lengths has degree 16, meaning that in general there can be at most 16 placements with the same lengths. It is possible to find systems of edge lengths for which up to eight of the solutions to this equation describe realizable placements.[24] Other graph-theoretic properties $K_{3,3}$ is a triangle-free graph, in which every vertex has exactly three neighbors (a cubic graph). Among all such graphs, it is the smallest.
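The non-planarity facts above are easy to confirm computationally. The following sketch (assuming the Python networkx library is available) rebuilds the edge-counting argument and runs a planarity test on the utility graph:

```python
import networkx as nx

G = nx.complete_bipartite_graph(3, 3)   # the utility graph K_{3,3}

V, E = G.number_of_nodes(), G.number_of_edges()
print(V, E, 2 * V - 4)                  # 6 9 8 -> E = 9 exceeds 2V - 4 = 8

is_planar, _ = nx.check_planarity(G)
print(is_planar)                        # False: no planar embedding exists
```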
Being the smallest triangle-free cubic graph, $K_{3,3}$ is the (3,4)-cage, the smallest graph that has three neighbors per vertex and in which the shortest cycle has length four.[25] Like all other complete bipartite graphs, it is a well-covered graph, meaning that every maximal independent set has the same size. In this graph, the only two maximal independent sets are the two sides of the bipartition, which are of equal size. $K_{3,3}$ is one of only seven 3-regular 3-connected well-covered graphs.[26] Generalizations Two important characterizations of planar graphs, Kuratowski's theorem that the planar graphs are exactly the graphs that contain neither $K_{3,3}$ nor the complete graph $K_{5}$ as a subdivision, and Wagner's theorem that the planar graphs are exactly the graphs that contain neither $K_{3,3}$ nor $K_{5}$ as a minor, make use of and generalize the non-planarity of $K_{3,3}$.[27] Pál Turán's "brick factory problem" asks more generally for a formula for the minimum number of crossings in a drawing of the complete bipartite graph $K_{a,b}$ in terms of the numbers of vertices $a$ and $b$ on the two sides of the bipartition. The utility graph $K_{3,3}$ may be drawn with only one crossing, but not with zero crossings, so its crossing number is one.[5][28] References 1. Gries, David; Schneider, Fred B. (1993), "Chapter 19: A theory of graphs", A Logical Approach to Discrete Math, New York: Springer, pp. 423–460, doi:10.1007/978-1-4757-3837-7, ISBN 978-1-4419-2835-1, S2CID 206657798. See p. 437: "$K_{3,3}$ is known as the utility graph". 2. Kullman, David (1979), "The utilities problem", Mathematics Magazine, 52 (5): 299–302, doi:10.1080/0025570X.1979.11976807, JSTOR 2689782 3. Dudeney, Henry (1917), "Problem 251 – Water, Gas, and Electricity", Amusements in mathematics, vol. 100, Thomas Nelson, p. 73, Bibcode:1917Natur.100..302., doi:10.1038/100302a0, S2CID 10245524. The solution given on pp. 200–201 involves passing a line through one of the other houses. 4. Dudeney, Henry (1913), "Perplexities, with some easy puzzles for beginners", The Strand Magazine, vol. 46, p. 110 5. Beineke, Lowell; Wilson, Robin (2010), "The early history of the brick factory problem", The Mathematical Intelligencer, 32 (2): 41–48, doi:10.1007/s00283-009-9120-4, MR 2657999, S2CID 122588849 6. "Puzzle", Successful Farming, vol. 13, p. 50, 1914; "A well and house puzzle", The Youth's Companion, vol. 90, no. 2, p. 392, 1916. 7. "32. The fountain puzzle", The Magician's Own Book, Or, The Whole Art of Conjuring, New York: Dick & Fitzgerald, 1857, p. 276 8. Loyd, Sam (1959), "82: The Quarrelsome Neighbors", in Gardner, Martin (ed.), Mathematical Puzzles of Sam Loyd, Dover Books, p. 79, ISBN 9780486204987 9. Dixon, A. C. (1899), "On certain deformable frameworks", Messenger of Mathematics, 29: 1–21, JFM 30.0622.02 10. Henneberg, L. (1908), "Die graphische Statik der starren Körper", Encyklopädie der Mathematischen Wissenschaften, vol. 4, pp. 345–434. See in particular p. 403. 11. Thomsen, Julius (July 1886), "Die Constitution des Benzols" (PDF), Berichte der Deutschen Chemischen Gesellschaft, 19 (2): 2944–2950, doi:10.1002/cber.188601902285 12. Bollobás, Béla (1998), Modern Graph Theory, Graduate Texts in Mathematics, vol. 184, Springer-Verlag, New York, p. 23, doi:10.1007/978-1-4612-0619-4, ISBN 0-387-98488-7, MR 1633290 13. Harary, Frank (1960), "Some historical and intuitive aspects of graph theory", SIAM Review, 2 (2): 123–131, Bibcode:1960SIAMR...2..123H, doi:10.1137/1002023, MR 0111698 14.
Bóna, Miklós (2011), A Walk Through Combinatorics: An Introduction to Enumeration and Graph Theory, World Scientific, pp. 275–277, ISBN 9789814335232. Bóna introduces the puzzle (in the form of three houses to be connected to three wells) on p. 275, and writes on p. 277 that it "is equivalent to the problem of drawing $K_{3,3}$ on a plane surface without crossings". 15. Kuratowski, Kazimierz (1930), "Sur le problème des courbes gauches en topologie" (PDF), Fundamenta Mathematicae (in French), 15: 271–283, doi:10.4064/fm-15-1-271-283 16. Ayres, W. L. (1938), "Some elementary aspects of topology", The American Mathematical Monthly, 45 (2): 88–92, doi:10.1080/00029890.1938.11990773, JSTOR 2304276, MR 1524194 17. Trudeau, Richard J. (1993), Introduction to Graph Theory, Dover Books on Mathematics, New York: Dover Publications, pp. 68–70, ISBN 978-0-486-67870-2 18. Kappraff, Jay (2001), Connections: The Geometric Bridge Between Art and Science, K & E Series on Knots and Everything, vol. 25, World Scientific, p. 128, ISBN 9789810245863 19. Harary, F. (1964), "Recent results in topological graph theory", Acta Mathematica, 15 (3–4): 405–411, doi:10.1007/BF01897149, hdl:2027.42/41775, MR 0166775, S2CID 123170864; see p. 409. 20. Parker, Matt (2015), Things to Make and Do in the Fourth Dimension: A Mathematician's Journey Through Narcissistic Numbers, Optimal Dating Algorithms, at Least Two Kinds of Infinity, and More, New York: Farrar, Straus and Giroux, pp. 180–181, 191–192, ISBN 978-0-374-53563-6, MR 3753642 21. O’Beirne, T. H. (December 21, 1961), "Christmas puzzles and paradoxes, 51: For boys, men and heroes", New Scientist, vol. 12, no. 266, pp. 751–753 22. Larsen, Mogens Esrom (1994), "Misunderstanding my mazy mazes may make me miserable", in Guy, Richard K.; Woodrow, Robert E. (eds.), Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and its History held at the University of Calgary, Calgary, Alberta, August 1986, MAA Spectrum, Washington, DC: Mathematical Association of America, pp. 289–293, ISBN 0-88385-516-X, MR 1303141. See Figure 7, p. 292. 23. Streinu, Ileana (2005), "Pseudo-triangulations, rigidity and motion planning", Discrete & Computational Geometry, 34 (4): 587–635, doi:10.1007/s00454-005-1184-0, MR 2173930, S2CID 25281202. See p. 600: "Not all generically minimally rigid graphs have embeddings as pseudo-triangulations, because not all are planar graphs. The smallest example is $K_{3,3}$". 24. Walter, D.; Husty, M. L. (2007), "On a nine-bar linkage, its possible configurations and conditions for paradoxical mobility" (PDF), in Merlet, Jean-Pierre; Dahan, Marc (eds.), 12th World Congress on Mechanism and Machine Science (IFToMM 2007), International Federation for the Promotion of Mechanism and Machine Science 25. Tutte, W. T. (1947), "A family of cubical graphs", Proceedings of the Cambridge Philosophical Society, 43 (4): 459–474, Bibcode:1947PCPS...43..459T, doi:10.1017/s0305004100023720, MR 0021678, S2CID 123505185 26. Campbell, S. R.; Ellingham, M. N.; Royle, Gordon F. (1993), "A characterisation of well-covered cubic graphs", Journal of Combinatorial Mathematics and Combinatorial Computing, 13: 193–212, MR 1220613 27. Little, Charles H. C. (1976), "A theorem on planar graphs", in Casse, Louis R. A.; Wallis, Walter D. (eds.), Combinatorial Mathematics IV: Proceedings of the Fourth Australian Conference Held at the University of Adelaide August 27–29, 1975, Lecture Notes in Mathematics, vol. 560, Springer, pp. 
136–141, doi:10.1007/BFb0097375, MR 0427121 28. Pach, János; Sharir, Micha (2009), "5.1 Crossings—the Brick Factory Problem", Combinatorial Geometry and Its Algorithmic Applications: The Alcalá Lectures, Mathematical Surveys and Monographs, vol. 152, American Mathematical Society, pp. 126–127 External links • 3 Utilities Puzzle at Cut-the-knot • The Utilities Puzzle explained and "solved" at Archimedes-lab.org • Weisstein, Eric W., "Utility graph", MathWorld
Suanpan The suanpan (simplified Chinese: 算盘; traditional Chinese: 算盤; pinyin: suànpán), also spelled suan pan or souanpan,[1][2] is an abacus of Chinese origin first described in a 190 CE book of the Eastern Han Dynasty, namely Supplementary Notes on the Art of Figures written by Xu Yue. However, the exact design of this suanpan is not known.[3] Usually, a suanpan is about 20 cm (8 in) tall and it comes in various widths depending on the application. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads on each rod in the bottom deck. The beads are usually rounded and made of a hardwood. The beads are counted by moving them up or down towards the beam. The suanpan can be reset to the starting position instantly by a quick jerk around the horizontal axis to spin all the beads away from the horizontal beam at the center. Suanpans can be used for functions other than counting. Unlike the simple counting board used in elementary schools, very efficient suanpan techniques have been developed to do multiplication, division, addition, subtraction, square root and cube root operations at high speed. The modern suanpan has 4+1 beads, colored beads to indicate position, and a clear-all button. When the clear-all button is pressed, two mechanical levers push the top-row beads to the top position and the bottom-row beads to the bottom position, thus clearing all numbers to zero. This replaces clearing the beads by hand, or quickly rotating the suanpan around its horizontal center line to clear the beads by centrifugal force. History The word "abacus" was first mentioned by Xu Yue (160–220) in his book suanshu jiyi (算数记遗), or Notes on Traditions of Arithmetic Methods, in the Han Dynasty. As it described, the original abacus had five beads (suan zhu) bunched by a stick in each column, separated by a transverse rod, and arrayed in a wooden rectangular box. One bead in the upper part represents five, and each of the four in the lower part represents one. People move the beads to do the calculation. The long scroll Along the River During Qing Ming Festival painted by Zhang Zeduan (1085–1145) during the Song Dynasty (960–1279) might contain a suanpan beside an account book and doctor's prescriptions on the counter of an apothecary. However, the identification of the object as an abacus is a matter of some debate.[4] A 5+1 suanpan appeared in the Ming dynasty: an illustration in a 1573 book on the suanpan shows a suanpan with one bead on top and five beads at the bottom. The evident similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, as there is strong evidence of a trade relationship between the Roman Empire and China. However, no direct connection can be demonstrated, and the similarity of the abaci could be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Japanese models) has 4 plus 1 bead per decimal place, the older Chinese suanpan has 5 plus 2, allowing for less challenging arithmetic algorithms. Instead of running on wires as in the Chinese and Japanese models, the beads of the Roman model run in grooves, presumably a more reliable arrangement, since the wires could be bent. Another possible source of the suanpan is Chinese counting rods, which operated with a place-value decimal system with an empty spot as zero. Beads There are two types of beads on the suanpan, those in the lower deck, below the separator beam, and those in the upper deck above it.
The ones in the lower deck are sometimes called earth beads or water beads, and carry a value of 1 in their column. The ones in the upper deck are sometimes called heaven beads and carry a value of 5 in their column. The columns are much like the places in Indian numerals: one of the columns, usually the rightmost, represents the ones place; to the left of it are the tens, hundreds, thousands place, and so on, and if there are any columns to the right of it, they are the tenths place, hundredths place, and so on. The suanpan is a 2:5 abacus: two heaven beads and five earth beads. If one compares the suanpan to the soroban, which is a 1:4 abacus, one might think there are two "extra" beads in each column. In fact, to represent decimal numbers and add or subtract such numbers, one strictly needs only one upper bead and four lower beads on each column. Some "old" methods to multiply or divide decimal numbers use those extra beads, as in the "Extra Bead technique" or "Suspended Bead technique".[5] The most mysterious and seemingly superfluous fifth lower bead, likely inherited from counting rods as suggested by the image above, was used to simplify and speed up addition and subtraction somewhat, as well as to decrease the chances of error.[6] Its use was demonstrated, for example, in the first book devoted entirely to suanpan: Computational Methods with the Beads in a Tray (Pánzhū Suànfǎ 盤珠算法) by Xú Xīnlǔ 徐心魯 (1573, Late Ming Dynasty).[7] The following two animations show the details of this particular usage:[8] • Use of the 5th lower bead in addition according to the Panzhu Suanfa • Use of the 5th lower bead in subtraction according to the Panzhu Suanfa The beads and rods are often lubricated to ensure quick, smooth motion. Calculating on a suanpan At the end of a decimal calculation on a suanpan, all five beads in the lower deck are never left moved up; whenever this happens during a calculation, the five beads are pushed back down and one carry bead in the top deck takes their place. Similarly, if two beads in the top deck are pushed down, they are pushed back up, and one carry bead in the lower deck of the next column to the left is moved up. The result of the computation is read off from the beads clustered near the separator beam between the upper and lower deck. Division There exist different methods to perform division on the suanpan. Some of them require the use of the so-called "Chinese division table".[9]
Chinese division table (each row gives the rules for one divisor; the entry under a dividend digit tells how that digit is rewritten, and after the "advance 1" entry the cycle repeats):
• Divisor 一 1: 1 → 进一 advance 1.
• Divisor 二 2: 1 → 添作五 replace by 5; 2 → 进一 advance 1.
• Divisor 三 3: 1 → 三十一 31; 2 → 六十二 62; 3 → 进一 advance 1.
• Divisor 四 4: 1 → 二十二 22; 2 → 添作五 replace by 5; 3 → 七十二 72; 4 → 进一 advance 1.
• Divisor 五 5: 1 → 添作二 replace by 2; 2 → 添作四 replace by 4; 3 → 添作六 replace by 6; 4 → 添作八 replace by 8; 5 → 进一 advance 1.
• Divisor 六 6: 1 → 下加四 below add 4; 2 → 三十二 32; 3 → 添作五 replace by 5; 4 → 六十四 64; 5 → 八十二 82; 6 → 进一 advance 1.
• Divisor 七 7: 1 → 下加三 below add 3; 2 → 下加六 below add 6; 3 → 四十二 42; 4 → 五十五 55; 5 → 七十一 71; 6 → 八十四 84; 7 → 进一 advance 1.
• Divisor 八 8: 1 → 下加二 below add 2; 2 → 下加四 below add 4; 3 → 下加六 below add 6; 4 → 添作五 replace by 5; 5 → 六十二 62; 6 → 七十四 74; 7 → 八十六 86; 8 → 进一 advance 1.
• Divisor 九 9: 1 → 下加一 below add 1; 2 → 下加二 below add 2; 3 → 下加三 below add 3; 4 → 下加四 below add 4; 5 → 下加五 below add 5; 6 → 下加六 below add 6; 7 → 下加七 below add 7; 8 → 下加八 below add 8; 9 → 进一 advance 1.
The two most extreme beads, the bottommost earth bead and the topmost heaven bead, are usually not used in addition and subtraction. They are essential (compulsory) in some of the multiplication methods (two of three methods require them) and in one of the division methods (the one using the special division table, Qiuchu 九歸, one amongst three methods).
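The arithmetic capacity of a single rod follows directly from the bead counts: each raised heaven bead contributes 5 and each raised earth bead contributes 1. A small illustrative sketch in Python (the function name and examples are my own):

```python
def rod_values(heaven=2, earth=5):
    # all values one rod can show: 5 per raised heaven bead, 1 per raised earth bead
    return sorted({5 * h + e for h in range(heaven + 1) for e in range(earth + 1)})

print(rod_values())       # 0..15: a 2:5 rod already exceeds a single decimal digit
print(rod_values(1, 4))   # 0..9: a 1:4 rod (soroban-style) holds exactly one digit
```

This is why a 1:4 configuration suffices for plain decimal work, while the extra beads of the 2:5 design give headroom for intermediate results; with the suspended-bead convention described next, the same rod can even reach 20.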
When the intermediate result (in multiplication and division) is larger than 15 (fifteen), the second (extra) upper bead is moved halfway to represent ten (xuanchu, suspended). Thus the same rod can represent up to 20 (compulsory as intermediate steps in traditional suanpan multiplication and division). The mnemonics/readings of the Chinese division method [Qiuchu] have their origin in the use of bamboo sticks [Chousuan], which is one of the reasons that many believe the evolution of the suanpan is independent of the Roman abacus. This Chinese division method (i.e. with division table) was not in use when the Japanese changed their abacus to one upper bead and four lower beads in about the 1920s. Decimal system This 4+1 abacus works as a bi-quinary based number system (the 5+2 abacus is similar but not identical to bi-quinary) in which carries and shifting are similar to the decimal number system. Since each rod represents a digit in a decimal number, the computation capacity of the suanpan is only limited by the number of rods on the suanpan. When a mathematician runs out of rods, another suanpan can be added to the left of the first. In theory, the suanpan can be expanded indefinitely in this way. Modern usage Suanpan arithmetic was still being taught in school in Hong Kong as recently as the late 1960s, and in China into the 1990s. In some less-developed industries, the suanpan (abacus) is still in use as a primary counting device and back-up calculating method. However, when handheld calculators became readily available, school children's willingness to learn the use of the suanpan decreased dramatically. In the early days of handheld calculators, news of suanpan operators beating electronic calculators in arithmetic competitions in both speed and accuracy often appeared in the media. Early electronic calculators could only handle 8 to 10 significant digits, whereas suanpans can be built to virtually limitless precision. But when the functionality of calculators improved beyond simple arithmetic operations, most people realized that the suanpan could never compute higher functions – such as those in trigonometry – faster than a calculator. As digital calculators proved more efficient and user-friendly, their broader capabilities attracted adoption by large-scale and technology-related industries. Nowadays, even though calculators have become more affordable and convenient, suanpans are still commonly used in China. Many parents still tend to send their children to private tutors or school- and government-sponsored after-school activities to learn bead arithmetic as a learning aid and a stepping stone to faster and more accurate mental arithmetic, or as a matter of cultural preservation. Speed competitions are still held. In 2013, the United Nations Educational, Scientific and Cultural Organization inscribed Chinese Zhusuan, the knowledge and practices of mathematical calculation through the suanpan (abacus), on the Representative List of the Intangible Cultural Heritage of Humanity.[10] Zhusuan is named after the Chinese term for the abacus and has been described as a fifth great invention of China.[11] Suanpans and zhusuan are still widely used in China and Japan, as well as in a few places in Canada and the United States. With its historical value, zhusuan has come to symbolize a traditional cultural identity.
It has contributed to the advancement of calculating techniques and to intellectual development, and it remains closely tied to cultural practices such as traditional architecture and folk customs. Owing to their operational simplicity and to tradition, suanpans are still in general use in small-scale shops. In mainland China, accountants and financial personnel formerly had to pass graded examinations in bead arithmetic before they were qualified. Since about 2002–2004, this requirement has been entirely replaced by computer accounting. Notes 1. Schmid, Hermann (1974). Decimal Computation (1 ed.). Binghamton, New York, USA: John Wiley & Sons. ISBN 0-471-76180-X. 2. Schmid, Hermann (1983) [1974]. Decimal Computation (1 (reprint) ed.). Malabar, Florida, USA: Robert E. Krieger Publishing Company. ISBN 0-89874-318-4. 3. Peng Yoke Ho, page 71 4. Martzloff, p. 216 5. "算盤 Traditional Multiplication Techniques for Chinese Abacus - Chinese Suan Pan". Webhome.idirect.com. Retrieved 2013-03-26. 6. Chen, Yifu (2018). "The Education of Abacus Addition in China and Japan Prior to the Early 20th Century". Computations and Computing Devices in Mathematics Education Before the Advent of Electronic Calculators. Mathematics Education in the Digital Era. Vol. 11. Springer. pp. 243–. doi:10.1007/978-3-319-73396-8_10. ISBN 978-3-319-73396-8. 7. Suzuki, Hisao (1982). "Zhusuan addition and subtraction methods in China". Kokushikan University School of Political Science and Economics (in Japanese). 57 – via Kokushikan. 8. "A short guide to the 5th lower bead" (PDF). 算盤 Abacus: Mystery of the bead. 2020. Archived (PDF) from the original on 2021-01-26. Retrieved 2021-07-07. 9. "算盤 Short Division on a Chinese Abacus - Chinese Suan Pan". Webhome.idirect.com. Retrieved 2013-03-26. 10. "UNESCO - Chinese Zhusuan, knowledge and practices of mathematical calculation through the abacus". ich.unesco.org. Retrieved 2021-03-09. 11. Shi, Jiaxin; Chen, Jianfei; Zhu, Jun; Sun, Shengyang; Luo, Yucen; Gu, Yihong; Zhou, Yuhao (2017). "ZhuSuan: A Library for Bayesian Deep Learning". arXiv:1709.05870 [stat.ML]. See also • Abacus • Counting rods • Soroban References • Peng Yoke Ho (2000). Li, Qi and Shu: An Introduction to Science and Civilization in China. Courier Dover Publications. ISBN 0-486-41445-0. • Martzloff (2006). A History of Chinese Mathematics. Springer-Verlag. ISBN 3-540-33782-2. External links • Suanpan Tutor - See the steps in addition and subtraction • A Traditional Suan Pan Technique for Multiplication • Hex to Suanpan
Sub-Stonean space In topology, a sub-Stonean space is a locally compact Hausdorff space such that any two open σ-compact disjoint subsets have disjoint compact closures. A related notion is that of an F-space, introduced by Gillman & Henriksen (1956): a completely regular Hausdorff space for which every finitely generated ideal of the ring of real-valued continuous functions is principal, or equivalently every real-valued continuous function $f$ can be written as $f=g|f|$ for some real-valued continuous function $g$. When dealing with compact spaces, the two concepts are the same, but in general, the concepts are different. The relationship between sub-Stonean spaces and F-spaces is studied in Henriksen and Woods, 1989. Examples Rickart spaces and the corona sets of locally compact σ-compact Hausdorff spaces are sub-Stonean spaces. See also • Extremally disconnected space • F-space References • Gillman, Leonard; Henriksen, Melvin (1956), "Rings of continuous functions in which every finitely generated ideal is principal", Transactions of the American Mathematical Society, 82 (2): 366–391, doi:10.2307/1993054, ISSN 0002-9947, JSTOR 1993054, MR 0078980 • Grove, Karsten; Pedersen, Gert Kjærgård (1984), "Sub-Stonean spaces and corona sets", Journal of Functional Analysis, 56 (1): 124–143, doi:10.1016/0022-1236(84)90028-4, ISSN 0022-1236, MR 0735707 • Henriksen, Melvin; Woods, R. G. (1989), "F-Spaces and Substonean Spaces: General Topology as a Tool in Functional Analysis", Annals of the New York Academy of Sciences, 552 (1 Papers on General topology and related category theory and topological algebra): 60–68, doi:10.1111/j.1749-6632.1989.tb22386.x, ISSN 1749-6632, MR 1020774
Subbase In topology, a subbase (or subbasis, prebase, prebasis) for a topological space $X$ with topology $\tau $ is a subcollection $B$ of $\tau $ that generates $\tau ,$ in the sense that $\tau $ is the smallest topology containing $B$ as open sets. A slightly different definition is used by some authors, and there are other useful equivalent formulations of the definition; these are discussed below. Definition Let $X$ be a topological space with topology $\tau .$ A subbase of $\tau $ is usually defined as a subcollection $B$ of $\tau $ satisfying one of the two following equivalent conditions: 1. The subcollection $B$ generates the topology $\tau .$ This means that $\tau $ is the smallest topology containing $B$: any topology $\tau ^{\prime }$ on $X$ containing $B$ must also contain $\tau .$ 2. The collection of open sets consisting of all finite intersections of elements of $B,$ together with the set $X,$ forms a basis for $\tau .$[1] This means that every proper open set in $\tau $ can be written as a union of finite intersections of elements of $B.$ Explicitly, given a point $x$ in an open set $U\subsetneq X,$ there are finitely many sets $S_{1},\ldots ,S_{n}$ of $B,$ such that the intersection of these sets contains $x$ and is contained in $U.$ (If we use the nullary intersection convention, then there is no need to include $X$ in the second definition.) For any subcollection $S$ of the power set $\wp (X),$ there is a unique topology having $S$ as a subbase. In particular, the intersection of all topologies on $X$ containing $S$ satisfies this condition. In general, however, there is no unique subbasis for a given topology. Thus, we can start with a fixed topology and find subbases for that topology, and we can also start with an arbitrary subcollection of the power set $\wp (X)$ and form the topology generated by that subcollection. We can freely use either equivalent definition above; indeed, in many cases, one of the two conditions is more useful than the other. Alternative definition Less commonly, a slightly different definition of subbase is given which requires that the subbase ${\mathcal {B}}$ cover $X.$[2] In this case, $X$ is the union of all sets contained in ${\mathcal {B}}.$ This means that there can be no confusion regarding the use of nullary intersections in the definition. However, this definition is not always equivalent to the two definitions above. In other words, there exist topological spaces $(X,\tau )$ with a subset ${\mathcal {B}}\subseteq \tau ,$ such that $\tau $ is the smallest topology containing ${\mathcal {B}},$ yet ${\mathcal {B}}$ does not cover $X$ (such an example is given below). In practice, this is a rare occurrence; e.g. 
a subbase of a space that has at least two points and satisfies the T1 separation axiom must be a cover of that space, but, as seen below, to prove the Alexander Subbase Theorem[3] one should assume that ${\mathcal {B}}$ covers $X.$ Examples The topology generated by any subset ${\mathcal {S}}\subseteq \{\varnothing ,X\}$ (including by the empty set ${\mathcal {S}}:=\varnothing $) is equal to the trivial topology $\{\varnothing ,X\}.$ If $\tau $ is a topology on $X$ and ${\mathcal {B}}$ is a basis for $\tau $ then the topology generated by ${\mathcal {B}}$ is $\tau .$ Thus any basis ${\mathcal {B}}$ for a topology $\tau $ is also a subbasis for $\tau .$ If ${\mathcal {S}}$ is any subset of $\tau $ then the topology generated by ${\mathcal {S}}$ will be a subset of $\tau .$ The usual topology on the real numbers $\mathbb {R} $ has a subbase consisting of all semi-infinite open intervals either of the form $(-\infty ,a)$ or $(b,\infty ),$ where $a$ and $b$ are real numbers. Together, these generate the usual topology, since the intersections $(a,b)=(-\infty ,b)\cap (a,\infty )$ for $a<b$ generate the usual topology. A second subbase is formed by taking the subfamily where $a$ and $b$ are rational. The second subbase generates the usual topology as well, since the open intervals $(a,b)$ with $a,$ $b$ rational, are a basis for the usual Euclidean topology. The subbase consisting of all semi-infinite open intervals of the form $(-\infty ,a)$ alone, where $a$ is a real number, does not generate the usual topology. The resulting topology does not satisfy the T1 separation axiom, since if $a<b$ every open set containing $b$ also contains $a.$ The initial topology on $X$ defined by a family of functions $f_{i}:X\to Y_{i},$ where each $Y_{i}$ has a topology, is the coarsest topology on $X$ such that each $f_{i}$ is continuous. Because continuity can be defined in terms of the inverse images of open sets, this means that the initial topology on $X$ is given by taking all $f_{i}^{-1}(U),$ where $U$ ranges over all open subsets of $Y_{i},$ as a subbasis. Two important special cases of the initial topology are the product topology, where the family of functions is the set of projections from the product to each factor, and the subspace topology, where the family consists of just one function, the inclusion map. The compact-open topology on the space of continuous functions from $X$ to $Y$ has for a subbase the set of functions $V(K,U)=\{f:X\to Y\mid f(K)\subseteq U\}$ where $K\subseteq X$ is compact and $U$ is an open subset of $Y.$ Suppose that $(X,\tau )$ is a Hausdorff topological space with $X$ containing two or more elements (for example, $X=\mathbb {R} $ with the Euclidean topology). Let $Y\in \tau $ be any non-empty open subset of $(X,\tau )$ (for example, $Y$ could be a non-empty bounded open interval in $\mathbb {R} $) and let $\nu $ denote the subspace topology on $Y$ that $Y$ inherits from $(X,\tau )$ (so $\nu \subseteq \tau $). Then the topology generated by $\nu $ on $X$ is equal to the union $\{X\}\cup \nu $ (see this footnote for an explanation),[note 1] where $\{X\}\cup \nu \subseteq \tau $ (since $(X,\tau )$ is Hausdorff, equality will hold if and only if $Y=X$). Note that if $Y$ is a proper subset of $X,$ then $\{X\}\cup \nu $ is the smallest topology on $X$ containing $\nu $ yet $\nu $ does not cover $X$ (that is, the union $\bigcup _{V\in \nu }V=Y$ is a proper subset of $X$).
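On a finite set, both steps of the generating process (finite intersections, then arbitrary unions) can be carried out exhaustively. A brute-force sketch in Python (the set $X$ and the two-element subbase below are my own illustrative choices; the nullary intersection convention supplies $X$ itself):

```python
from itertools import chain, combinations

def topology_from_subbase(X, S):
    X = frozenset(X)
    S = [frozenset(s) for s in S]
    # base: all finite intersections of subbase elements (empty intersection = X)
    base = {X}
    for r in range(1, len(S) + 1):
        for subsets in combinations(S, r):
            base.add(frozenset.intersection(*subsets))
    # topology: all unions of base elements (empty union = empty set)
    topology = set()
    for r in range(len(base) + 1):
        for subsets in combinations(base, r):
            topology.add(frozenset(chain.from_iterable(subsets)))
    return topology

for open_set in sorted(topology_from_subbase({1, 2, 3}, [{1, 2}, {2, 3}]),
                       key=lambda s: (len(s), sorted(s))):
    print(sorted(open_set))   # [], [2], [1, 2], [2, 3], [1, 2, 3]
```

The output $\{\varnothing ,\{2\},\{1,2\},\{2,3\},X\}$ is indeed the smallest topology containing the two subbasic sets.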
Results using subbases One nice fact about subbases is that continuity of a function need only be checked on a subbase of the range. That is, if $f:X\to Y$ is a map between topological spaces and if ${\mathcal {B}}$ is a subbase for $Y,$ then $f:X\to Y$ is continuous if and only if $f^{-1}(B)$ is open in $X$ for every $B\in {\mathcal {B}}.$ A net (or sequence) $x_{\bullet }=\left(x_{i}\right)_{i\in I}$ converges to a point $x$ if and only if every subbasic neighborhood of $x$ contains all $x_{i}$ for sufficiently large $i\in I.$ Alexander subbase theorem The Alexander Subbase Theorem is a significant result concerning subbases that is due to James Waddell Alexander II.[3] The corresponding result for basic (rather than subbasic) open covers is much easier to prove. Alexander Subbase Theorem:[3][1] Let $(X,\tau )$ be a topological space. If $X$ has a subbasis ${\mathcal {S}}$ such that every cover of $X$ by elements from ${\mathcal {S}}$ has a finite subcover, then $X$ is compact. The converse to this theorem also holds and it is proven by using ${\mathcal {S}}=\tau $ (since every topology is a subbasis for itself). If $X$ is compact and ${\mathcal {S}}$ is a subbasis for $X,$ every cover of $X$ by elements from ${\mathcal {S}}$ has a finite subcover. Proof Suppose for the sake of contradiction that the space $X$ is not compact (so $X$ is an infinite set), yet every subbasic cover from ${\mathcal {S}}$ has a finite subcover. Let $\mathbb {S} $ denote the set of all open covers of $X$ that do not have any finite subcover of $X.$ Partially order $\mathbb {S} $ by subset inclusion and use Zorn's Lemma to find an element ${\mathcal {C}}\in \mathbb {S} $ that is a maximal element of $\mathbb {S} .$ Observe that: 1. Since ${\mathcal {C}}\in \mathbb {S} ,$ by definition of $\mathbb {S} ,$ ${\mathcal {C}}$ is an open cover of $X$ and there does not exist any finite subset of ${\mathcal {C}}$ that covers $X$ (so in particular, ${\mathcal {C}}$ is infinite). 2. The maximality of ${\mathcal {C}}$ in $\mathbb {S} $ implies that if $V$ is an open set of $X$ such that $V\not \in {\mathcal {C}}$ then ${\mathcal {C}}\cup \{V\}$ has a finite subcover, which must necessarily be of the form $\{V\}\cup {\mathcal {C}}_{V}$ for some finite subset ${\mathcal {C}}_{V}$ of ${\mathcal {C}}$ (this finite subset depends on the choice of $V$). We will begin by showing that ${\mathcal {C}}\cap {\mathcal {S}}$ is not a cover of $X.$ Suppose that ${\mathcal {C}}\cap {\mathcal {S}}$ was a cover of $X,$ which in particular implies that ${\mathcal {C}}\cap {\mathcal {S}}$ is a cover of $X$ by elements of ${\mathcal {S}}.$ The theorem's hypothesis on ${\mathcal {S}}$ implies that there exists a finite subset of ${\mathcal {C}}\cap {\mathcal {S}}$ that covers $X,$ which would simultaneously also be a finite subcover of $X$ by elements of ${\mathcal {C}}$ (since ${\mathcal {C}}\cap {\mathcal {S}}\subseteq {\mathcal {C}}$). But this contradicts ${\mathcal {C}}\in \mathbb {S} ,$ which proves that ${\mathcal {C}}\cap {\mathcal {S}}$ does not cover $X.$ Since ${\mathcal {C}}\cap {\mathcal {S}}$ does not cover $X,$ there exists some $x\in X$ that is not covered by ${\mathcal {C}}\cap {\mathcal {S}}$ (that is, $x$ is not contained in any element of ${\mathcal {C}}\cap {\mathcal {S}}$). 
But since ${\mathcal {C}}$ does cover $X,$ there also exists some $U\in {\mathcal {C}}$ such that $x\in U.$ Since ${\mathcal {S}}$ is a subbasis generating $X$'s topology, from the definition of the topology generated by ${\mathcal {S}},$ there must exist a finite collection of subbasic open sets $S_{1},\ldots ,S_{n}\in {\mathcal {S}}$ such that $x\in S_{1}\cap \cdots \cap S_{n}\subseteq U.$ We will now show by contradiction that $S_{i}\not \in {\mathcal {C}}$ for every $i=1,\ldots ,n.$ If $i$ was such that $S_{i}\in {\mathcal {C}},$ then also $S_{i}\in {\mathcal {C}}\cap {\mathcal {S}}$ so the fact that $x\in S_{i}$ would then imply that $x$ is covered by ${\mathcal {C}}\cap {\mathcal {S}},$ which contradicts how $x$ was chosen (recall that $x$ was chosen specifically so that it was not covered by ${\mathcal {C}}\cap {\mathcal {S}}$). As mentioned earlier, the maximality of ${\mathcal {C}}$ in $\mathbb {S} $ implies that for every $i=1,\ldots ,n,$ there exists a finite subset ${\mathcal {C}}_{S_{i}}$ of ${\mathcal {C}}$ such that$\left\{S_{i}\right\}\cup {\mathcal {C}}_{S_{i}}$ forms a finite cover of $X.$ Define ${\mathcal {C}}_{F}:={\mathcal {C}}_{S_{1}}\cup \cdots \cup {\mathcal {C}}_{S_{n}},$ which is a finite subset of ${\mathcal {C}}.$ Observe that for every $i=1,\ldots ,n,$ $\left\{S_{i}\right\}\cup {\mathcal {C}}_{F}$ is a finite cover of $X$ so let us replace every ${\mathcal {C}}_{S_{i}}$ with ${\mathcal {C}}_{F}.$ Let $\cup {\mathcal {C}}_{F}$ denote the union of all sets in ${\mathcal {C}}_{F}$ (which is an open subset of $X$) and let $Z$ denote the complement of $\cup {\mathcal {C}}_{F}$ in $X.$ Observe that for any subset $A\subseteq X,$ $\{A\}\cup {\mathcal {C}}_{F}$ covers $X$ if and only if $Z\subseteq A.$ In particular, for every $i=1,\ldots ,n,$ the fact that $\left\{S_{i}\right\}\cup {\mathcal {C}}_{F}$ covers $X$ implies that $Z\subseteq S_{i}.$ Since $i$ was arbitrary, we have $Z\subseteq S_{1}\cap \cdots \cap S_{n}.$ Recalling that $S_{1}\cap \cdots \cap S_{n}\subseteq U,$ we thus have $Z\subseteq U,$ which is equivalent to $\{U\}\cup {\mathcal {C}}_{F}$ being a cover of $X.$ Moreover, $\{U\}\cup {\mathcal {C}}_{F}$ is a finite cover of $X$ with $\{U\}\cup {\mathcal {C}}_{F}\subseteq {\mathcal {C}}.$ Thus ${\mathcal {C}}$ has a finite subcover of $X,$ which contradicts the fact that ${\mathcal {C}}\in \mathbb {S} .$ Therefore, the original assumption that $X$ is not compact must be wrong, which proves that $X$ is compact. $\blacksquare $ Although this proof makes use of Zorn's Lemma, the proof does not need the full strength of choice. Instead, it relies on the intermediate Ultrafilter principle.[3] Using this theorem with the subbase for $\mathbb {R} $ above, one can give a very easy proof that bounded closed intervals in $\mathbb {R} $ are compact. More generally, Tychonoff's theorem, which states that the product of non-empty compact spaces is compact, has a short proof if the Alexander Subbase Theorem is used. Proof The product topology on $\prod _{i}X_{i}$ has, by definition, a subbase consisting of cylinder sets that are the inverse projections of an open set in one factor. Given a subbasic family $C$ of the product that does not have a finite subcover, we can partition $C=\cup _{i}C_{i}$ into subfamilies that consist of exactly those cylinder sets corresponding to a given factor space. By assumption, if $C_{i}\neq \varnothing $ then $C_{i}$ does not have a finite subcover. 
Being cylinder sets, this means their projections onto $X_{i}$ have no finite subcover, and since each $X_{i}$ is compact, we can find a point $x_{i}\in X_{i}$ that is not covered by the projections of $C_{i}$ onto $X_{i}.$ But then $\left(x_{i}\right)_{i}\in \prod _{i}X_{i}$ is not covered by $C.$ $\blacksquare $ Note that in the last step we implicitly used the axiom of choice (which is actually equivalent to Zorn's lemma) to ensure the existence of $\left(x_{i}\right)_{i}.$ See also • Base (topology) – Collection of open sets used to define a topology Notes 1. Since $\nu $ is a topology on $Y$ and $Y$ is an open subset of $(X,\tau ),$ it is easy to verify that $\{X\}\cup \nu $ is a topology on $X.$ Since $\nu $ is not a topology on $X,$ $\{X\}\cup \nu $ is clearly the smallest topology on $X$ containing $\nu .$ Citations 1. Rudin 1991, p. 392 Appendix A2. 2. Munkres 2000, p. 82. 3. Muger, Michael (2020). Topology for the Working Mathematician. References • Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129. • Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485. • Munkres, James R. (2000). Topology (Second ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. • Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
Abnormal subgroup In mathematics, specifically group theory, an abnormal subgroup is a subgroup H of a group G such that for all x in G, x lies in the subgroup generated by H and H^x, where H^x denotes the conjugate subgroup xHx^(−1). Here are some facts relating abnormality to other subgroup properties: • Every abnormal subgroup is a self-normalizing subgroup, as well as a contranormal subgroup. • The only normal subgroup that is also abnormal is the whole group. • Every abnormal subgroup is a weakly abnormal subgroup, and every weakly abnormal subgroup is a self-normalizing subgroup. • Every abnormal subgroup is a pronormal subgroup, and hence a weakly pronormal subgroup, a paranormal subgroup, and a polynormal subgroup. References • Fattahi, Abiabdollah (January 1974). "Groups with only normal and abnormal subgroups". Journal of Algebra. Elsevier. 28 (1): 15–19. doi:10.1016/0021-8693(74)90019-2. • Zhang, Q. H. (1996). "Finite groups with only seminormal and abnormal subgroups". J. Math. Study. 29 (4): 10–15. • Zhang, Q. H. (1998). "Finite groups with only ss-quasinormal and abnormal subgroups". Northeast. Math. J. 14 (1): 41–46. • Zhang, Q. H. (1999). "s-Semipermutability and abnormality in finite groups". Comm. Algebra. 27 (9): 4515–4524. doi:10.1080/00927879908826711.
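In a small group the defining condition can be checked exhaustively. Below is a minimal brute-force sketch (an illustration, not taken from the references; the tuple representation of permutations is an arbitrary choice) confirming that the order-2 subgroup generated by a transposition is abnormal in the symmetric group S3:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); permutations of {0, 1, 2} stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated(gens):
    # closure of gens under composition: the subgroup they generate
    sub = set(gens)
    while True:
        new = {compose(a, b) for a in sub for b in sub} - sub
        if not new:
            return sub
        sub |= new

G = set(permutations(range(3)))     # the symmetric group S3
H = {(0, 1, 2), (1, 0, 2)}          # subgroup generated by the transposition (1 2)

# H is abnormal iff every x in G lies in <H, xHx^-1>
abnormal = all(
    x in generated(H | {compose(compose(x, h), inverse(x)) for h in H})
    for x in G
)
print(abnormal)  # True: an order-2 subgroup of S3 is abnormal
```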
Subadditivity In mathematics, subadditivity is a property of a function that states, roughly, that evaluating the function for the sum of two elements of the domain always returns something less than or equal to the sum of the function's values at each element. There are numerous examples of subadditive functions in various areas of mathematics, particularly norms and square roots. Additive maps are special cases of subadditive functions. Definitions A subadditive function is a function $f\colon A\to B$, having a domain A and an ordered codomain B that are both closed under addition, with the following property: $\forall x,y\in A,f(x+y)\leq f(x)+f(y).$ An example is the square root function, having the non-negative real numbers as domain and codomain, since $\forall x,y\geq 0$ we have: ${\sqrt {x+y}}\leq {\sqrt {x}}+{\sqrt {y}}.$ A sequence $\left\{a_{n}\right\},n\geq 1$, is called subadditive if it satisfies the inequality $a_{n+m}\leq a_{n}+a_{m}$ for all m and n. This is a special case of a subadditive function, where the sequence is interpreted as a function on the set of natural numbers. Note that while a concave sequence is subadditive, the converse is false. For example, assign $a_{1},a_{2},\ldots $ random values in $\{0.5,1\}$; then the sequence is subadditive but not concave. Properties Sequences A useful result pertaining to subadditive sequences is the following lemma due to Michael Fekete.[1] Fekete's Subadditive Lemma — For every subadditive sequence ${\left\{a_{n}\right\}}_{n=1}^{\infty }$, the limit $\displaystyle \lim _{n\to \infty }{\frac {a_{n}}{n}}$ exists and is equal to the infimum $\inf {\frac {a_{n}}{n}}$. (The limit may be $-\infty $.) Proof Let $s^{*}:=\inf _{n}{\frac {a_{n}}{n}}$. By definition, $\liminf _{n}{\frac {a_{n}}{n}}\geq s^{*}$. So it suffices to show $\limsup _{n}{\frac {a_{n}}{n}}\leq s^{*}$. If not, then there exist a subsequence $(a_{n_{k}})_{k}$ and $\epsilon >0$ such that ${\frac {a_{n_{k}}}{n_{k}}}>s^{*}+\epsilon $ for all $k$. Take $a_{m}$ such that ${\frac {a_{m}}{m}}<s^{*}+\epsilon /2$. By the infinite pigeonhole principle, we get a subsequence $(a_{n_{k}})_{k}$ whose indices all belong to the same residue class modulo $m$, and so they advance by multiples of $m$. This sequence, continued for long enough, would be forced by subadditivity to dip below the $s^{*}+\epsilon $ slope line, a contradiction. The analogue of Fekete's lemma holds for superadditive sequences as well, that is: $a_{n+m}\geq a_{n}+a_{m}.$ (The limit then may be positive infinity: consider the sequence $a_{n}=\log n!$.) There are extensions of Fekete's lemma that do not require the inequality $a_{n+m}\leq a_{n}+a_{m}$ to hold for all m and n, but only for m and n such that $ {\frac {1}{2}}\leq {\frac {m}{n}}\leq 2.$ Proof Continue the proof as before, until we have just used the infinite pigeonhole principle. Consider the sequence $a_{m},a_{2m},a_{3m},...$. Since $2m/m=2$, we have $a_{2m}\leq 2a_{m}$. Similarly, we have $a_{3m}\leq a_{2m}+a_{m}\leq 3a_{m}$, etc. By the assumption, for any $s,t\in \mathbb {N} $, we can use subadditivity on them if $\ln(s+t)\in [\ln(1.5s),\ln(3s)]=\ln s+[\ln 1.5,\ln 3]$ If we were dealing with continuous variables, then we could use subadditivity to go from $\ln n_{k}$ to $\ln n_{k}+[\ln 1.5,\ln 3]$, then to $\ln n_{k}+\ln 1.5+[\ln 1.5,\ln 3]$, and so on, which covers the entire interval $\ln n_{k}+[\ln 1.5,+\infty )$. Though we don't have continuous variables, we can still cover enough integers to complete the proof.
Let $n_{k}$ be large enough that $\ln(2)>\ln(1.5)+\ln \left({\frac {1.5n_{k}+m}{1.5n_{k}}}\right),$ then let $n'$ be the smallest element of $n_{k}+m\mathbb {Z} $ whose logarithm lies in $\ln n_{k}+[\ln(1.5),\ln(3)]$. By the assumption on $n_{k}$, it's easy to see (draw a picture) that the intervals $\ln n_{k}+[\ln(1.5),\ln(3)]$ and $\ln n'+[\ln(1.5),\ln(3)]$ touch in the middle. Thus, by repeating this process, we reach every element of $n_{k}+m\mathbb {Z} $ whose logarithm lies in $\ln n_{k}+[\ln(1.5),\infty )$. With that, all $a_{n_{k}},a_{n_{k+1}},\ldots $ are forced down as in the previous proof. Moreover, the condition $a_{n+m}\leq a_{n}+a_{m}$ may be weakened as follows: $a_{n+m}\leq a_{n}+a_{m}+\phi (n+m)$ provided that $\phi $ is an increasing function such that the integral $ \int \phi (t)t^{-2}\,dt$ converges (near infinity).[2] There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present.[3][4] Besides, analogues of Fekete's lemma have been proved for subadditive real maps (with additional assumptions) from finite subsets of an amenable group,[5][6][7] and further, of a cancellative left-amenable semigroup.[8] Functions Theorem:[9] — For every measurable subadditive function $f:(0,\infty )\to \mathbb {R} ,$ the limit $\lim _{t\to \infty }{\frac {f(t)}{t}}$ exists and is equal to $\inf _{t>0}{\frac {f(t)}{t}}.$ (The limit may be $-\infty .$) If f is a subadditive function, and if 0 is in its domain, then f(0) ≥ 0. To see this, take the inequality at the top: $f(x)\geq f(x+y)-f(y)$. Hence $f(0)\geq f(0+y)-f(y)=0.$ A concave function $f:[0,\infty )\to \mathbb {R} $ with $f(0)\geq 0$ is also subadditive. To see this, one first observes that $f(x)\geq \textstyle {\frac {y}{x+y}}f(0)+\textstyle {\frac {x}{x+y}}f(x+y)$. Adding this bound for $f(x)$ to the analogous bound for $f(y)$ then verifies that f is subadditive.[10] The negative of a subadditive function is superadditive. Examples in various domains Entropy Entropy plays a fundamental role in information theory and statistical physics, as well as in quantum mechanics in a generalized formulation due to von Neumann. Entropy always appears as a subadditive quantity in all of its formulations: the entropy of a supersystem, or of a set union of random variables, is always less than or equal to the sum of the entropies of its individual components. Additionally, entropy in physics satisfies several stricter inequalities, such as the Strong Subadditivity of Entropy in classical statistical mechanics and its quantum analog. Economics Subadditivity is an essential property of some particular cost functions. It is, generally, a necessary and sufficient condition for a natural monopoly. It implies that production from only one firm is socially less expensive (in terms of average costs) than production of a fraction of the original quantity by an equal number of firms. Economies of scale are represented by subadditive average cost functions. Except in the case of complementary goods, the price of goods (as a function of quantity) must be subadditive. Otherwise, if buying two items separately were cheaper than buying the bundle of both together, then nobody would ever buy the bundle, effectively causing the price of the bundle to "become" the sum of the prices of the two separate items.
This shows that subadditivity of prices is not by itself a sufficient condition for a natural monopoly, since the unit of exchange may not be the actual cost of an item. This situation is familiar to everyone in the political arena, where some minority asserts that the loss of some particular freedom at some particular level of government means that many governments are better, whereas the majority asserts that there is some other correct unit of cost. Finance Subadditivity is one of the desirable properties of coherent risk measures in risk management.[11] The economic intuition behind risk measure subadditivity is that a portfolio risk exposure should, at worst, simply equal the sum of the risk exposures of the individual positions that compose the portfolio. In any other case the effects of diversification would result in a portfolio exposure that is lower than the sum of the individual risk exposures. The lack of subadditivity is one of the main critiques of VaR models which do not rely on the assumption of normality of risk factors. The Gaussian VaR ensures subadditivity: for example, the Gaussian VaR of a portfolio $V$ of two unit long positions at the confidence level $1-p$ is, assuming that the mean portfolio value variation is zero and the VaR is defined as a negative loss, ${\text{VaR}}_{p}\equiv z_{p}\sigma _{\Delta V}=z_{p}{\sqrt {\sigma _{x}^{2}+\sigma _{y}^{2}+2\rho _{xy}\sigma _{x}\sigma _{y}}}$ where $z_{p}$ is the inverse of the normal cumulative distribution function at probability level $p$, $\sigma _{x}^{2},\sigma _{y}^{2}$ are the individual positions returns variances and $\rho _{xy}$ is the linear correlation measure between the two individual positions returns. Since $\rho _{xy}\leq 1$, ${\sqrt {\sigma _{x}^{2}+\sigma _{y}^{2}+2\rho _{xy}\sigma _{x}\sigma _{y}}}\leq \sigma _{x}+\sigma _{y}$ Thus the Gaussian VaR is subadditive for any value of $\rho _{xy}\in [-1,1]$ and, in particular, it equals the sum of the individual risk exposures when $\rho _{xy}=1,$ which is the case of no diversification effects on portfolio risk. Thermodynamics Subadditivity occurs in the thermodynamic properties of non-ideal solutions and mixtures like the excess molar volume and heat of mixing or excess enthalpy. Combinatorics on words A factorial language $L$ is one where if a word is in $L$, then all factors of that word are also in $L$. In combinatorics on words, a common problem is to determine the number $A(n)$ of length-$n$ words in a factorial language. Clearly $A(m+n)\leq A(m)A(n)$, so $\log A(n)$ is subadditive, and hence Fekete's lemma can be used to estimate the growth of $A(n)$.[12] For every $k\geq 1$, sample two strings of length $n$ uniformly at random on the alphabet $1,2,\ldots ,k$. The expected length of the longest common subsequence is a super-additive function of $n$, and thus there exists a number $\gamma _{k}\geq 0$, such that the expected length grows as $\sim \gamma _{k}n$. By checking the case with $n=1$, we easily have ${\frac {1}{k}}<\gamma _{k}\leq 1$. The exact value of even $\gamma _{2}$, however, is only known to be between 0.788 and 0.827.[13] See also • Apparent molar property – Difference in properties of one mole of substance in a mixture vs. an ideal solution • Choquet integral • Superadditivity • Triangle inequality – Property of geometry, also used to generalize the notion of "distance" in metric spaces Notes 1. Fekete, M. (1923). "Über die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten". Mathematische Zeitschrift. 17 (1): 228–249.
doi:10.1007/BF01504345. S2CID 186223729. 2. de Bruijn, N.G.; Erdös, P. (1952). "Some linear and some quadratic recursion formulas. II". Nederl. Akad. Wetensch. Proc. Ser. A. 55: 152–163. doi:10.1016/S1385-7258(52)50021-0. (The same as Indagationes Math. 14.) See also Steele 1997, Theorem 1.9.2. 3. Michael J. Steele. "Probability theory and combinatorial optimization". SIAM, Philadelphia (1997). ISBN 0-89871-380-3. 4. Michael J. Steele (2011). CBMS Lectures on Probability Theory and Combinatorial Optimization. University of Cambridge. 5. Lindenstrauss, Elon; Weiss, Benjamin (2000). "Mean topological dimension". Israel Journal of Mathematics. 115 (1): 1–24. CiteSeerX 10.1.1.30.3552. doi:10.1007/BF02810577. ISSN 0021-2172. Theorem 6.1 6. Ornstein, Donald S.; Weiss, Benjamin (1987). "Entropy and isomorphism theorems for actions of amenable groups". Journal d'Analyse Mathématique. 48 (1): 1–141. doi:10.1007/BF02790325. ISSN 0021-7670. 7. Gromov, Misha (1999). "Topological Invariants of Dynamical Systems and Spaces of Holomorphic Maps: I". Mathematical Physics, Analysis and Geometry. 2 (4): 323–415. doi:10.1023/A:1009841100168. ISSN 1385-0172. S2CID 117100302. 8. Ceccherini-Silberstein, Tullio; Krieger, Fabrice; Coornaert, Michel (2014). "An analogue of Fekete's lemma for subadditive functions on cancellative amenable semigroups". Journal d'Analyse Mathématique. 124: 59–81. arXiv:1209.6179. doi:10.1007/s11854-014-0027-4. Theorem 1.1 9. Hille 1948, Theorem 6.6.1. (Measurability is stipulated in Sect. 6.2 "Preliminaries".) 10. Schechter, Eric (1997). Handbook of Analysis and its Foundations. San Diego: Academic Press. ISBN 978-0-12-622760-4., p.314,12.25 11. Rau-Bredow, H. (2019). "Bigger Is Not Always Safer: A Critical Analysis of the Subadditivity Assumption for Coherent Risk Measures". Risks. 7 (3): 91. doi:10.3390/risks7030091. 12. Shur, Arseny (2012). "Growth properties of power-free languages". Computer Science Review. 6 (5–6): 187–208. doi:10.1016/j.cosrev.2012.09.001. 13. Lueker, George S. (May 2009). "Improved bounds on the average length of longest common subsequences". Journal of the ACM. 56 (3): 1–38. doi:10.1145/1516512.1516519. ISSN 0004-5411. S2CID 7232681. References • György Pólya and Gábor Szegő. "Problems and theorems in analysis, volume 1". Springer-Verlag, New York (1976). ISBN 0-387-05672-6. • Einar Hille. "Functional analysis and semi-groups". American Mathematical Society, New York (1948). • N.H. Bingham, A.J. Ostaszewski. "Generic subadditive functions." Proceedings of American Mathematical Society, vol. 136, no. 12 (2008), pp. 4257–4266. External links This article incorporates material from subadditivity on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
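Fekete's lemma stated above also lends itself to a quick numerical illustration. A minimal sketch, using the arbitrarily chosen subadditive sequence a_n = sqrt(n), for which the limit and the infimum are both 0:

```python
import math

# a_n = sqrt(n) is subadditive: sqrt(n + m) <= sqrt(n) + sqrt(m)
def a(n):
    return math.sqrt(n)

# spot-check subadditivity on a grid of index pairs
assert all(a(n + m) <= a(n) + a(m)
           for n in range(1, 200) for m in range(1, 200))

# Fekete: a_n / n converges to inf_n a_n / n (here both are 0)
for n in (10, 100, 10_000, 1_000_000):
    print(n, a(n) / n)   # ratios decrease toward the infimum 0
```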
Subadditive set function In mathematics, a subadditive set function is a set function whose value, informally, has the property that the value of function on the union of two sets is at most the sum of values of the function on each of the sets. This is thematically related to the subadditivity property of real-valued functions. Definition Let $\Omega $ be a set and $f\colon 2^{\Omega }\rightarrow \mathbb {R} $ be a set function, where $2^{\Omega }$ denotes the power set of $\Omega $. The function f is subadditive if for each subset $S$ and $T$ of $\Omega $, we have $f(S)+f(T)\geq f(S\cup T)$.[1][2] Examples of subadditive functions Every non-negative submodular set function is subadditive (the family of non-negative submodular functions is strictly contained in the family of subadditive functions). The function that counts the number of sets required to cover a given set is subadditive. Let $T_{1},\dotsc ,T_{m}\subseteq \Omega $ such that $\cup _{i=1}^{m}T_{i}=\Omega $. Define $f$ as the minimum number of subsets required to cover a given set. Formally, $f(S)$ is the minimum number $t$ such that there are sets $T_{i_{1}},\dotsc ,T_{i_{t}}$ satisfying $S\subseteq \cup _{j=1}^{t}T_{i_{j}}$. Then $f$ is subadditive. The maximum of additive set functions is subadditive (dually, the minimum of additive functions is superadditive). Formally, for each $i\in \{1,\dotsc ,m\}$, let $a_{i}\colon \Omega \to \mathbb {R} _{+}$ be additive set functions. Then $f(S)=\max _{i}\left(\sum _{x\in S}a_{i}(x)\right)$ is a subadditive set function. Fractionally subadditive set functions are a generalization of submodular functions and a special case of subadditive functions. A subadditive function $f$ is furthermore fractionally subadditive if it satisfies the following definition.[1] For every $S\subseteq \Omega $, every $X_{1},\dotsc ,X_{n}\subseteq \Omega $, and every $\alpha _{1},\dotsc ,\alpha _{n}\in [0,1]$, if $1_{S}\leq \sum _{i=1}^{n}\alpha _{i}1_{X_{i}}$, then $f(S)\leq \sum _{i=1}^{n}\alpha _{i}f(X_{i})$. The set of fractionally subadditive functions equals the set of functions that can be expressed as the maximum of additive functions, as in the example in the previous paragraph.[1] See also • Submodular set function • Utility functions on indivisible goods Citations 1. Feige, Uriel (2009). "On Maximizing Welfare when Utility Functions are Subadditive". SIAM Journal on Computing. 39 (1): 122–142. doi:10.1137/070680977. 2. Dobzinski, Shahar; Nisan, Noam; Schapira, Michael (2010). "Approximation Algorithms for Combinatorial Auctions with Complement-Free Bidders". Mathematics of Operations Research. 35 (1): 1–13. doi:10.1145/1060590.1060681. S2CID 2685385.
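The max-of-additive example admits a direct finite check. A minimal sketch (the ground set and the weight tables are arbitrary choices, not from the cited papers) verifying f(S) + f(T) ≥ f(S ∪ T) over all pairs of subsets:

```python
from itertools import chain, combinations

ground = [0, 1, 2, 3]
# two additive weight functions with non-negative values
weights = [
    {0: 1.0, 1: 0.5, 2: 2.0, 3: 0.0},
    {0: 0.2, 1: 3.0, 2: 0.1, 3: 1.5},
]

def f(S):
    # maximum of additive set functions, as in the example above
    return max(sum(w[x] for x in S) for w in weights)

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# subadditivity: f(S) + f(T) >= f(S ∪ T) for all subsets S, T
print(all(f(S) + f(T) >= f(set(S) | set(T))
          for S in subsets(ground) for T in subsets(ground)))  # True
```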
Subalgebra In mathematics, a subalgebra is a subset of an algebra, closed under all its operations, and carrying the induced operations. "Algebra", when referring to a structure, often means a vector space or module equipped with an additional bilinear operation. Algebras in universal algebra are far more general: they are a common generalisation of all algebraic structures. "Subalgebra" can refer to either case. Subalgebras for algebras over a ring or field A subalgebra of an algebra over a commutative ring or field is a vector subspace which is closed under the multiplication of vectors. The restriction of the algebra multiplication makes it an algebra over the same ring or field. This notion also applies to most specializations, where the multiplication must satisfy additional properties, e.g. to associative algebras or to Lie algebras. Only for unital algebras is there a stronger notion, of unital subalgebra, for which it is also required that the unit of the subalgebra be the unit of the bigger algebra. Example The 2×2-matrices over the reals form a unital algebra in the obvious way. The 2×2-matrices for which all entries are zero, except for the first one on the diagonal, form a subalgebra. It is also unital, but it is not a unital subalgebra. Subalgebras in universal algebra Main article: Substructure (mathematics) In universal algebra, a subalgebra of an algebra A is a subset S of A that also has the structure of an algebra of the same type when the algebraic operations are restricted to S. If the axioms of a kind of algebraic structure are described by equational laws, as is typically the case in universal algebra, then the only thing that needs to be checked is that S is closed under the operations. Some authors consider algebras with partial functions. There are various ways of defining subalgebras for these. Another generalization of algebras is to allow relations. These more general algebras are usually called structures, and they are studied in model theory and in theoretical computer science. For structures with relations there are notions of weak and of induced substructures. Example For example, the standard signature for groups in universal algebra is (•, −1, 1). (Inversion and unit are needed to get the right notions of homomorphism and so that the group laws can be expressed as equations.) Therefore, a subgroup of a group G is a subset S of G such that: • the identity e of G belongs to S (so that S is closed under the identity constant operation); • whenever x belongs to S, so does x−1 (so that S is closed under the inverse operation); • whenever x and y belong to S, so does x • y (so that S is closed under the group's multiplication operation). References • Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64243-5 • Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag
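The matrix example above can be verified numerically. A minimal sketch (the sample scalars are arbitrary): the span of the matrix unit E11 inside the 2×2 real matrices is closed under multiplication and has its own unit, which is not the identity of the bigger algebra:

```python
import numpy as np

E11 = np.array([[1.0, 0.0],
                [0.0, 0.0]])      # basis of the subalgebra
I2 = np.eye(2)                    # unit of the full matrix algebra

A = 3.0 * E11                     # sample elements of the subalgebra
B = -2.0 * E11

print(np.allclose(A @ B, -6.0 * E11))  # True: closed under multiplication
# E11 acts as a unit inside the subalgebra ...
print(np.allclose(E11 @ A, A) and np.allclose(A @ E11, A))  # True
# ... but it differs from the unit of the bigger algebra:
print(np.allclose(E11, I2))       # False, hence not a unital subalgebra
```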
Subbundle In mathematics, a subbundle $U$ of a vector bundle $V$ on a topological space $X$ is a collection of linear subspaces $U_{x}$ of the fibers $V_{x}$ of $V$ at $x$ in $X,$ that make up a vector bundle in their own right. In connection with foliation theory, a subbundle of the tangent bundle of a smooth manifold may be called a distribution (of tangent vectors). If a set of vector fields $Y_{k}$ spans the vector space $U,$ and all Lie commutators $\left[Y_{i},Y_{j}\right]$ are linear combinations of the $Y_{k},$ then one says that $U$ is an involutive distribution. See also • Frobenius theorem (differential topology) – On finding a maximal set of solutions of a system of first-order homogeneous linear PDEs • Sub-Riemannian manifold – Type of generalization of a Riemannian manifold
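The involutivity condition can be tested symbolically, computing the commutator of coordinate vector fields as [Y1, Y2] = (DY2)Y1 − (DY1)Y2, with D the Jacobian. A minimal sketch with an assumed example pair of fields (the standard contact-type distribution on R^3, which fails the test):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = sp.Matrix([x, y, z])

def lie_bracket(Y1, Y2):
    # [Y1, Y2] = (D Y2) Y1 - (D Y1) Y2, with D the Jacobian in the coordinates
    return Y2.jacobian(coords) * Y1 - Y1.jacobian(coords) * Y2

Y1 = sp.Matrix([1, 0, 0])   # the field d/dx
Y2 = sp.Matrix([0, 1, x])   # the field d/dy + x d/dz

bracket = lie_bracket(Y1, Y2)
print(bracket.T)            # Matrix([[0, 0, 1]]), i.e. the field d/dz

# if the bracket were a combination of Y1 and Y2, the three columns
# would be linearly dependent; a non-zero determinant rules that out
M = sp.Matrix.hstack(Y1, Y2, bracket)
print(M.det())              # 1, so this distribution is not involutive
```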
Subclass (set theory) In set theory and its applications throughout mathematics, a subclass is a class contained in some other class in the same way that a subset is a set contained in some other set. That is, given classes A and B, A is a subclass of B if and only if every member of A is also a member of B.[1] If A and B are sets, then of course A is also a subset of B. In fact, when using a definition of classes that requires them to be first-order definable, it is enough that B be a set; the axiom of specification essentially says that A must then also be a set. As with subsets, the empty set is a subclass of every class, and any class is a subclass of itself. But additionally, every class is a subclass of the class of all sets. Accordingly, the subclass relation makes the collection of all classes into a Boolean lattice, which the subset relation does not do for the collection of all sets. Instead, the collection of all sets is an ideal in the collection of all classes. (Of course, the collection of all classes is something larger than even a class!) References 1. Charles C.Pinter (2013). A Book of Set Theory. Dover Publications Inc. p. 240. ISBN 978-0486497082.
Concept class In computational learning theory in mathematics, a concept over a domain X is a total Boolean function over X. A concept class is a class of concepts. Concept classes are a subject of computational learning theory. Concept class terminology frequently appears in model theory associated with probably approximately correct (PAC) learning.[1] In this setting, if one takes a set Y as the set of (classifier output) labels and X as a set of examples, then a map $c:X\to Y$ from examples to classifier labels (where $Y=\{0,1\}$, so that c may be identified with a subset of X) is said to be a concept. A concept class $C$ is then a collection of such concepts. Given a class of concepts C, a subclass D is reachable if there exists a sample s such that D contains exactly those concepts in C that are extensions of s.[2] Not every subclass is reachable.[2] Background A sample $s$ is a partial function from $X$ to $\{0,1\}$.[2] Identifying a concept with its characteristic function mapping $X$ to $\{0,1\}$, it is a special case of a sample.[2] Two samples are consistent if they agree on the intersection of their domains.[2] A sample $s'$ extends another sample $s$ if the two are consistent and the domain of $s$ is contained in the domain of $s'$.[2] Examples Suppose that $C=S^{+}(X)$. Then: • the subclass $\{\{x\}\}$ is reachable with the sample $s=\{(x,1)\}$;[2] • the subclasses $S^{+}(Y)$ for $Y\subseteq X$ are reachable with a sample that maps the elements of $X-Y$ to zero;[2] • the subclass $S(X)$, which consists of the singleton sets, is not reachable.[2] Applications Let $C$ be some concept class. For any concept $c\in C$, we call $c$ $1/d$-good for a positive integer $d$ if, for all $x\in X$, at least $1/d$ of the concepts in $C$ agree with $c$ on the classification of $x$.[2] The fingerprint dimension $FD(C)$ of the entire concept class $C$ is the least positive integer $d$ such that every reachable subclass $C'\subseteq C$ contains a concept that is $1/d$-good for it.[2] This quantity can be used to bound the minimum number of equivalence queries needed to learn a class of concepts according to the following inequality: $FD(C)-1\leq \#EQ(C)\leq \lceil FD(C)\ln(|C|)\rceil $.[2] References 1. Chase, H., & Freitag, J. (2018). Model Theory and Machine Learning. arXiv preprint arXiv:1801.06566. 2. Angluin, D. (2004). "Queries revisited" (PDF). Theoretical Computer Science. 313 (2): 188–191. doi:10.1016/j.tcs.2003.11.004.
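Reachability can be made concrete on a toy domain. A minimal sketch (a hypothetical instance, not from Angluin's paper): concepts over a 3-point domain are bit-tuples, and a sample reaches the subclass of all concepts extending it:

```python
from itertools import product

X = range(3)
C = set(product((0, 1), repeat=len(X)))   # all 8 concepts over a 3-point domain

def extends(concept, sample):
    # a concept (a total function, here a bit-tuple) extends a sample
    # (a partial function) if they agree wherever the sample is defined
    return all(concept[x] == bit for x, bit in sample.items())

sample = {0: 1}                            # partial function sending point 0 to 1
D = {c for c in C if extends(c, sample)}
print(sorted(D))   # the 4 concepts with first bit 1: the subclass this sample reaches
```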
Subcoloring In graph theory, a subcoloring is an assignment of colors to a graph's vertices such that each color class induces a vertex disjoint union of cliques. That is, each color class should form a cluster graph. The subchromatic number χS(G) of a graph G is the fewest colors needed in any subcoloring of G. Subcoloring and subchromatic number were introduced by Albertson et al. (1989). Every proper coloring and cocoloring of a graph are also subcolorings, so the subchromatic number of any graph is at most equal to the cochromatic number, which is at most equal to the chromatic number. Subcoloring is as difficult to solve exactly as coloring, in the sense that (like coloring) it is NP-complete. More specifically, the problem of determining whether a planar graph has subchromatic number at most 2 is NP-complete, even if it is a • triangle-free graph with maximum degree 4 (Gimbel & Hartman 2003) (Fiala et al. 2003), • comparability graph with maximum degree 4 (Ochem 2017), • line graph of a bipartite graph with maximum degree 4 (Gonçalves & Ochem 2009), • graph with girth 5 (Montassier & Ochem 2015). The subchromatic number of a cograph can be computed in polynomial time (Fiala et al. 2003). For every fixed integer r, it is possible to decide in polynomial time whether the subchromatic number of interval and permutation graphs is at most r (Broersma et al. 2002). References • Albertson, M. O.; Jamison, R. E.; Hedetniemi, S. T.; Locke, S. C. (1989), "The subchromatic number of a graph", Discrete Mathematics, 74 (1–2): 33–49, doi:10.1016/0012-365X(89)90196-9. • Broersma, Hajo; Fomin, Fedor V.; Nesetril, Jaroslav; Woeginger, Gerhard (2002), "More About Subcolorings", Computing, 69 (3): 187–203, doi:10.1007/s00607-002-1461-1, S2CID 24777938. • Fiala, J.; Klaus, J.; Le, V. B.; Seidel, E. (2003), "Graph Subcolorings: Complexity and Algorithms", SIAM Journal on Discrete Mathematics, 16 (4): 635–650, CiteSeerX 10.1.1.3.183, doi:10.1137/S0895480101395245. • Gimbel, John; Hartman, Chris (2003), "Subcolorings and the subchromatic number of a graph", Discrete Mathematics, 272 (2–3): 139–154, doi:10.1016/S0012-365X(03)00177-8. • Gonçalves, Daniel; Ochem, Pascal (2009), "On star and caterpillar arboricity", Discrete Mathematics, 309 (11): 3694–3702, doi:10.1016/j.disc.2008.01.041. • Montassier, Mickael; Ochem, Pascal (2015), "Near-Colorings: Non-Colorable Graphs and NP-Completeness", Electronic Journal of Combinatorics, 22 (1): #P1.57, arXiv:1306.0752, doi:10.37236/3509, S2CID 59507. • Ochem, Pascal (2017), "2-subcoloring is NP-complete for planar comparability graphs", Information Processing Letters, 128: 46–48, arXiv:1702.01283, doi:10.1016/j.ipl.2017.08.004, S2CID 22108461.
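Since a color class must induce a disjoint union of cliques, equivalently contain no induced path on three vertices, the subchromatic number of a small graph can be computed by brute force. A minimal sketch (helper names are illustrative); on the 5-cycle it returns 2:

```python
from itertools import combinations, product

def is_cluster(vertices, adj):
    # a graph is a disjoint union of cliques iff it has no induced P3:
    # no vertex may have two neighbours that are themselves non-adjacent
    for v in vertices:
        nbrs = [u for u in vertices if u in adj[v]]
        for a, b in combinations(nbrs, 2):
            if b not in adj[a]:
                return False
    return True

def subchromatic_number(adj):
    n = len(adj)
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            classes = [[v for v in range(n) if coloring[v] == c] for c in range(k)]
            if all(is_cluster(cl, adj) for cl in classes):
                return k

# the 5-cycle C5: chromatic number 3, but subchromatic number 2
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(subchromatic_number(C5))  # 2
```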
Subcompact cardinal In mathematics, a subcompact cardinal is a certain kind of large cardinal number. A cardinal number κ is subcompact if and only if for every A ⊂ H(κ+) there is a non-trivial elementary embedding j:(H(μ+), B) → (H(κ+), A) (where H(κ+) is the set of all sets of cardinality hereditarily less than κ+) with critical point μ and j(μ) = κ. Analogously, κ is a quasicompact cardinal if and only if for every A ⊂ H(κ+) there is a non-trivial elementary embedding j:(H(κ+), A) → (H(μ+), B) with critical point κ and j(κ) = μ. H(λ) consists of all sets whose transitive closure has cardinality less than λ. Every quasicompact cardinal is subcompact. Quasicompactness is a strengthening of subcompactness in that it projects large cardinal properties upwards. The relationship is analogous to that of extendible versus supercompact cardinals. Quasicompactness may be viewed as a strengthened or "boldface" version of 1-extendibility. Existence of subcompact cardinals implies existence of many 1-extendible cardinals, and hence many superstrong cardinals. Existence of a 2κ-supercompact cardinal κ implies existence of many quasicompact cardinals. Subcompact cardinals are noteworthy as the least large cardinals implying a failure of the square principle. If κ is subcompact, then the square principle fails at κ. Canonical inner models at the level of subcompact cardinals satisfy the square principle at all but subcompact cardinals. (Existence of such models has not yet been proved, but in any case the square principle can be forced for weaker cardinals.) Quasicompactness is one of the strongest large cardinal properties that can be witnessed by current inner models that do not use long extenders. For current inner models, the elementary embeddings included are determined by their effect on P(κ) (as computed at the stage the embedding is included), where κ is the critical point. This prevents them from witnessing even a κ+ strongly compact cardinal κ. Subcompact and quasicompact cardinals were defined by Ronald Jensen. References • "Square in Core Models" in the September 2001 issue of the Bulletin of Symbolic Logic
Subcountability In constructive mathematics, a collection $X$ is subcountable if there exists a partial surjection from the natural numbers onto it. This may be expressed as $\exists (I\subseteq {\mathbb {N} }).\,\exists f.\,(f\colon I\twoheadrightarrow X),$ where $f\colon I\twoheadrightarrow X$ denotes that $f$ is a surjective function from $I$ onto $X$. The surjection is a member of ${\mathbb {N} }\rightharpoonup X$ and here the subclass $I$ of ${\mathbb {N} }$ is required to be a set. In other words, all elements of a subcountable collection $X$ are functionally in the image of an indexing set of counting numbers $I\subseteq {\mathbb {N} }$ and thus the set $X$ can be understood as being dominated by the countable set ${\mathbb {N} }$. Discussion Nomenclature Note that the nomenclature of countability and finiteness properties varies substantially, in part because many of them coincide when assuming excluded middle. To reiterate, the discussion here concerns the property defined in terms of surjections onto the set $X$ being characterized. The language here is common in constructive set theory texts, but the name subcountable has otherwise also been given to properties in terms of injections out of the set being characterized. The set ${\mathbb {N} }$ in the definition can also be abstracted away, and in terms of the more general notion $X$ may be called a subquotient of ${\mathbb {N} }$. Example Important cases are where the set in question is some subclass of a bigger class of functions as studied in computability theory. There cannot be a computable surjection $n\mapsto f_{n}$ from ${\mathbb {N} }$ onto the set of total computable functions $X$, as demonstrated via the function $n\mapsto f_{n}(n)+1$ from the diagonal construction, which could never be in such a surjection's image. However, via the codes of all possible partial computable functions, which also allows non-terminating programs, subsets of functions such as the total functions are seen to be subcountable sets: the total functions arise as the image of some strict subset $I$ of the natural numbers. Since $X$ is dominated by an uncomputable, and so constructively uncountable, set of numbers, the name subcountable conveys that the constructively uncountable set $X$ is no bigger than ${\mathbb {N} }$. Note that no effective map between all counting numbers ${\mathbb {N} }$ and the unbounded and non-finite indexing set $I$ is asserted here, merely the subset relation $I\subseteq {\mathbb {N} }$. Being total is famously not a decidable property. By Rice's theorem on index sets, most domains of indices are, in fact, not computable sets. The demonstration that $X$ is subcountable also implies that it is classically (non-constructively) formally countable, but this does not reflect any effective countability. In other words, the fact that an algorithm listing all total functions in sequence cannot be coded up is not captured by classical axioms regarding set and function existence. We see that, depending on the axioms of a theory, subcountability may be more likely to be provable than countability. Relation to excluded middle Constructive logics and set theories tie the existence of a function between infinite (non-finite) sets to questions of decidability and possibly of effectivity. There, the subcountability property splits from countability and is thus not a redundant notion. The indexing set $I$ of natural numbers may be posited to exist, e.g. as a subset via set theoretical axioms like the separation axiom schema.
Then by definition of $I\subseteq {\mathbb {N} }$, $\forall (i\in I).(i\in {\mathbb {N} }).$ But this set may then still fail to be detachable, in the sense that $\forall (n\in {\mathbb {N} }).{\big (}(n\in I)\lor \neg (n\in I){\big )}$ may not be provable without assuming it as an axiom. For this reason, one may fail to effectively count the subcountable set $X$ if one fails to map the counting numbers ${\mathbb {N} }$ into the indexing set $I$. Being countable implies being subcountable. But the converse does not generally hold without asserting the law of excluded middle, i.e. that $\phi \lor \neg \phi $ holds for every proposition $\phi $. In classical mathematics Asserting all laws of classical logic, the disjunctive property of $I$ discussed above indeed does hold for all sets. Then, for nonempty $X$, the properties numerable (which here shall mean that $X$ injects into ${\mathbb {N} }$), countable (some function on ${\mathbb {N} }$ has $X$ as its range), subcountable (a subset of ${\mathbb {N} }$ surjects onto $X$) and also not $\omega $-productive (a countability property essentially defined in terms of subsets of $X$) are all equivalent and express that a set is finite or countably infinite. Non-classical assertions Without the law of excluded middle, it can be consistent to assert the subcountability of sets that classically (i.e. non-constructively) exceed the cardinality of the natural numbers. Note that in a constructive setting, a countability claim about the function space ${\mathbb {N} }^{\mathbb {N} }$ out of the full set ${\mathbb {N} }$, as in ${\mathbb {N} }\twoheadrightarrow {\mathbb {N} }^{\mathbb {N} }$, may be disproven. But subcountability $I\twoheadrightarrow {\mathbb {N} }^{\mathbb {N} }$ of an uncountable set ${\mathbb {N} }^{\mathbb {N} }$ by a set $I\subseteq {\mathbb {N} }$ that is not effectively detachable from ${\mathbb {N} }$ may be permitted. As ${\mathbb {N} }^{\mathbb {N} }$ is uncountable and classically in turn provably not subcountable, the classical framework with its large function space is incompatible with the constructive Church's thesis, an axiom of Russian constructivism. Subcountable and ω-productive are mutually exclusive A set $X$ shall be called $\omega $-productive if, whenever any of its subsets $W\subset X$ is the range of some partial function on ${\mathbb {N} }$, there always exists an element $d\in X\setminus W$ that remains in the complement of that range.[1] If there exists any surjection onto some $X$, then its corresponding complement as described would equal the empty set $X\setminus X$, and so a subcountable set is never $\omega $-productive. As defined above, the property of being $\omega $-productive associates the range $W$ of any partial function to a particular value $d\in X$ not in the function's range, $d\notin W$. In this way, a set $X$ being $\omega $-productive speaks for how hard it is to generate all the elements of it: they cannot be generated from the naturals using a single function. The $\omega $-productivity property constitutes an obstruction to subcountability. As this also implies uncountability, diagonal arguments often involve this notion, explicitly since the late seventies. One may establish the impossibility of computable enumerability of $X$ by considering only the computably enumerable subsets $W$ and one may require the set of all obstructing $d$'s to be the image of a total recursive so-called production function.
In set theory, where partial functions are modeled as collections of pairs, the space ${\mathbb {N} }\rightharpoonup X$ given as $\cup _{I\subseteq {\mathbb {N} }}X^{I}$ exactly holds all partial functions on ${\mathbb {N} }$ that have, as their range, only subsets $W$ of $X$. For an $\omega $-productive set $X$ one finds $\forall (w\in ({\mathbb {N} }\rightharpoonup X)).\exists (d\in X).\neg \exists (n\in {\mathbb {N} }).w(n)=d.$ Read constructively, this associates any partial function $w$ with an element $d$ not in that function's range. This property emphasizes the incompatibility of an $\omega $-productive set $X$ with any surjective (possibly partial) function. Below this is applied in the study of subcountability assumptions. Set theories Cantorian arguments on subsets of the naturals As reference theory we look at the constructive set theory CZF, which has Replacement, Bounded Separation, strong Infinity, is agnostic towards the existence of power sets, but includes the axiom that asserts that any function space $Y^{X}$ is a set, given that $X,Y$ are also sets. In this theory, it is moreover consistent to assert that every set is subcountable. The compatibility of various further axioms is discussed in this section by means of possible surjections on an infinite set of counting numbers $I\subseteq {\mathbb {N} }$. Here ${\mathbb {N} }$ shall denote a model of the standard natural numbers. Recall that for functions $g\colon X\to Y$, by definition of total functionality, there exists a unique return value for all values $x\in X$ in the domain, $\exists !(y\in Y).g(x)=y,$ and for a subcountable set, the surjection is still total on a subset of ${\mathbb {N} }$. Constructively, fewer such existential claims will be provable than classically. The situations discussed below (onto power classes versus onto function spaces) are different from one another: As opposed to general subclass-defining predicates and their truth values (not necessarily provably just true and false), a function (which in programming terms is terminating) makes accessible information about data for all its subdomains (subsets of $X$). Acting as characteristic functions for their subsets, functions decide subset membership through their return values. As membership in a generally defined set is not necessarily decidable, the (total) functions $X\to \{0,1\}$ are not automatically in bijection with all the subsets of $X$. So constructively, subsets are a more elaborate concept than characteristic functions. In fact, in the context of some non-classical axioms on top of CZF, even the power class of a singleton, e.g. the class ${\mathcal {P}}\{0\}$ of all subsets of $\{0\}$, is shown to be a proper class. Onto power classes Below, the fact is used that the special case $(P\to \neg P)\to \neg P$ of the negation introduction law implies that $P\leftrightarrow \neg P$ is contradictory. For simplicity of the argument, assume ${\mathcal {P}}{\mathbb {N} }$ is a set. Then consider a subset $I\subseteq {\mathbb {N} }$ and a function $w\colon I\to {\mathcal {P}}{\mathbb {N} }$. Further, as in Cantor's theorem about power sets, define[2] $d=\{k\in {\mathbb {N} }\mid k\in I\land D(k)\}$ where $D(k)=\neg (k\in w(k)).$ This is a subclass of ${\mathbb {N} }$ defined in dependence on $w$ and it can also be written $d=\{k\in I\mid \neg (k\in w(k))\}.$ It exists as a subset via Separation.
Now the assumption that there exists a number $n\in I$ with $w(n)=d$ implies the contradiction $n\in d\,\leftrightarrow \,\neg (n\in d).$ So as a set, one finds that ${\mathcal {P}}{\mathbb {N} }$ is $\omega $-productive, in that we can define an obstructing $d$ for any given surjection. Note that the existence of a surjection $f\colon I\twoheadrightarrow {\mathcal {P}}{\mathbb {N} }$ would automatically make ${\mathcal {P}}{\mathbb {N} }$ into a set, via Replacement in CZF, and so this function existence is unconditionally impossible. We conclude that the subcountability axiom, asserting all sets are subcountable, is incompatible with ${\mathcal {P}}{\mathbb {N} }$ being a set, as implied e.g. by the power set axiom. In classical ZFC without Powerset or any of its equivalents, it is also consistent that all subclasses of the reals which are sets are subcountable. In that context, this translates to the statement that all sets of real numbers are countable.[3] Of course, that theory does not have the function space set ${\mathbb {N} }^{\mathbb {N} }$. It is also noteworthy that for any function $h\colon {\mathcal {P}}Y\to Y$, a similar analysis using the subset of its range $\{y\in Y\mid \exists (S\in {\mathcal {P}}Y).y=h(S)\land y\notin S\}$ shows that $h$ cannot be an injection. The situation is more complicated for function spaces.[4] Onto function spaces By definition of function spaces, the set ${\mathbb {N} }^{\mathbb {N} }$ holds those subsets of the set ${\mathbb {N} }\times {\mathbb {N} }$ which are provably total and functional. Asserting the permitted subcountability of all sets turns, in particular, ${\mathbb {N} }^{\mathbb {N} }$ into a subcountable set. So here we consider a surjective function $f\colon I\twoheadrightarrow {\mathbb {N} }^{\mathbb {N} }$ and the subset of ${\mathbb {N} }\times {\mathbb {N} }$ separated as[5] ${\Big \{}\langle n,y\rangle \in {\mathbb {N} }\times {\mathbb {N} }\mid {\big (}n\in I\land D(n,y){\big )}\lor {\big (}\neg (n\in I)\land y=1{\big )}{\Big \}}$ with the diagonalizing predicate defined as $D(n,y)={\big (}\neg (f(n)(n)\geq 1)\land y=1{\big )}\lor {\big (}\neg (f(n)(n)=0)\land y=0{\big )}$ which we can also phrase without the negations as $D(n,y)={\big (}f(n)(n)=0\land y=1{\big )}\lor {\big (}f(n)(n)\geq 1\land y=0{\big )}.$ This set is classically provably a function in ${\mathbb {N} }^{\mathbb {N} }$, designed so that its value at $n$ differs from $f(n)(n)$. And it can classically be used to prove that the existence of $f$ as a surjection is actually contradictory. However, constructively, unless the proposition $n\in I$ in its definition is decidable so that the set actually defines a functional assignment, we cannot prove this set to be a member of the function space. And so we just cannot draw the classical conclusion. In this fashion, subcountability of ${\mathbb {N} }^{\mathbb {N} }$ is permitted, and indeed models of the theory exist. Nevertheless, also in the case of CZF, the existence of a full surjection ${\mathbb {N} }\twoheadrightarrow {\mathbb {N} }^{\mathbb {N} }$, with domain ${\mathbb {N} }$, is indeed contradictory. The decidable membership of $I={\mathbb {N} }$ makes the set also not countable, i.e. uncountable. Beyond these observations, also note that for any non-zero number $a$, the functions $i\mapsto f(i)(i)+a$ in $I\to {\mathbb {N} }$ involving the surjection $f$ cannot be extended to all of ${\mathbb {N} }$ by a similar contradiction argument.
This can be expressed as saying that there are then partial functions that cannot be extended to full functions in ${\mathbb {N} }\to {\mathbb {N} }$. Note that when given an $n\in {\mathbb {N} }$, one cannot necessarily decide whether $n\in I$, and so one cannot even decide whether the value of a potential function extension on $n$ is already determined for the previously characterized surjection $f$. The subcountability axiom, asserting all sets are subcountable, is incompatible with any new axiom making $I$ countable, including LEM. Models The above analysis affects formal properties of codings of $\mathbb {R} $. Models for the non-classical extension of CZF theory by subcountability postulates have been constructed.[6] Such non-constructive axioms can be seen as choice principles which, however, do not tend to increase the proof-theoretical strengths of the theories much. • There are models of IZF in which all sets with apartness relations are subcountable.[7] • CZF has a model in, for example, the Martin-Löf type theory ${\mathsf {ML_{1}V}}$. In this constructive set theory with classically uncountable function spaces, it is indeed consistent to assert the Subcountability Axiom, saying that every set is subcountable. As discussed, the resulting theory is in contradiction to the axiom of power set and with the law of excluded middle. • Stronger yet, some models of Kripke–Platek set theory, a theory without the function space postulate, even validate that all sets are countable. The notion of size Subcountability as a judgement of small size shall not be conflated with the standard mathematical definition of cardinality relations as defined by Cantor, with smaller cardinality being defined in terms of injections and equality of cardinalities being defined in terms of bijections. Constructively, the preorder "$\leq $" on the class of sets fails to be decidable and anti-symmetric. The function space ${\mathbb {N} }^{\mathbb {N} }$ (and also $\{0,1\}^{\mathbb {N} }$) in a moderately rich set theory is always found to be neither finite nor in bijection with ${\mathbb {N} }$, by Cantor's diagonal argument. This is what it means to be uncountable. But the argument that the cardinality of that set would thus in some sense exceed that of the natural numbers relies on a restriction to just the classical size conception and its induced ordering of sets by cardinality. As seen in the example of the function space considered in computability theory, not every infinite subset of ${\mathbb {N} }$ is necessarily in constructive bijection with ${\mathbb {N} }$, thus making room for a more refined distinction between uncountable sets in constructive contexts. Motivated by the above sections, the infinite set ${\mathbb {N} }^{\mathbb {N} }$ may be considered "smaller" than the class ${\mathcal {P}}{\mathbb {N} }$. Related properties A subcountable set has alternatively also been called subcountably indexed. The analogous notion exists in which "$\exists (I\subseteq {\mathbb {N} })$" in the definition is replaced by the existence of a set that is a subset of some finite set. This property is variously called subfinitely indexed. In category theory all these notions are subquotients. See also • Cantor's diagonal argument • Computable function • Constructive set theory • Schröder–Bernstein theorem • Subquotient • Total order References 1. Gert Smolka, Skolems paradox and constructivism, Lecture Notes, Saarland University, Jan. 2015 2.
Méhkeri, Daniel (2010), A simple computational interpretation of set theory, arXiv:1005.4380 3. Gitman, Victoria (2011), What is the theory ZFC without power set, arXiv:1110.2430 4. Bauer, A. "An injection from N^N to N", 2011 5. Bell, John L. (2004), "Russell's paradox and diagonalization in a constructive context" (PDF), in Link, Godehard (ed.), One hundred years of Russell's paradox, De Gruyter Series in Logic and its Applications, vol. 6, de Gruyter, Berlin, pp. 221–225, MR 2104745 6. Rathjen, Michael (2006), "Choice principles in constructive and classical set theories" (PDF), in Chatzidakis, Zoé; Koepke, Peter; Pohlers, Wolfram (eds.), Logic Colloquium '02: Joint proceedings of the Annual European Summer Meeting of the Association for Symbolic Logic and the Biannual Meeting of the German Association for Mathematical Logic and the Foundations of Exact Sciences (the Colloquium Logicum) held in Münster, August 3–11, 2002, Lecture Notes in Logic, vol. 27, La Jolla, CA: Association for Symbolic Logic, pp. 299–326, MR 2258712 7. McCarty, Charles (1986), "Subcountability under realizability", Notre Dame Journal of Formal Logic, 27 (2): 210–220, doi:10.1305/ndjfl/1093636613, MR 0842149
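The diagonal construction used throughout the article is easy to exhibit classically. A minimal sketch (illustrative, not from the cited references): against any listed enumeration of total functions, the function n ↦ f_n(n) + 1 escapes the list:

```python
# any purported enumeration n -> f_n of total functions N -> N ...
enumeration = [
    lambda n: 0,
    lambda n: n,
    lambda n: n * n,
    lambda n: 2 ** n,
]

# ... is defeated by the diagonal function d(n) = f_n(n) + 1
def d(n):
    return enumeration[n](n) + 1

# d differs from each listed f_k at the argument k
for k, f in enumerate(enumeration):
    assert d(k) != f(k)
print([d(n) for n in range(4)])  # [1, 2, 5, 9]
```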
Diagonal In geometry, a diagonal is a line segment joining two vertices of a polygon or polyhedron, when those vertices are not on the same edge. Informally, any sloping line is called diagonal. The word diagonal derives from the ancient Greek διαγώνιος diagonios,[1] "from angle to angle" (from διά- dia-, "through", "across" and γωνία gonia, "angle", related to gony "knee"); it was used by both Strabo[2] and Euclid[3] to refer to a line connecting two vertices of a rhombus or cuboid,[4] and later adopted into Latin as diagonus ("slanting line"). In matrix algebra, the diagonal of a square matrix consists of the entries on the line from the top left corner to the bottom right corner. There are also many other non-mathematical uses. Non-mathematical uses In engineering, a diagonal brace is a beam used to brace a rectangular structure (such as scaffolding) to withstand strong forces pushing into it; although called a diagonal, due to practical considerations diagonal braces are often not connected to the corners of the rectangle. Diagonal pliers are wire-cutting pliers whose cutting edges intersect the joint rivet at an angle, or "on a diagonal", hence the name. A diagonal lashing is a type of lashing used to bind spars or poles together, applied so that the lashings cross over the poles at an angle. In association football, the diagonal system of control is the method referees and assistant referees use to position themselves in one of the four quadrants of the pitch. Polygons As applied to a polygon, a diagonal is a line segment joining any two non-consecutive vertices. Therefore, a quadrilateral has two diagonals, joining opposite pairs of vertices. For any convex polygon, all the diagonals are inside the polygon, but for re-entrant polygons, some diagonals are outside of the polygon. Any n-sided polygon (n ≥ 3), convex or concave, has ${\tfrac {n(n-3)}{2}}$ total diagonals, as each vertex has diagonals to all other vertices except itself and the two adjacent vertices, or n − 3 diagonals, and each diagonal is shared by two vertices. In general, a regular n-sided polygon has $\lfloor {\frac {n-2}{2}}\rfloor $ diagonals of distinct lengths, following the pattern 1, 1, 2, 2, 3, 3, ... starting from a square. The number of diagonals for n = 3 to 42 sides is: 3: 0, 4: 2, 5: 5, 6: 9, 7: 14, 8: 20, 9: 27, 10: 35, 11: 44, 12: 54, 13: 65, 14: 77, 15: 90, 16: 104, 17: 119, 18: 135, 19: 152, 20: 170, 21: 189, 22: 209, 23: 230, 24: 252, 25: 275, 26: 299, 27: 324, 28: 350, 29: 377, 30: 405, 31: 434, 32: 464, 33: 495, 34: 527, 35: 560, 36: 594, 37: 629, 38: 665, 39: 702, 40: 740, 41: 779, 42: 819. Regions formed by diagonals In a convex polygon, if no three diagonals are concurrent at a single point in the interior, the number of regions that the diagonals divide the interior into is given by ${\binom {n}{4}}+{\binom {n-1}{2}}={\frac {(n-1)(n-2)(n^{2}-3n+12)}{24}}.$ For n-gons with n = 3, 4, ... the number of regions is[5] 1, 4, 11, 25, 50, 91, 154, 246... This is OEIS sequence A006522.[6] Intersections of diagonals If no three diagonals of a convex polygon are concurrent at a point in the interior, the number of interior intersections of diagonals is given by ${\binom {n}{4}}$.[7][8] This holds, for example, for any regular polygon with an odd number of sides. The formula follows from the fact that each intersection is uniquely determined by the four endpoints of the two intersecting diagonals: the number of intersections is thus the number of combinations of the n vertices four at a time.
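The three counting formulas above (diagonals, interior regions, interior intersection points) can be cross-checked for small n. A minimal sketch reproducing the values quoted above:

```python
from math import comb

def diagonals(n):
    return n * (n - 3) // 2

def regions(n):
    # interior regions cut out by the diagonals, assuming no three concurrent
    return comb(n, 4) + comb(n - 1, 2)

def intersections(n):
    # interior intersection points, assuming no three diagonals concurrent
    return comb(n, 4)

for n in range(3, 9):
    print(n, diagonals(n), regions(n), intersections(n))
# 3 0 1 0
# 4 2 4 1
# 5 5 11 5
# 6 9 25 15
# 7 14 50 35
# 8 20 91 70
```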
Regular polygons See also: Quadrilateral § Diagonals, Hexagon § Convex equilateral hexagon, and Heptagon § Diagonals and heptagonal triangle Although the number of distinct diagonals in a polygon increases as its number of sides increases, the length of any diagonal can be calculated. In a regular n-gon with side length a, the length of the xth shortest distinct diagonal is: $a\sin \left({\frac {\pi (x+1)}{n}}\right)\csc \left({\frac {\pi }{n}}\right).$ This formula shows that as the number of sides approaches infinity, the xth shortest diagonal approaches the length (x + 1)a. Additionally, the formula simplifies in the case of the shortest diagonal, x = 1: $a\sin \left({\frac {2\pi }{n}}\right)\csc \left({\frac {\pi }{n}}\right)=2a\cos \left({\frac {\pi }{n}}\right).$ If the number of sides is even, the longest diagonal will be equal to the diameter of the polygon's circumcircle because the long diagonals all intersect each other at the polygon's center. Special cases include: A square has two diagonals of equal length, which intersect at the center of the square. The ratio of a diagonal to a side is ${\sqrt {2}}\approx 1.414.$ A regular pentagon has five diagonals all of the same length. The ratio of a diagonal to a side is the golden ratio, ${\frac {1+{\sqrt {5}}}{2}}\approx 1.618.$ A regular hexagon has nine diagonals: the six shorter ones are equal to each other in length; the three longer ones are equal to each other in length and intersect each other at the center of the hexagon. The ratio of a long diagonal to a side is 2, and the ratio of a short diagonal to a side is ${\sqrt {3}}$. A regular heptagon has 14 diagonals. The seven shorter ones equal each other, and the seven longer ones equal each other. The reciprocal of the side equals the sum of the reciprocals of a short and a long diagonal. Polyhedrons See also: Face diagonal and Space diagonal A polyhedron (a solid object in three-dimensional space, bounded by two-dimensional faces) may have two different types of diagonals: face diagonals on the various faces, connecting non-adjacent vertices on the same face; and space diagonals, entirely in the interior of the polyhedron (except for the endpoints on the vertices). Higher Dimensions N-Cube The lengths of an n-dimensional hypercube's diagonals can be calculated by mathematical induction. The longest diagonal of a unit n-cube is ${\sqrt {n}}$. Additionally, there are $2^{n-1}{\binom {n}{x+1}}$ diagonals of the xth shortest length. As an example, a 5-cube has 160 diagonals of length ${\sqrt {2}}$, 160 of length ${\sqrt {3}}$, 80 of length 2, and 16 of length ${\sqrt {5}}$. Its total number of diagonals is 416. In general, an n-cube has a total of $2^{n-1}(2^{n}-n-1)$ diagonals. This follows from the more general form of ${\frac {v(v-1)}{2}}-e$ which describes the total number of face and space diagonals in convex polytopes.[9] Here, v represents the number of vertices and e represents the number of edges. Matrices For a square matrix, the diagonal (or main diagonal or principal diagonal) is the diagonal line of entries running from the top-left corner to the bottom-right corner.[10][11][12] For a matrix $A$ with row index specified by $i$ and column index specified by $j$, these would be entries $A_{ij}$ with $i=j$. For example, the identity matrix can be defined as having entries of 1 on the main diagonal and zeroes elsewhere: ${\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}$ The top-right to bottom-left diagonal is sometimes described as the minor diagonal or antidiagonal. The off-diagonal entries are those not on the main diagonal.
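The main diagonal, antidiagonal, and the superdiagonal and subdiagonal conventions described just below match the behaviour of numpy's diag; a short sketch with an arbitrary 3×3 matrix:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(np.diag(A))        # [1 5 9]  main diagonal: entries A[i][j] with j == i
print(np.diag(A, k=1))   # [2 6]    superdiagonal: j == i + 1
print(np.diag(A, k=-1))  # [4 8]    subdiagonal:   j == i - 1
print(np.fliplr(A).diagonal())  # [3 5 7]  the antidiagonal
```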
A diagonal matrix is one whose off-diagonal entries are all zero.[13][14]
A superdiagonal entry is one that is directly above and to the right of the main diagonal.[15][16] Just as diagonal entries are those $A_{ij}$ with $j=i$, the superdiagonal entries are those with $j=i+1$. For example, the non-zero entries of the following matrix all lie in the superdiagonal:
${\begin{pmatrix}0&2&0\\0&0&3\\0&0&0\end{pmatrix}}$
Likewise, a subdiagonal entry is one that is directly below and to the left of the main diagonal, that is, an entry $A_{ij}$ with $j=i-1$.[17] General matrix diagonals can be specified by an index $k$ measured relative to the main diagonal: the main diagonal has $k=0$; the superdiagonal has $k=1$; the subdiagonal has $k=-1$; and in general, the $k$-diagonal consists of the entries $A_{ij}$ with $j=i+k$.

Geometry
By analogy, the subset of the Cartesian product X × X of any set X with itself, consisting of all pairs (x, x), is called the diagonal, and is the graph of the equality relation on X or, equivalently, the graph of the identity function from X to X. This plays an important part in geometry; for example, the fixed points of a mapping F from X to itself may be obtained by intersecting the graph of F with the diagonal.
In geometric studies, the idea of intersecting the diagonal with itself is common, not directly, but by perturbing it within an equivalence class. This is related at a deep level with the Euler characteristic and the zeros of vector fields. For example, the circle S1 has Betti numbers 1, 1, 0, 0, 0, and therefore Euler characteristic 0. A geometric way of expressing this is to look at the diagonal on the two-torus S1 × S1 and observe that it can move off itself by the small motion (θ, θ) to (θ, θ + ε). In general, the intersection number of the graph of a function with the diagonal may be computed using homology via the Lefschetz fixed-point theorem; the self-intersection of the diagonal is the special case of the identity function.

See also
• Jordan normal form
• Main diagonal
• Diagonal functor

Notes
1. Online Etymology Dictionary
2. Strabo, Geography 2.1.36–37
3. Euclid, Elements book 11, proposition 28
4. Euclid, Elements book 11, proposition 38
5. Weisstein, Eric W. "Polygon Diagonal." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PolygonDiagonal.html
6. Sloane, N. J. A. (ed.). "Sequence A006522". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
7. Poonen, Bjorn; Rubinstein, Michael. "The number of intersection points made by the diagonals of a regular polygon". SIAM J. Discrete Math. 11 (1998), no. 1, 135–156; link to a version on Poonen's website
8. , beginning at 2:10
9. "Counting Diagonals of a Polyhedron – the Math Doctors".
10. Bronson (1970, p. 2)
11. Herstein (1964, p. 239)
12. Nering (1970, p. 38)
13. Herstein (1964, p. 239)
14. Nering (1970, p. 38)
15. Bronson (1970, pp. 203, 205)
16. Herstein (1964, p. 239)
17. Cullen (1966, p. 114)

References
• Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490
• Cullen, Charles G. (1966), Matrices and Linear Transformations, Reading: Addison-Wesley, LCCN 66021267
• Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
• Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646

External links
Look up diagonal in Wiktionary, the free dictionary.
• Diagonals of a polygon with interactive animation
• Polygon diagonal from MathWorld.
• Diagonal of a matrix from MathWorld.
Subderivative
In mathematics, the subderivative, subgradient, and subdifferential generalize the derivative to convex functions which are not necessarily differentiable. Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization.
Let $f:I\to \mathbb {R} $ be a real-valued convex function defined on an open interval of the real line. Such a function need not be differentiable at all points: for example, the absolute value function $f(x)=|x|$ is non-differentiable when $x=0$. However, for any $x_{0}$ in the domain of the function one can draw a line which goes through the point $(x_{0},f(x_{0}))$ and which is everywhere either touching or below the graph of f. The slope of such a line is called a subderivative.

Definition
Rigorously, a subderivative of a convex function $f:I\to \mathbb {R} $ at a point $x_{0}$ in the open interval $I$ is a real number $c$ such that $f(x)-f(x_{0})\geq c(x-x_{0})$ for all $x\in I$. By the converse of the mean value theorem, the set of subderivatives at $x_{0}$ for a convex function is a nonempty closed interval $[a,b]$, where $a$ and $b$ are the one-sided limits
$a=\lim _{x\to x_{0}^{-}}{\frac {f(x)-f(x_{0})}{x-x_{0}}},\qquad b=\lim _{x\to x_{0}^{+}}{\frac {f(x)-f(x_{0})}{x-x_{0}}}.$
The set $[a,b]$ of all subderivatives is called the subdifferential of the function $f$ at $x_{0}$, denoted by $\partial f(x_{0})$. If $f$ is convex, then its subdifferential at any point is non-empty. Moreover, if its subdifferential at $x_{0}$ contains exactly one subderivative, then $\partial f(x_{0})=\{f'(x_{0})\}$ and $f$ is differentiable at $x_{0}$.[1]

Example
Consider the function $f(x)=|x|$, which is convex. Then the subdifferential at the origin is the interval $[-1,1]$. The subdifferential at any point $x_{0}<0$ is the singleton set $\{-1\}$, while the subdifferential at any point $x_{0}>0$ is the singleton set $\{1\}$. This is similar to the sign function, but is not single-valued at $0$, instead including all possible subderivatives.

Properties
• A convex function $f:I\to \mathbb {R} $ is differentiable at $x_{0}$ if and only if the subdifferential is a singleton set, which is $\{f'(x_{0})\}$.
• A point $x_{0}$ is a global minimum of a convex function $f$ if and only if zero is contained in the subdifferential. For instance, at such a point one may draw a horizontal "subtangent line" to the graph of $f$ at $(x_{0},f(x_{0}))$. This last property is a generalization of the fact that the derivative of a function differentiable at a local minimum is zero.
• If $f$ and $g$ are convex functions with subdifferentials $\partial f(x)$ and $\partial g(x)$, with $x$ an interior point of the domain of one of the functions, then the subdifferential of $f+g$ is $\partial (f+g)(x)=\partial f(x)+\partial g(x)$ (where the addition operator denotes the Minkowski sum). This reads as "the subdifferential of a sum is the sum of the subdifferentials."[2]

The subgradient
The concepts of subderivative and subdifferential can be generalized to functions of several variables. If $f:U\to \mathbb {R} $ is a real-valued convex function defined on a convex open set in the Euclidean space $\mathbb {R} ^{n}$, a vector $v$ in that space is called a subgradient at $x_{0}\in U$ if for any $x\in U$ one has that
$f(x)-f(x_{0})\geq v\cdot (x-x_{0}),$
where the dot denotes the dot product.
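As a quick numerical illustration of the one-dimensional definitions above (a sketch added here, not from the article): for f(x) = |x|, one-sided difference quotients approximate the endpoints a = −1 and b = 1 of the subdifferential at 0, and every c between them satisfies the subgradient inequality.

    f = abs
    x0, h = 0.0, 1e-6

    # One-sided difference quotients approximate the endpoints a and b of [a, b]
    a = (f(x0 - h) - f(x0)) / (-h)    # left limit  -> -1.0
    b = (f(x0 + h) - f(x0)) / h       # right limit -> +1.0

    # Every c in [a, b] is a subderivative: f(x) - f(x0) >= c*(x - x0) for all x
    grid = [k / 10 - 5 for k in range(101)]
    for c in (-1.0, -0.3, 0.0, 0.7, 1.0):
        assert all(f(x) - f(x0) >= c * (x - x0) for x in grid)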
The set of all subgradients at $x_{0}$ is called the subdifferential at x0 and is denoted $\partial f(x_{0})$. The subdifferential is always a nonempty convex compact set. These concepts generalize further to convex functions $f:U\to \mathbb {R} $ on a convex set in a locally convex space $V$. A functional $v^{*}$ in the dual space $V^{*}$ is called the subgradient at $x_{0}$ in $U$ if for all $x\in U$, $f(x)-f(x_{0})\geq v^{*}(x-x_{0}).$ The set of all subgradients at $x_{0}$ is called the subdifferential at $x_{0}$ and is again denoted $\partial f(x_{0})$. The subdifferential is always a convex closed set. It can be an empty set; consider for example an unbounded operator, which is convex, but has no subgradient. If $f$ is continuous, the subdifferential is nonempty. History The subdifferential on convex functions was introduced by Jean Jacques Moreau and R. Tyrrell Rockafellar in the early 1960s. The generalized subdifferential for nonconvex functions was introduced by F.H. Clarke and R.T. Rockafellar in the early 1980s.[3] See also • Weak derivative • Subgradient method References 1. Rockafellar, R. T. (1970). Convex Analysis. Princeton University Press. p. 242 [Theorem 25.1]. ISBN 0-691-08069-0. 2. Lemaréchal, Claude; Hiriart-Urruty, Jean-Baptiste (2001). Fundamentals of Convex Analysis. Springer-Verlag Berlin Heidelberg. p. 183. ISBN 978-3-642-56468-0. 3. Clarke, Frank H. (1983). Optimization and nonsmooth analysis. New York: John Wiley & Sons. pp. xiii+308. ISBN 0-471-87504-X. MR 0709590. • Borwein, Jonathan; Lewis, Adrian S. (2010). Convex Analysis and Nonlinear Optimization : Theory and Examples (2nd ed.). New York: Springer. ISBN 978-0-387-31256-9. • Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (2001). Fundamentals of Convex Analysis. Springer. ISBN 3-540-42205-6. • Zălinescu, C. (2002). Convex analysis in general vector spaces. World Scientific Publishing  Co., Inc. pp. xx+367. ISBN 981-238-067-1. MR 1921556. External links • "Uses of $\lim \limits _{h\to 0}{\frac {f(x+h)-f(x-h)}{2h}}$". Stack Exchange. September 18, 2011.
Finite subdivision rule
In mathematics, a finite subdivision rule is a recursive way of dividing a polygon or other two-dimensional shape into smaller and smaller pieces. Subdivision rules in a sense are generalizations of regular geometric fractals. Instead of repeating exactly the same design over and over, they have slight variations in each stage, allowing a richer structure while maintaining the elegant style of fractals.[1] Subdivision rules have been used in architecture, biology, and computer science, as well as in the study of hyperbolic manifolds. Substitution tilings are a well-studied type of subdivision rule.

Definition
A subdivision rule takes a tiling of the plane by polygons and turns it into a new tiling by subdividing each polygon into smaller polygons. It is finite if there are only finitely many ways that every polygon can subdivide. Each way of subdividing a tile is called a tile type. Each tile type is represented by a label (usually a letter). Every tile type subdivides into smaller tile types. Each edge also gets subdivided according to finitely many edge types. Finite subdivision rules can only subdivide tilings that are made up of polygons labelled by tile types. Such tilings are called subdivision complexes for the subdivision rule. Given any subdivision complex for a subdivision rule, we can subdivide it over and over again to get a sequence of tilings.
For instance, binary subdivision has one tile type and one edge type: each quadrilateral tile subdivides into four smaller quadrilaterals. Since the only tile type is a quadrilateral, binary subdivision can only subdivide tilings made up of quadrilaterals. This means that the only subdivision complexes are tilings by quadrilaterals. The tiling can be regular, but doesn't have to be: one can start, for example, with a complex made of four quadrilaterals and subdivide it twice, with all quadrilaterals being type A tiles.

Examples of finite subdivision rules
Barycentric subdivision is an example of a subdivision rule with one edge type (that gets subdivided into two edges) and one tile type (a triangle that gets subdivided into 6 smaller triangles). Any triangulated surface is a barycentric subdivision complex.[1]
The Penrose tiling can be generated by a subdivision rule on a set of four tile types: the half-kite, the half-dart, the sun, and the star.
Certain rational maps give rise to finite subdivision rules.[2] This includes most Lattès maps.[3]
Every prime, non-split alternating knot or link complement has a subdivision rule, with some tiles that do not subdivide, corresponding to the boundary of the link complement.[4] The subdivision rules show what the night sky would look like to someone living in a knot complement; because the universe wraps around itself (i.e. is not simply connected), an observer would see the visible universe repeat itself in an infinite pattern. The subdivision rule looks different for different geometries: the subdivision rule for the trefoil knot, which is not a hyperbolic knot, differs from the subdivision rule for the Borromean rings, which is hyperbolic. In each case, the subdivision rule would act on some tiling of a sphere (i.e. the night sky), but it is easier to just draw a small part of the night sky, corresponding to a single tile being repeatedly subdivided.
Subdivision rules in higher dimensions
Subdivision rules can easily be generalized to other dimensions.[5] For instance, barycentric subdivision is used in all dimensions. Also, binary subdivision can be generalized to other dimensions (where hypercubes get divided by every midplane), as in the proof of the Heine–Borel theorem.

Rigorous definition
A finite subdivision rule $R$ consists of the following.[1]
1. A finite 2-dimensional CW complex $S_{R}$, called the subdivision complex, with a fixed cell structure such that $S_{R}$ is the union of its closed 2-cells. We assume that for each closed 2-cell ${\tilde {s}}$ of $S_{R}$ there is a CW structure $s$ on a closed 2-disk such that $s$ has at least two vertices, the vertices and edges of $s$ are contained in $\partial s$, and the characteristic map $\psi _{s}:s\rightarrow S_{R}$ which maps onto ${\tilde {s}}$ restricts to a homeomorphism onto each open cell.
2. A finite two-dimensional CW complex $R(S_{R})$, which is a subdivision of $S_{R}$.
3. A continuous cellular map $\phi _{R}:R(S_{R})\rightarrow S_{R}$ called the subdivision map, whose restriction to every open cell is a homeomorphism onto an open cell.
Each CW complex $s$ in the definition above (with its given characteristic map $\psi _{s}$) is called a tile type.
An $R$-complex for a subdivision rule $R$ is a 2-dimensional CW complex $X$ which is the union of its closed 2-cells, together with a continuous cellular map $f:X\rightarrow S_{R}$ whose restriction to each open cell is a homeomorphism. We can subdivide $X$ into a complex $R(X)$ by requiring that the induced map $f:R(X)\rightarrow R(S_{R})$ restricts to a homeomorphism onto each open cell. $R(X)$ is again an $R$-complex with map $\phi _{R}\circ f:R(X)\rightarrow S_{R}$. By repeating this process, we obtain a sequence of subdivided $R$-complexes $R^{n}(X)$ with maps $\phi _{R}^{n}\circ f:R^{n}(X)\rightarrow S_{R}$.
Binary subdivision is one example:[6] The subdivision complex can be created by gluing together the opposite edges of the square, making the subdivision complex $S_{R}$ into a torus. The subdivision map $\phi $ is the doubling map on the torus, wrapping the meridian around itself twice and the longitude around itself twice. This is a four-fold covering map. The plane, tiled by squares, is a subdivision complex for this subdivision rule, with the structure map $f:\mathbb {R} ^{2}\rightarrow R(S_{R})$ given by the standard covering map. Under subdivision, each square in the plane gets subdivided into squares of one-fourth the size.

Quasi-isometry properties
Subdivision rules can be used to study the quasi-isometry properties of certain spaces.[7] Given a subdivision rule $R$ and subdivision complex $X$, we can construct a graph called the history graph that records the action of the subdivision rule. The graph consists of the dual graphs of every stage $R^{n}(X)$, together with edges connecting each tile in $R^{n}(X)$ with its subdivisions in $R^{n+1}(X)$. The quasi-isometry properties of the history graph can be studied using subdivision rules. For instance, the history graph is quasi-isometric to hyperbolic space exactly when the subdivision rule is conformal, as described in the combinatorial Riemann mapping theorem.[7]
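As a toy illustration of iterating a subdivision rule (a sketch of binary subdivision only; the tile representation is an assumption of this example, not from the sources), the following Python snippet stores each square tile by its lower-left corner and side length, and confirms that n subdivisions of a single square produce 4^n tiles.

    def binary_subdivide(tiles):
        # One step of binary subdivision: each square splits into four quarters.
        out = []
        for (x, y, s) in tiles:
            h = s / 2
            out += [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
        return out

    tiles = [(0.0, 0.0, 1.0)]           # a single unit square
    for n in range(4):
        tiles = binary_subdivide(tiles)
    print(len(tiles))                    # 4**4 == 256 tiles after four subdivisions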
Applications
The branching nature of bronchi may be modelled by finite subdivision rules.
Islamic girih tiles in Islamic architecture are self-similar tilings that can be modeled with finite subdivision rules.[8] In 2007, Peter J. Lu of Harvard University and Professor Paul J. Steinhardt of Princeton University published a paper in the journal Science suggesting that girih tilings possessed properties consistent with self-similar fractal quasicrystalline tilings such as Penrose tilings (presentation 1974, predecessor works starting in about 1964), predating them by five centuries.[8]
Subdivision surfaces in computer graphics use subdivision rules to refine a surface to any given level of precision. These subdivision surfaces (such as the Catmull-Clark subdivision surface) take a polygon mesh (the kind used in 3D animated movies) and refine it to a mesh with more polygons by adding and shifting points according to different recursive formulas.[9] Although many points get shifted in this process, each new mesh is combinatorially a subdivision of the old mesh (meaning that for every edge and vertex of the old mesh, you can identify a corresponding edge and vertex in the new one, plus several more edges and vertices).
Subdivision rules were applied by Cannon, Floyd and Parry (2000) to the study of large-scale growth patterns of biological organisms.[6] Cannon, Floyd and Parry produced a mathematical growth model which demonstrated that some systems determined by simple finite subdivision rules can result in objects (in their example, a tree trunk) whose large-scale form oscillates wildly over time even though the local subdivision laws remain the same.[6] Cannon, Floyd and Parry also applied their model to the analysis of the growth patterns of rat tissue.[6] They suggested that the "negatively curved" (or non-euclidean) nature of microscopic growth patterns of biological organisms is one of the key reasons why large-scale organisms do not look like crystals or polyhedral shapes but in fact in many cases resemble self-similar fractals.[6] In particular they suggested that such "negatively curved" local structure is manifested in the highly folded and highly connected nature of the brain and the lung tissue.[6]

Cannon's conjecture
Cannon, Floyd, and Parry first studied finite subdivision rules in an attempt to prove the following conjecture:
Cannon's conjecture: Every Gromov hyperbolic group with a 2-sphere at infinity acts geometrically on hyperbolic 3-space.[7]
Here, a geometric action is a cocompact, properly discontinuous action by isometries. This conjecture was partially solved by Grigori Perelman in his proof[10][11][12] of the geometrization conjecture, which states (in part) that any Gromov hyperbolic group that is a 3-manifold group must act geometrically on hyperbolic 3-space. However, it still remains to show that a Gromov hyperbolic group with a 2-sphere at infinity is a 3-manifold group.
Cannon and Swenson showed[13] that a hyperbolic group with a 2-sphere at infinity has an associated subdivision rule. If this subdivision rule is conformal in a certain sense, the group will be a 3-manifold group with the geometry of hyperbolic 3-space.[7]

Combinatorial Riemann mapping theorem
Subdivision rules give a sequence of tilings of a surface, and tilings give an idea of distance, length, and area (by letting each tile have length and area 1). In the limit, the distances that come from these tilings may converge in some sense to an analytic structure on the surface.
The Combinatorial Riemann Mapping Theorem gives necessary and sufficient conditions for this to occur.[7] Its statement needs some background.
A tiling $T$ of a ring $R$ (i.e., a closed annulus) gives two invariants, $M_{\sup }(R,T)$ and $m_{\inf }(R,T)$, called approximate moduli. These are similar to the classical modulus of a ring. They are defined by the use of weight functions. A weight function $\rho $ assigns a non-negative number called a weight to each tile of $T$. Every path in $R$ can be given a length, defined to be the sum of the weights of all tiles in the path. Define the height $H(\rho )$ of $R$ under $\rho $ to be the infimum of the length of all possible paths connecting the inner boundary of $R$ to the outer boundary. The circumference $C(\rho )$ of $R$ under $\rho $ is the infimum of the length of all possible paths circling the ring (i.e. not nullhomotopic in $R$). The area $A(\rho )$ of $R$ under $\rho $ is defined to be the sum of the squares of all weights in $R$. Then define
$M_{\sup }(R,T)=\sup {\frac {H(\rho )^{2}}{A(\rho )}},\qquad m_{\inf }(R,T)=\inf {\frac {A(\rho )}{C(\rho )^{2}}}.$
Note that they are invariant under scaling of the metric.
A sequence $T_{1},T_{2},\ldots $ of tilings is conformal ($K$) if the mesh approaches 0 and:
1. For each ring $R$, the approximate moduli $M_{\sup }(R,T_{i})$ and $m_{\inf }(R,T_{i})$, for all $i$ sufficiently large, lie in a single interval of the form $[r,Kr]$; and
2. Given a point $x$ in the surface, a neighborhood $N$ of $x$, and an integer $I$, there is a ring $R$ in $N\smallsetminus \{x\}$ separating $x$ from the complement of $N$, such that for all large $i$ the approximate moduli of $R$ are all greater than $I$.[7]

Statement of theorem
If a sequence $T_{1},T_{2},\ldots $ of tilings of a surface is conformal ($K$) in the above sense, then there is a conformal structure on the surface and a constant $K'$ depending only on $K$ in which the classical moduli and approximate moduli (from $T_{i}$ for $i$ sufficiently large) of any given annulus are $K'$-comparable, meaning that they lie in a single interval $[r,K'r]$.[7]

Consequences
The Combinatorial Riemann Mapping Theorem implies that a group $G$ acts geometrically on $\mathbb {H} ^{3}$ if and only if it is Gromov hyperbolic, it has a sphere at infinity, and the natural subdivision rule on the sphere gives rise to a sequence of tilings that is conformal in the sense above. Thus, Cannon's conjecture would be true if all such subdivision rules were conformal.[13]

References
1. J. W. Cannon, W. J. Floyd, W. R. Parry. Finite subdivision rules. Conformal Geometry and Dynamics, vol. 5 (2001), pp. 153–196.
2. J. W. Cannon, W. J. Floyd, W. R. Parry. Constructing subdivision rules from rational maps. Conformal Geometry and Dynamics, vol. 11 (2007), pp. 128–136.
3. J. W. Cannon, W. J. Floyd, W. R. Parry. Lattès maps and subdivision rules. Conformal Geometry and Dynamics, vol. 14 (2010), pp. 113–140.
4. B. Rushton. Constructing subdivision rules from alternating links. Conform. Geom. Dyn. 14 (2010), 1–13.
5. Rushton, B. (2012). "A finite subdivision rule for the n-dimensional torus". Geometriae Dedicata. 167: 23–34. arXiv:1110.3310. doi:10.1007/s10711-012-9802-5. S2CID 119145306.
6. J. W. Cannon, W. Floyd and W. Parry. Crystal growth, biological cell growth and geometry. Pattern Formation in Biology, Vision and Dynamics, pp. 65–82. World Scientific, 2000. ISBN 981-02-3792-8, ISBN 978-981-02-3792-9.
7. James W. Cannon. The combinatorial Riemann mapping theorem. Acta Mathematica 173 (1994), no. 2, pp. 155–234.
8. Lu, Peter J.; Steinhardt, Paul J. (2007). "Decagonal and Quasi-crystalline Tilings in Medieval Islamic Architecture" (PDF). Science. 315 (5815): 1106–1110. Bibcode:2007Sci...315.1106L. doi:10.1126/science.1135491. PMID 17322056. S2CID 10374218. Archived from the original (PDF) on 2009-10-07. "Supporting Online Material" (PDF). Archived from the original (PDF) on 2009-03-26.
9. D. Zorin. Subdivisions on arbitrary meshes: algorithms and theory. Institute of Mathematical Sciences (Singapore) Lecture Notes Series. 2006.
10. Perelman, Grisha (11 November 2002). "The entropy formula for the Ricci flow and its geometric applications". arXiv:math.DG/0211159.
11. Perelman, Grisha (10 March 2003). "Ricci flow with surgery on three-manifolds". arXiv:math.DG/0303109.
12. Perelman, Grisha (17 July 2003). "Finite extinction time for the solutions to the Ricci flow on certain three-manifolds". arXiv:math.DG/0307245.
13. J. W. Cannon and E. L. Swenson, Recognizing constant curvature discrete groups in dimension 3. Transactions of the American Mathematical Society 350 (1998), no. 2, pp. 809–849.

External links
• Bill Floyd's research page. This page contains most of the research papers by Cannon, Floyd and Parry on subdivision rules, as well as a gallery of subdivision rules.
Subfactor
In the theory of von Neumann algebras, a subfactor of a factor $M$ is a subalgebra that is a factor and contains $1$. The theory of subfactors led to the discovery of the Jones polynomial in knot theory.

Index of a subfactor
Usually $M$ is taken to be a factor of type ${\rm {II}}_{1}$, so that it has a finite trace. In this case every Hilbert space module $H$ has a dimension $\dim _{M}(H)$ which is a non-negative real number or $+\infty $. The index $[M:N]$ of a subfactor $N$ is defined to be $\dim _{N}(L^{2}(M))$. Here $L^{2}(M)$ is the representation of $N$ obtained from the GNS construction of the trace of $M$.

Jones index theorem
This states that if $N$ is a subfactor of $M$ (both of type ${\rm {II}}_{1}$) then the index $[M:N]$ is either of the form $4\cos ^{2}(\pi /n)$ for $n=3,4,5,\ldots $, or is at least $4$. All these values occur. The first few values of $4\cos ^{2}(\pi /n)$ are $1,2,(3+{\sqrt {5}})/2=2.618\ldots ,3,3.247\ldots ,\ldots $

Basic construction
Suppose that $N$ is a subfactor of $M$, and that both are finite von Neumann algebras. The GNS construction produces a Hilbert space $L^{2}(M)$ acted on by $M$ with a cyclic vector $\Omega $. Let $e_{N}$ be the projection onto the subspace $N\Omega $. Then $M$ and $e_{N}$ generate a new von Neumann algebra $\langle M,e_{N}\rangle $ acting on $L^{2}(M)$, containing $M$ as a subfactor. The passage from the inclusion of $N$ in $M$ to the inclusion of $M$ in $\langle M,e_{N}\rangle $ is called the basic construction. If $N$ and $M$ are both factors of type ${\rm {II}}_{1}$ and $N$ has finite index in $M$ then $\langle M,e_{N}\rangle $ is also of type ${\rm {II}}_{1}$. Moreover, the inclusions have the same index: $[M:N]=[\langle M,e_{N}\rangle :M]$, and $\operatorname {tr} _{\langle M,e_{N}\rangle }(e_{N})=[M:N]^{-1}$.

Jones tower
Suppose that $N\subset M$ is an inclusion of type ${\rm {II}}_{1}$ factors of finite index. By iterating the basic construction we get a tower of inclusions
$M_{-1}\subset M_{0}\subset M_{1}\subset M_{2}\subset \cdots $
where $M_{-1}=N$ and $M_{0}=M$, and each $M_{n+1}=\langle M_{n},e_{n+1}\rangle $ is generated by the previous algebra and a projection. The union of all these algebras has a tracial state $\operatorname {tr} $ whose restriction to each $M_{n}$ is the tracial state, and so the closure of the union is another type ${\rm {II}}_{1}$ von Neumann algebra $M_{\infty }$.
The algebra $M_{\infty }$ contains a sequence of projections $e_{1},e_{2},e_{3},\ldots ,$ which satisfy the Temperley–Lieb relations at parameter $\lambda =[M:N]^{-1}$. Moreover, the algebra generated by the $e_{n}$ is a ${\rm {C}}^{\star }$-algebra in which the $e_{n}$ are self-adjoint, and such that $\operatorname {tr} (xe_{n})=\lambda \operatorname {tr} (x)$ when $x$ is in the algebra generated by $e_{1}$ up to $e_{n-1}$. Whenever these extra conditions are satisfied, the algebra is called a Temperley–Lieb–Jones algebra at parameter $\lambda $. It can be shown to be unique up to $\star $-isomorphism. It exists only when $\lambda $ takes on those special values $4\cos ^{2}(\pi /n)$ for $n=3,4,5,\ldots $, or the values larger than $4$.
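The quantized index values below 4 are easy to list numerically; this small Python check (illustrative only, not from the article) reproduces the values quoted in the Jones index theorem.

    from math import cos, pi

    # Admissible Jones index values below 4: 4*cos(pi/n)**2 for n = 3, 4, 5, ...
    values = [4 * cos(pi / n) ** 2 for n in range(3, 9)]
    print([round(v, 3) for v in values])
    # -> [1.0, 2.0, 2.618, 3.0, 3.247, 3.414]; the sequence increases toward 4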
Standard invariant
Suppose that $N\subset M$ is an inclusion of type ${\rm {II}}_{1}$ factors of finite index. Let the higher relative commutants be ${\mathcal {P}}_{n,+}=N'\cap M_{n-1}$ and ${\mathcal {P}}_{n,-}=M'\cap M_{n}$. The standard invariant of the subfactor $N\subset M$ is the following grid:
${\begin{array}{ccccccccccc}\mathbb {C} ={\mathcal {P}}_{0,+}&\subset &{\mathcal {P}}_{1,+}&\subset &{\mathcal {P}}_{2,+}&\subset &\cdots &\subset &{\mathcal {P}}_{n,+}&\subset &\cdots \\&&\cup &&\cup &&&&\cup &&\\&&\mathbb {C} ={\mathcal {P}}_{0,-}&\subset &{\mathcal {P}}_{1,-}&\subset &\cdots &\subset &{\mathcal {P}}_{n-1,-}&\subset &\cdots \end{array}}$
which is a complete invariant in the amenable case.[1] A diagrammatic axiomatization of the standard invariant is given by the notion of planar algebra.

Principal graphs
A subfactor of finite index $N\subset M$ is said to be irreducible if either of the following equivalent conditions is satisfied:
• $L^{2}(M)$ is irreducible as an $(N,M)$ bimodule;
• the relative commutant $N'\cap M$ is $\mathbb {C} $.
In this case $L^{2}(M)$ defines an $(N,M)$ bimodule $X$ as well as its conjugate $(M,N)$ bimodule $X^{\star }$. The relative tensor product, described in Jones (1983) and often called Connes fusion after a prior definition for general von Neumann algebras of Alain Connes, can be used to define new bimodules over $(N,M)$, $(M,N)$, $(M,M)$ and $(N,N)$ by decomposing the following tensor products into irreducible components:
$X\boxtimes X^{\star }\boxtimes \cdots \boxtimes X,\,\,X^{\star }\boxtimes X\boxtimes \cdots \boxtimes X^{\star },\,\,X^{\star }\boxtimes X\boxtimes \cdots \boxtimes X,\,\,X\boxtimes X^{\star }\boxtimes \cdots \boxtimes X^{\star }.$
The irreducible $(M,M)$ and $(M,N)$ bimodules arising in this way form the vertices of the principal graph, a bipartite graph. The directed edges of these graphs describe the way an irreducible bimodule decomposes when tensored with $X$ and $X^{\star }$ on the right. The dual principal graph is defined in a similar way using $(N,N)$ and $(N,M)$ bimodules.
Since any bimodule corresponds to the commuting actions of two factors, each factor is contained in the commutant of the other and therefore defines a subfactor. When the bimodule is irreducible, its dimension is defined to be the square root of the index of this subfactor. The dimension is extended additively to direct sums of irreducible bimodules. It is multiplicative with respect to Connes fusion.
The subfactor is said to have finite depth if the principal graph and its dual are finite, i.e. if only finitely many irreducible bimodules occur in these decompositions. In this case if $M$ and $N$ are hyperfinite, Sorin Popa showed that the inclusion $N\subset M$ is isomorphic to the model
$(\mathbb {C} \otimes \mathrm {End} \,X^{\star }\boxtimes X\boxtimes X^{\star }\boxtimes \cdots )^{\prime \prime }\subset (\mathrm {End} \,X\boxtimes X^{\star }\boxtimes X\boxtimes X^{\star }\boxtimes \cdots )^{\prime \prime },$
where the ${\rm {II}}_{1}$ factors are obtained from the GNS construction with respect to the canonical trace.

Knot polynomials
The algebra generated by the elements $e_{n}$ with the relations above is called the Temperley–Lieb algebra. This is a quotient of the group algebra of the braid group, so representations of the Temperley–Lieb algebra give representations of the braid group, which in turn often give invariants for knots.

References
1. Popa, Sorin (1994), "Classification of amenable subfactors of type II", Acta Mathematica, 172 (2): 163–255, doi:10.1007/BF02392646, MR 1278111
• Jones, Vaughan F.R.
(1983), "Index for subfactors", Inventiones Mathematicae, 72: 1–25, doi:10.1007/BF01389127 • Wenzl, H.G. (1988), "Hecke algebras of type An and subfactors", Invent. Math., 92 (2): 349–383, doi:10.1007/BF01404457, MR 0696688 • Jones, Vaughan F.R.; Sunder, Viakalathur Shankar (1997). Introduction to subfactors. London Mathematical Society Lecture Note Series. Vol. 234. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511566219. ISBN 0-521-58420-5. MR 1473221. • Theory of Operator Algebras III by M. Takesaki ISBN 3-540-42913-1 • Wassermann, Antony. "Operators on Hilbert space".
Derangement
In combinatorial mathematics, a derangement is a permutation of the elements of a set in which no element appears in its original position. In other words, a derangement is a permutation that has no fixed points.
The number of derangements of a set of size n is known as the subfactorial of n or the n-th derangement number or n-th de Montmort number (after Pierre Remond de Montmort). Notations for subfactorials in common use include !n, Dn, dn, or n¡.[1][2] For n > 0, the subfactorial !n equals the nearest integer to n!/e, where n! denotes the factorial of n and e is Euler's number.[3]
The problem of counting derangements was first considered by Pierre Raymond de Montmort in his Essay d'analyse sur les jeux de hazard[4] in 1708; he solved it in 1713, as did Nicholas Bernoulli at about the same time.

Example
Suppose that a professor gave a test to 4 students – A, B, C, and D – and wants to let them grade each other's tests. Of course, no student should grade their own test. How many ways could the professor hand the tests back to the students for grading, such that no student received their own test back? Out of 24 possible permutations (4!) for handing back the tests,
ABCD, ABDC, ACBD, ACDB, ADBC, ADCB, BACD, BADC, BCAD, BCDA, BDAC, BDCA, CABD, CADB, CBAD, CBDA, CDAB, CDBA, DABC, DACB, DBAC, DBCA, DCAB, DCBA,
there are only 9 derangements: BADC, BCDA, BDAC, CADB, CDAB, CDBA, DABC, DCAB, DCBA. In every other permutation of this 4-member set, at least one student gets their own test back.
Another version of the problem arises when we ask for the number of ways n letters, each addressed to a different person, can be placed in n pre-addressed envelopes so that no letter appears in the correctly addressed envelope.

Counting derangements
Counting derangements of a set amounts to the hat-check problem, in which one considers the number of ways in which n hats (call them h1 through hn) can be returned to n people (P1 through Pn) such that no hat makes it back to its owner.[5]
Each person may receive any of the n − 1 hats that is not their own. Call the hat which the person P1 receives hi and consider hi's owner: Pi receives either P1's hat, h1, or some other. Accordingly, the problem splits into two possible cases:
1. Pi receives a hat other than h1. This case is equivalent to solving the problem with n − 1 people and n − 1 hats because for each of the n − 1 people besides P1 there is exactly one hat from among the remaining n − 1 hats that they may not receive (for any Pj besides Pi, the unreceivable hat is hj, while for Pi it is h1). Another way to see this is to rename h1 to hi, where the derangement is more explicit: for any j from 2 to n, Pj cannot receive hj.
2. Pi receives h1. In this case the problem reduces to n − 2 people and n − 2 hats, because P1 received hi's hat and Pi received h1's hat, effectively putting both out of further consideration.
For each of the n − 1 hats that P1 may receive, the number of ways that P2, …, Pn may all receive hats is the sum of the counts for the two cases. This gives us the solution to the hat-check problem: stated algebraically, the number !n of derangements of an n-element set is
$!n=(n-1)\cdot ({!(n-1)}+{!(n-2)})$
for $n\geq 2$, where $!0=1$ and $!1=0$.[6]
The number of derangements of small lengths is given in the table below.
The number of derangements of an n-element set (sequence A000166 in the OEIS) for small n:

n     !n
0     1
1     0
2     1
3     2
4     9
5     44
6     265
7     1,854
8     14,833
9     133,496
10    1,334,961
11    14,684,570
12    176,214,841
13    2,290,792,932

There are various other expressions for !n, equivalent to the formula given above. These include
$!n=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}$ for $n\geq 0$
and
$!n=\left[{\frac {n!}{e}}\right]=\left\lfloor {\frac {n!}{e}}+{\frac {1}{2}}\right\rfloor $ for $n\geq 1,$
where $\left[x\right]$ is the nearest integer function and $\left\lfloor x\right\rfloor $ is the floor function.[3][6]
Other related formulas include[7]
$!n=\left\lfloor {\frac {n!+1}{e}}\right\rfloor ,\quad n\geq 1,$
$!n=\left\lfloor \left(e+e^{-1}\right)n!\right\rfloor -\lfloor en!\rfloor ,\quad n\geq 2,$
and
$!n=n!-\sum _{i=1}^{n}{n \choose i}\cdot {!(n-i)},\quad n\geq 1.$
The following recurrence also holds:[6]
$!n={\begin{cases}1&{\text{if }}n=0,\\n\cdot \left(!(n-1)\right)+(-1)^{n}&{\text{if }}n>0.\end{cases}}$

Derivation by inclusion–exclusion principle
One may derive a non-recursive formula for the number of derangements of an n-set, as well. For $1\leq k\leq n$ we define $S_{k}$ to be the set of permutations of n objects that fix the $k$-th object. Any intersection of a collection of i of these sets fixes a particular set of i objects and therefore contains $(n-i)!$ permutations. There are ${n \choose i}$ such collections, so the inclusion–exclusion principle yields
${\begin{aligned}|S_{1}\cup \dotsm \cup S_{n}|&=\sum _{i}\left|S_{i}\right|-\sum _{i<j}\left|S_{i}\cap S_{j}\right|+\sum _{i<j<k}\left|S_{i}\cap S_{j}\cap S_{k}\right|+\cdots +(-1)^{n+1}\left|S_{1}\cap \dotsm \cap S_{n}\right|\\&={n \choose 1}(n-1)!-{n \choose 2}(n-2)!+{n \choose 3}(n-3)!-\cdots +(-1)^{n+1}{n \choose n}0!\\&=\sum _{i=1}^{n}(-1)^{i+1}{n \choose i}(n-i)!\\&=n!\ \sum _{i=1}^{n}{(-1)^{i+1} \over i!},\end{aligned}}$
and since a derangement is a permutation that leaves none of the n objects fixed, this implies
$!n=n!-\left|S_{1}\cup \dotsm \cup S_{n}\right|=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}.$

Growth of number of derangements as n approaches ∞
From $!n=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}$ and $e^{x}=\sum _{i=0}^{\infty }{x^{i} \over i!}$, by substituting $\textstyle x=-1$ one immediately obtains that
$\lim _{n\to \infty }{!n \over n!}=\lim _{n\to \infty }\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}=e^{-1}\approx 0.367879\ldots .$
This is the limit of the probability that a randomly selected permutation of a large number of objects is a derangement. The probability converges to this limit extremely quickly as n increases, which is why !n is the nearest integer to n!/e. On a semi-log plot, the graph of !n lags the graph of n! by an almost constant value. More information about this calculation and the above limit may be found in the article on the statistics of random permutations.

Asymptotic expansion in terms of Bell numbers
An asymptotic expansion for the number of derangements in terms of Bell numbers is as follows:
$!n={\frac {n!}{e}}+\sum _{k=1}^{m}\left(-1\right)^{n+k-1}{\frac {B_{k}}{n^{k}}}+O\left({\frac {1}{n^{m+1}}}\right),$
where $m$ is any fixed positive integer, and $B_{k}$ denotes the $k$-th Bell number. Moreover, the constant implied by the big O-term does not exceed $B_{m+1}$.[8]
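As a quick sanity check of the recurrence and the nearest-integer formula above, here is a short Python sketch (an illustration, not part of the article):

    from math import e, factorial

    def subfactorial(n):
        # Derangement numbers via !n = (n-1)(!(n-1) + !(n-2)), with !0 = 1, !1 = 0.
        a, b = 1, 0
        if n == 0:
            return a
        for k in range(2, n + 1):
            a, b = b, (k - 1) * (a + b)
        return b

    # Check against the nearest integer to n!/e for n >= 1
    for n in range(1, 14):
        assert subfactorial(n) == round(factorial(n) / e)
    print([subfactorial(n) for n in range(8)])   # [1, 0, 1, 2, 9, 44, 265, 1854]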
Generalizations
The problème des rencontres asks how many permutations of a size-n set have exactly k fixed points.
Derangements are an example of the wider field of constrained permutations. For example, the ménage problem asks if n opposite-sex couples are seated man-woman-man-woman-... around a table, how many ways can they be seated so that nobody is seated next to his or her partner?
More formally, given sets A and S, and some sets U and V of surjections A → S, we often wish to know the number of pairs of functions (f, g) such that f is in U and g is in V, and for all a in A, f(a) ≠ g(a); in other words, where for each f and g, there exists a derangement φ of S such that f(a) = φ(g(a)).
Another generalization is the following problem: How many anagrams with no fixed letters of a given word are there? For instance, for a word made of only two different letters, say n letters A and m letters B, the answer is, of course, 1 or 0 according to whether n = m or not, for the only way to form an anagram without fixed letters is to exchange all the A with B, which is possible if and only if n = m. In the general case, for a word with n1 letters X1, n2 letters X2, ..., nr letters Xr, it turns out (after a proper use of the inclusion-exclusion formula) that the answer has the form
$\int _{0}^{\infty }P_{n_{1}}(x)P_{n_{2}}(x)\cdots P_{n_{r}}(x)e^{-x}\,dx,$
for a certain sequence of polynomials Pn, where Pn has degree n. But the above answer for the case r = 2 gives an orthogonality relation, whence the Pn's are the Laguerre polynomials (up to a sign that is easily decided).[9]
In particular, for the classical derangements, one has that
$!n={\frac {\Gamma (n+1,-1)}{e}}=\int _{0}^{\infty }(x-1)^{n}e^{-x}dx$
where $\Gamma (s,x)$ is the upper incomplete gamma function.

Computational complexity
It is NP-complete to determine whether a given permutation group (described by a given set of permutations that generate it) contains any derangements.[10]

References
1. The name "subfactorial" originates with William Allen Whitworth; see Cajori, Florian (2011), A History of Mathematical Notations: Two Volumes in One, Cosimo, Inc., p. 77, ISBN 9781616405717.
2. Ronald L. Graham, Donald E. Knuth, Oren Patashnik, Concrete Mathematics (1994), Addison–Wesley, Reading MA. ISBN 0-201-55802-5
3. Hassani, M. "Derangements and Applications." J. Integer Seq. 6, No. 03.1.2, 1–8, 2003
4. de Montmort, P. R. (1708). Essay d'analyse sur les jeux de hazard. Paris: Jacque Quillau. Seconde Edition, Revue & augmentée de plusieurs Lettres. Paris: Jacque Quillau. 1713.
5. Scoville, Richard (1966). "The Hat-Check Problem". American Mathematical Monthly. 73 (3): 262–265. doi:10.2307/2315337. JSTOR 2315337.
6. Stanley, Richard (2012). Enumerative Combinatorics, volume 1 (2 ed.). Cambridge University Press. Example 2.2.1. ISBN 978-1-107-60262-5.
7. Weisstein, Eric W. "Subfactorial". MathWorld.
8. Hassani, M. "Derangements and Alternating Sum of Permutations by Integration." J. Integer Seq. 23, Article 20.7.8, 1–9, 2020
9. Even, S.; J. Gillis (1976). "Derangements and Laguerre polynomials". Mathematical Proceedings of the Cambridge Philosophical Society. 79 (1): 135–143. Bibcode:1976MPCPS..79..135E. doi:10.1017/S0305004100052154. S2CID 122311800. Retrieved 27 December 2011.
10. Lubiw, Anna (1981), "Some NP-complete problems similar to graph isomorphism", SIAM Journal on Computing, 10 (1): 11–21, doi:10.1137/0210002, MR 0605600. Babai, László (1995), "Automorphism groups, isomorphism, reconstruction", Handbook of combinatorics, Vol. 1, 2 (PDF), Amsterdam: Elsevier, pp.
1447–1540, MR 1373683, A surprising result of Anna Lubiw asserts that the following problem is NP-complete: Does a given permutation group have a fixed-point-free element?. External links Look up derangement in Wiktionary, the free dictionary. • Baez, John (2003). "Let's get deranged!" (PDF). • Bogart, Kenneth P.; Doyle, Peter G. (1985). "Non-sexist solution of the ménage problem". • Hassani, Mehdi. "Derangements and Applications". Journal of Integer Sequences (JIS), Volume 6, Issue 1, Article 03.1.2, 2003. • Weisstein, Eric W. "Derangement". MathWorld–A Wolfram Web Resource.
Subfield of an algebra In algebra, a subfield of an algebra A over a field F is an F-subalgebra that is also a field. A maximal subfield is a subfield that is not contained in a strictly larger subfield of A. If A is a finite-dimensional central simple algebra, then a subfield E of A is called a strictly maximal subfield if $[E:F]=(\dim _{F}A)^{1/2}$. References • Richard S. Pierce. Associative algebras. Graduate texts in mathematics, Vol. 88, Springer-Verlag, 1982, ISBN 978-0-387-90693-5
Subfunctor
In category theory, a branch of mathematics, a subfunctor is a special type of functor that is an analogue of a subset.

Definition
Let C be a category, and let F be a contravariant functor from C to the category of sets Set. A contravariant functor G from C to Set is a subfunctor of F if
1. For all objects c of C, G(c) ⊆ F(c), and
2. For all arrows f: c′ → c of C, G(f) is the restriction of F(f) to G(c).
This relation is often written as G ⊆ F.
For example, let 1 be the category with a single object and a single arrow. A functor F: 1 → Set maps the unique object of 1 to some set S and the unique identity arrow of 1 to the identity function 1S on S. A subfunctor G of F maps the unique object of 1 to a subset T of S and maps the unique identity arrow to the identity function 1T on T. Notice that 1T is the restriction of 1S to T. Consequently, subfunctors of F correspond to subsets of S.

Remarks
Subfunctors in general are like global versions of subsets. For example, if one imagines the objects of some category C to be analogous to the open sets of a topological space, then a contravariant functor from C to the category of sets gives a set-valued presheaf on C, that is, it associates sets to the objects of C in a way that is compatible with the arrows of C. A subfunctor then associates a subset to each set, again in a compatible way.
The most important examples of subfunctors are subfunctors of the Hom functor. Let c be an object of the category C, and consider the functor Hom(−, c). This functor takes an object c′ of C and gives back all of the morphisms c′ → c. A subfunctor of Hom(−, c) gives back only some of the morphisms. Such a subfunctor is called a sieve, and it is usually used when defining Grothendieck topologies.

Open subfunctors
Subfunctors are also used in the construction of representable functors on the category of ringed spaces. Let F be a contravariant functor from the category of ringed spaces to the category of sets, and let G ⊆ F. Suppose that this inclusion morphism G → F is representable by open immersions, i.e., for any representable functor Hom(−, X) and any morphism Hom(−, X) → F, the fibered product G ×_F Hom(−, X) is a representable functor Hom(−, Y) and the morphism Y → X defined by the Yoneda lemma is an open immersion. Then G is called an open subfunctor of F. If F is covered by representable open subfunctors, then, under certain conditions, it can be shown that F is representable. This is a useful technique for the construction of ringed spaces. It was discovered and exploited heavily by Alexander Grothendieck, who applied it especially to the case of schemes. For a formal statement and proof, see Grothendieck, Éléments de géométrie algébrique, vol. 1, 2nd ed., chapter 0, section 4.5.
Subgroup distortion
In geometric group theory, a discipline of mathematics, subgroup distortion measures the extent to which an overgroup can reduce the complexity of a group's word problem.[1] Like much of geometric group theory, the concept is due to Misha Gromov, who introduced it in 1993.[2]
Formally, let S generate group H, and let G be an overgroup for H generated by S ∪ T. Then each generating set defines a word metric on the corresponding group; the distortion of H in G is the asymptotic equivalence class of the function
$R\mapsto {\frac {\operatorname {diam} _{H}(B_{G}(0,R)\cap H)}{\operatorname {diam} _{H}(B_{H}(0,R))}}{\text{,}}$
where BX(x, r) is the ball of radius r about center x in X and diam(S) is the diameter of S.[2]: 49 
Subgroups with constant distortion are called quasiconvex.[3]

Examples
For example, consider the infinite cyclic group ℤ = ⟨b⟩, embedded as a normal subgroup of the Baumslag–Solitar group BS(1, 2) = ⟨a, b⟩. Then $b^{2^{n}}=a^{n}ba^{-n}$ is distance $2^{n}$ from the origin in ℤ, but distance $2n+1$ from the origin in BS(1, 2). In particular, ℤ is at least exponentially distorted with base 2.[2][4]
Similarly, the same infinite cyclic group, embedded in the free abelian group on two generators ℤ2, has linear distortion; the embedding in itself as 3ℤ only produces constant distortion.[2][4]

Elementary properties
In a tower of groups K ≤ H ≤ G, the distortion of K in G is at least the distortion of H in G.
A normal abelian subgroup has distortion determined by the eigenvalues of the conjugation overgroup representation; formally, if g ∈ G acts on V ≤ G with eigenvalue λ, then V is at least exponentially distorted with base λ. For many non-normal but still abelian subgroups, the distortion of the normal core gives a strong lower bound.[1]

Known values
Every computable function with at most exponential growth can be a subgroup distortion,[5] but Lie subgroups of a nilpotent Lie group always have distortion n ↦ nr for some rational r.[6]
The denominator in the definition is always 2R; for this reason, it is often omitted.[7][8] In that case, a subgroup that is not locally finite has superadditive distortion; conversely every superadditive function (up to asymptotic equivalence) can be found this way.[8]

In cryptography
The simplification in a word problem induced by subgroup distortion suffices to construct a cryptosystem, algorithms for encoding and decoding secret messages.[4] Formally, the plaintext message is any object (such as text, images, or numbers) that can be encoded as a number n. The transmitter then encodes n as an element g ∈ H with word length n. In a public overgroup G that distorts H, the element g has a word of much smaller length, which is then transmitted to the receiver along with a number of "decoys" from G \ H, to obscure the secret subgroup H. The receiver then picks out the element of H, re-expresses the word in terms of generators of H, and recovers n.[4]
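The exponential distortion in the example above can be checked concretely in a standard 2 × 2 matrix representation of BS(1, 2) (the representation is a standard one; the check itself is an illustration added here, not from the article). Taking b unipotent and a = diag(2, 1), the relation a b a⁻¹ = b² holds, so the word aⁿ b a⁻ⁿ of length 2n + 1 equals b^(2ⁿ), whose word length in ⟨b⟩ is 2ⁿ.

    from fractions import Fraction

    def mul(A, B):
        # 2x2 matrix product with exact rational entries
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def power(A, n):
        R = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
        for _ in range(n):
            R = mul(R, A)
        return R

    b = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]
    a = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(1)]]
    a_inv = [[Fraction(1, 2), Fraction(0)], [Fraction(0), Fraction(1)]]

    n = 10
    lhs = mul(mul(power(a, n), b), power(a_inv, n))   # a^n b a^-n: 2n+1 letters
    rhs = power(b, 2 ** n)                            # b^(2^n): 2^n letters in <b>
    assert lhs == rhs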
"Cryptosystems Using Subgroup Distortion". Theoretical and Applied Informatics. 1. 29 (2): 14–24. arXiv:1610.07515. doi:10.20904/291-2014. S2CID 16899700. Although Protocol II in the same paper contains a fatal error, Scheme I is feasible; one such group/overgroup pairing is analyzed in Kahrobaei, Delaram; Keivan, Mallahi-Karai (2019). "Some applications of arithmetic groups in cryptography". Groups Complexity Cryptology. 11 (1): 25–33. arXiv:1803.11528. doi:10.1515/gcc-2019-2002. S2CID 119676551. An expository summary of both works is Werner, Nicolas (19 June 2021). Group distortion in Cryptography (PDF) (grado). Barcelona: Universitat de Barcelona. Retrieved 13 September 2022. 5. Olshanskii, A. Yu. (1997). "On subgroup distortion in finitely presented groups". Matematicheskii Sbornik. 188 (11): 51–98. Bibcode:1997SbMat.188.1617O. CiteSeerX 10.1.1.115.1717. doi:10.1070/SM1997v188n11ABEH000276. S2CID 250919942. 6. Osin, D. V. (2001). "Subgroup distortions in nilpotent groups". Communications in Algebra. 29 (12): 5439–5463. doi:10.1081/AGB-100107938. S2CID 122842195. 7. Farb, Benson (1994). "The extrinsic geometry of subgroups and the generalized word problem". Proc. London Math. Soc. 68 (3): 578. We should note that this notion of distortion differs from Gromov's definition (as defined in [18]) by a linear factor. 8. Davis, Tara C.; Olshanskii, Alexander Yu. (October 29, 2018). "Relative Subgroup Growth and Subgroup Distortion". arXiv:1212.5208v1 [math.GR].
Lattice of subgroups
In mathematics, the lattice of subgroups of a group $G$ is the lattice whose elements are the subgroups of $G$, with the partial order relation being set inclusion. In this lattice, the join of two subgroups is the subgroup generated by their union, and the meet of two subgroups is their intersection.

Example
The dihedral group Dih4 has ten subgroups, counting itself and the trivial subgroup. Five of the eight group elements generate subgroups of order two, and the other two non-identity elements both generate the same cyclic subgroup of order four. In addition, there are two subgroups of the form Z2 × Z2, generated by pairs of order-two elements. This example also shows that the lattice of all subgroups of a group is not a modular lattice in general. Indeed, this particular lattice contains the forbidden "pentagon" N5 as a sublattice.

Properties
For any subgroups A, B, and C of a group with A ≤ C (A a subgroup of C), one has AB ∩ C = A(B ∩ C); the multiplication here is the product of subgroups. This property has been called the modular property of groups (Aschbacher 2000) or (Dedekind's) modular law (Robinson 1996, Cohn 2000). Since for two normal subgroups the product is actually the smallest subgroup containing the two, the normal subgroups form a modular lattice.
The Lattice theorem establishes a Galois connection between the lattice of subgroups of a group and that of its quotients.
The Zassenhaus lemma gives an isomorphism between certain combinations of quotients and products in the lattice of subgroups.
In general, there is no restriction on the shape of the lattice of subgroups, in the sense that every lattice is isomorphic to a sublattice of the subgroup lattice of some group. Furthermore, every finite lattice is isomorphic to a sublattice of the subgroup lattice of some finite group (Schmidt 1994, p. 9).

Characteristic lattices
Subgroups with certain properties form lattices, but other properties do not.
• Normal subgroups always form a modular lattice. In fact, the essential property that guarantees that the lattice is modular is that subgroups commute with each other, i.e. that they are quasinormal subgroups.
• Nilpotent normal subgroups form a lattice, which is (part of) the content of Fitting's theorem.
• In general, for any Fitting class F, both the subnormal F-subgroups and the normal F-subgroups form lattices. This includes the above with F the class of nilpotent groups, as well as other examples such as F the class of solvable groups. A class of groups is called a Fitting class if it is closed under isomorphism, subnormal subgroups, and products of subnormal subgroups.
• Central subgroups form a lattice.
However, neither finite subgroups nor torsion subgroups form a lattice: for instance, the free product $\mathbf {Z} /2\mathbf {Z} *\mathbf {Z} /2\mathbf {Z} $ is generated by two torsion elements, but is infinite and contains elements of infinite order.
The fact that normal subgroups form a modular lattice is a particular case of a more general result, namely that in any Maltsev variety (of which groups are an example), the lattice of congruences is modular (Kearnes & Kiss 2013).

Characterizing groups by their subgroup lattices
Lattice theoretic information about the lattice of subgroups can sometimes be used to infer information about the original group, an idea that goes back to the work of Øystein Ore (1937, 1938).
Characterizing groups by their subgroup lattices

Lattice-theoretic information about the lattice of subgroups can sometimes be used to infer information about the original group, an idea that goes back to the work of Øystein Ore (1937, 1938). For instance, as Ore proved, a group is locally cyclic if and only if its lattice of subgroups is distributive. If additionally the lattice satisfies the ascending chain condition, then the group is cyclic.

The groups whose lattice of subgroups is a complemented lattice are called complemented groups (Zacher 1953), and the groups whose lattices of subgroups are modular lattices are called Iwasawa groups or modular groups (Iwasawa 1941). Lattice-theoretic characterizations of this type also exist for solvable groups and perfect groups (Suzuki 1951).

References
• Aschbacher, M. (2000). Finite Group Theory. Cambridge University Press. p. 6. ISBN 978-0-521-78675-1.
• Baer, Reinhold (1939). "The significance of the system of subgroups for the structure of the group". American Journal of Mathematics. The Johns Hopkins University Press. 61 (1): 1–44. doi:10.2307/2371383. JSTOR 2371383.
• Cohn, Paul Moritz (2000). Classic Algebra. Wiley. p. 248. ISBN 978-0-471-87731-8.
• Iwasawa, Kenkiti (1941). "Über die endlichen Gruppen und die Verbände ihrer Untergruppen". J. Fac. Sci. Imp. Univ. Tokyo. Sect. I. 4: 171–199. MR 0005721.
• Kearnes, Keith; Kiss, Emil W. (2013). The Shape of Congruence Lattices. American Mathematical Soc. p. 3. ISBN 978-0-8218-8323-5.
• Ore, Øystein (1937). "Structures and group theory. I". Duke Mathematical Journal. 3 (2): 149–174. doi:10.1215/S0012-7094-37-00311-9. MR 1545977.
• Ore, Øystein (1938). "Structures and group theory. II". Duke Mathematical Journal. 4 (2): 247–269. doi:10.1215/S0012-7094-38-00419-3. hdl:10338.dmlcz/100155. MR 1546048.
• Robinson, Derek (1996). A Course in the Theory of Groups. Springer Science & Business Media. p. 15. ISBN 978-0-387-94461-6.
• Rottlaender, Ada (1928). "Nachweis der Existenz nicht-isomorpher Gruppen von gleicher Situation der Untergruppen". Mathematische Zeitschrift. 28 (1): 641–653. doi:10.1007/BF01181188. S2CID 120596994.
• Schmidt, Roland (1994). Subgroup Lattices of Groups. Expositions in Math. Vol. 14. Walter de Gruyter. ISBN 978-3-11-011213-9. Review by Ralph Freese in Bull. AMS 33 (4): 487–492.
• Suzuki, Michio (1951). "On the lattice of subgroups of finite groups". Transactions of the American Mathematical Society. American Mathematical Society. 70 (2): 345–371. doi:10.2307/1990375. JSTOR 1990375.
• Suzuki, Michio (1956). Structure of a Group and the Structure of its Lattice of Subgroups. Berlin: Springer Verlag.
• Yakovlev, B. V. (1974). "Conditions under which a lattice is isomorphic to a lattice of subgroups of a group". Algebra and Logic. 13 (6): 400–412. doi:10.1007/BF01462952. S2CID 119943975.
• Zacher, Giovanni (1953). "Caratterizzazione dei gruppi risolubili d'ordine finito complementati". Rendiconti del Seminario Matematico della Università di Padova. 22: 113–122. ISSN 0041-8994. MR 0057878.

External links
• PlanetMath entry on lattice of subgroups
• Example: Lattice of subgroups of the symmetric group S4
Subgroup method

The subgroup method is an algorithm used in the mathematical field of group theory. It expresses a given group element as a word in a set of generators, working down a chain of subgroups. It does not always return a minimal word, but the quality of the words it returns depends on the chain of subgroups that is used. The pseudocode looks like this:

function operate(element, generator)
    <returns generator operated on element>

function subgroup(g)
    sequence := (chain of subgroups that will be used, depending on the method)
    word := []
    for subgroup in sequence
        coset_representatives := []
        <fill coset_representatives with coset representatives of (next subgroup)/subgroup>
        for operation in coset_representatives
            if operate(g, operation) is in the next subgroup then
                append operation onto word
                g := operate(g, operation)
                break
    return word
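As a concrete illustration, the following Python sketch runs the method on the cyclic group Z12 under addition mod 12. The subgroup chain, the names, and the way coset representatives are chosen are illustrative assumptions, not part of the algorithm's specification:

N = 12

def operate(element, generator):
    # The group operation: addition modulo 12.
    return (element + generator) % N

# Chain of subgroups Z12 = G0 > G1 > G2 > G3 = {0}.
chain = [
    set(range(12)),        # G0 = Z12
    set(range(0, 12, 2)),  # G1 = even residues
    {0, 4, 8},             # G2 = multiples of 4
    {0},                   # G3 = trivial subgroup
]

def subgroup_method(g):
    word = []
    for i in range(len(chain) - 1):
        next_subgroup = chain[i + 1]
        # Pick one representative per coset of G_{i+1} in G_i.
        reps, seen = [], set()
        for x in sorted(chain[i]):
            coset = frozenset(operate(x, h) for h in next_subgroup)
            if coset not in seen:
                seen.add(coset)
                reps.append(x)
        # Find the representative that moves g into the next subgroup.
        for r in reps:
            if operate(g, r) in next_subgroup:
                word.append(r)
                g = operate(g, r)
                break
    return word

print(subgroup_method(7))  # [1, 0, 4]; indeed 7 + 1 + 0 + 4 = 12 = 0 (mod 12)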
Subgroup

In group theory, a branch of mathematics, given a group G under a binary operation ∗, a subset H of G is called a subgroup of G if H also forms a group under the operation ∗. More precisely, H is a subgroup of G if the restriction of ∗ to H × H is a group operation on H. This is often denoted H ≤ G, read as "H is a subgroup of G".

The trivial subgroup of any group is the subgroup {e} consisting of just the identity element.[1] A proper subgroup of a group G is a subgroup H which is a proper subset of G (that is, H ≠ G). This is often represented notationally by H < G, read as "H is a proper subgroup of G". Some authors also exclude the trivial group from being proper (that is, H ≠ {e}).[2][3] If H is a subgroup of G, then G is sometimes called an overgroup of H.

The same definitions apply more generally when G is an arbitrary semigroup, but this article will only deal with subgroups of groups.

Subgroup tests

Suppose that G is a group, and H is a subset of G. For now, assume that the group operation of G is written multiplicatively, denoted by juxtaposition.
• Then H is a subgroup of G if and only if H is nonempty and closed under products and inverses. Closed under products means that for every a and b in H, the product ab is in H. Closed under inverses means that for every a in H, the inverse a−1 is in H. These two conditions can be combined into one, that for every a and b in H the element ab−1 is in H, but it is more natural and usually just as easy to test the two closure conditions separately.[4]
• When H is finite, the test can be simplified: H is a subgroup if and only if it is nonempty and closed under products. These conditions alone imply that every element a of H generates a finite cyclic subgroup of H, say of order n, and then the inverse of a is an−1.[4]

If the group operation is instead denoted by addition, then closed under products should be replaced by closed under addition, which is the condition that for every a and b in H, the sum a + b is in H, and closed under inverses should be edited to say that for every a in H, the inverse −a is in H. The first test is illustrated by the sketch below.
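A minimal Python sketch of the two-step test, for a finite group presented by its operation; the names is_subgroup, mul, and inv are ad hoc, not a standard API:

def is_subgroup(H, mul, inv):
    # Nonempty, closed under products, and closed under inverses.
    if not H:
        return False
    closed_products = all(mul(a, b) in H for a in H for b in H)
    closed_inverses = all(inv(a) in H for a in H)
    return closed_products and closed_inverses

# Example: subsets of Z8 under addition modulo 8.
mul = lambda a, b: (a + b) % 8
inv = lambda a: (-a) % 8
print(is_subgroup({0, 2, 4, 6}, mul, inv))  # True
print(is_subgroup({0, 2, 4}, mul, inv))     # False: 2 + 4 = 6 is missing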
Basic properties of subgroups

• The identity of a subgroup is the identity of the group: if G is a group with identity eG, and H is a subgroup of G with identity eH, then eH = eG.
• The inverse of an element in a subgroup is the inverse of the element in the group: if H is a subgroup of a group G, and a and b are elements of H such that ab = ba = eH, then b is also the inverse of a in G, since ab = ba = eG.
• If H is a subgroup of G, then the inclusion map H → G sending each element a of H to itself is a homomorphism.
• The intersection of subgroups A and B of G is again a subgroup of G.[5] For example, the intersection of the x-axis and y-axis in R2 under addition is the trivial subgroup. More generally, the intersection of an arbitrary collection of subgroups of G is a subgroup of G.
• The union of subgroups A and B is a subgroup if and only if A ⊆ B or B ⊆ A. A non-example: 2Z ∪ 3Z is not a subgroup of Z, because 2 and 3 are elements of this subset whose sum, 5, is not in the subset. Similarly, the union of the x-axis and the y-axis in R2 is not a subgroup of R2.
• If S is a subset of G, then there exists a smallest subgroup containing S, namely the intersection of all subgroups containing S; it is denoted by ⟨S⟩ and is called the subgroup generated by S. An element of G is in ⟨S⟩ if and only if it is a finite product of elements of S and their inverses, possibly repeated.[6]
• Every element a of a group G generates a cyclic subgroup ⟨a⟩. If ⟨a⟩ is isomorphic to Z/nZ (the integers mod n) for some positive integer n, then n is the smallest positive integer for which an = e, and n is called the order of a. If ⟨a⟩ is isomorphic to Z, then a is said to have infinite order.
• The subgroups of any given group form a complete lattice under inclusion, called the lattice of subgroups. (While the infimum here is the usual set-theoretic intersection, the supremum of a set of subgroups is the subgroup generated by the set-theoretic union of the subgroups, not the set-theoretic union itself.) If e is the identity of G, then the trivial group {e} is the minimum subgroup of G, while the maximum subgroup is the group G itself.

Cosets and Lagrange's theorem

Main articles: Coset and Lagrange's theorem (group theory)

Given a subgroup H and some a in G, we define the left coset aH = {ah : h in H}. Because a is invertible, the map φ : H → aH given by φ(h) = ah is a bijection. Furthermore, every element of G is contained in precisely one left coset of H; the left cosets are the equivalence classes corresponding to the equivalence relation a1 ~ a2 if and only if a1−1a2 is in H. The number of left cosets of H is called the index of H in G and is denoted by [G : H]. Lagrange's theorem states that for a finite group G and a subgroup H,

$[G:H]={|G| \over |H|}$

where |G| and |H| denote the orders of G and H, respectively. In particular, the order of every subgroup of G (and the order of every element of G) must be a divisor of |G|.[7][8]

Right cosets are defined analogously: Ha = {ha : h in H}. They are also the equivalence classes for a suitable equivalence relation and their number is equal to [G : H]. If aH = Ha for every a in G, then H is said to be a normal subgroup. Every subgroup of index 2 is normal: the left cosets, and also the right cosets, are simply the subgroup and its complement. More generally, if p is the lowest prime dividing the order of a finite group G, then any subgroup of index p (if such exists) is normal.
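These definitions can be checked directly on a small example. The following Python sketch (illustrative names only) lists the left cosets of H = {0, 4, 8} in Z12 and confirms Lagrange's theorem:

G = set(range(12))
H = {0, 4, 8}
mul = lambda a, b: (a + b) % 12

# Each left coset aH = {a * h : h in H}; distinct cosets partition G.
cosets = {frozenset(mul(a, h) for h in H) for a in G}
print(len(cosets))                            # the index [G : H] = 4
print(len(G) // len(H))                       # |G| / |H| = 4, matching Lagrange's theorem
print(all(len(c) == len(H) for c in cosets))  # True: every coset has |H| elements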
Example: Subgroups of Z8

Let G be the cyclic group Z8 whose elements are

$G=\left\{0,4,2,6,1,5,3,7\right\}$

and whose group operation is addition modulo 8. Its Cayley table is

+ | 0 4 2 6 1 5 3 7
--+----------------
0 | 0 4 2 6 1 5 3 7
4 | 4 0 6 2 5 1 7 3
2 | 2 6 4 0 3 7 5 1
6 | 6 2 0 4 7 3 1 5
1 | 1 5 3 7 2 6 4 0
5 | 5 1 7 3 6 2 0 4
3 | 3 7 5 1 4 0 6 2
7 | 7 3 1 5 0 4 2 6

This group has two nontrivial subgroups: J = {0, 4} and H = {0, 4, 2, 6}, where J is also a subgroup of H. The Cayley table for H is the top-left quadrant of the Cayley table for G; the Cayley table for J is the top-left quadrant of the Cayley table for H. The group G is cyclic, and so are its subgroups. In general, subgroups of cyclic groups are also cyclic.[9]

Example: Subgroups of S4

Let S4 be the symmetric group on 4 elements. It has 30 subgroups in all, listed below according to the number of elements, in decreasing order.

24 elements
The whole group S4 is a subgroup of S4, of order 24. (The original figures show its Cayley table, all 30 subgroups, simplified Hasse diagrams of the lattice of subgroups of S4, and the subgroups of 12, 8, 6, 4, and 3 elements.)

2 elements
Each element s of order 2 in S4 generates a subgroup {1, s} of order 2. There are 9 such elements: the ${\binom {4}{2}}=6$ transpositions (2-cycles) and the three double transpositions (12)(34), (13)(24), (14)(23).

1 element
The trivial subgroup is the unique subgroup of order 1 in S4.

Other examples

• The even integers form a subgroup 2Z of the integer ring Z: the sum of two even integers is even, and the negative of an even integer is even.
• An ideal in a ring $R$ is a subgroup of the additive group of $R$.
• A linear subspace of a vector space is a subgroup of the additive group of vectors.
• In an abelian group, the elements of finite order form a subgroup called the torsion subgroup.

See also
• Cartan subgroup
• Fitting subgroup
• Fixed-point subgroup
• Fully normalized subgroup
• Stable subgroup

Notes
1. Gallian 2013, p. 61.
2. Hungerford 1974, p. 32.
3. Artin 2011, p. 43.
4. Kurzweil & Stellmacher 1998, p. 4.
5. Jacobson 2009, p. 41.
6. Ash 2002.
7. See a didactic proof in this video.
8. Dummit & Foote 2004, p. 90.
9. Gallian 2013, p. 81.

References
• Jacobson, Nathan (2009), Basic Algebra, vol. 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1.
• Hungerford, Thomas (1974), Algebra (1st ed.), Springer-Verlag, ISBN 9780387905181.
• Artin, Michael (2011), Algebra (2nd ed.), Prentice Hall, ISBN 9780132413770.
• Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). Hoboken, NJ: Wiley. ISBN 9780471452348. OCLC 248917264.
• Gallian, Joseph A. (2013). Contemporary Abstract Algebra (8th ed.). Boston, MA: Brooks/Cole Cengage Learning. ISBN 978-1-133-59970-8. OCLC 807255720.
• Kurzweil, Hans; Stellmacher, Bernd (1998). Theorie der endlichen Gruppen. Springer-Lehrbuch. doi:10.1007/978-3-642-58816-7.
• Ash, Robert B. (2002). Abstract Algebra: The Basic Graduate Year. Department of Mathematics, University of Illinois.
Subgroups of cyclic groups

In abstract algebra, every subgroup of a cyclic group is cyclic. Moreover, for a finite cyclic group of order n, every subgroup's order is a divisor of n, and there is exactly one subgroup for each divisor.[1][2] This result has been called the fundamental theorem of cyclic groups.[3][4]

Finite cyclic groups

For every finite group G of order n, the following statements are equivalent:
• G is cyclic.
• For every divisor d of n, G has at most one subgroup of order d.

If either (and thus both) are true, it follows that there exists exactly one subgroup of order d, for any divisor d of n. This statement is known by various names such as characterization by subgroups.[5][6][7] (See also cyclic group for other characterizations.)

There exist finite groups other than cyclic groups with the property that all proper subgroups are cyclic; the Klein group is an example. However, the Klein group has more than one subgroup of order 2, so it does not meet the conditions of the characterization.

The infinite cyclic group

The infinite cyclic group is isomorphic to the additive group Z of the integers. There is one subgroup dZ for each integer d (consisting of the multiples of d), and with the exception of the trivial group (generated by d = 0) every such subgroup is itself an infinite cyclic group. Because the infinite cyclic group is a free group on one generator (and the trivial group is a free group on no generators), this result can be seen as a special case of the Nielsen–Schreier theorem that every subgroup of a free group is itself free.[8] The fundamental theorem for finite cyclic groups can be established from the same theorem for the infinite cyclic groups, by viewing each finite cyclic group as a quotient group of the infinite cyclic group.[8]

Lattice of subgroups

In both the finite and the infinite case, the lattice of subgroups of a cyclic group is isomorphic to the dual of a divisibility lattice. In the finite case, the lattice of subgroups of a cyclic group of order n is isomorphic to the dual of the lattice of divisors of n, with a subgroup of order n/d for each divisor d. The subgroup of order n/d is a subgroup of the subgroup of order n/e if and only if e is a divisor of d. The lattice of subgroups of the infinite cyclic group can be described in the same way, as the dual of the divisibility lattice of all positive integers. If the infinite cyclic group is represented as the additive group on the integers, then the subgroup generated by d is a subgroup of the subgroup generated by e if and only if e is a divisor of d.[8]

Divisibility lattices are distributive lattices, and therefore so are the lattices of subgroups of cyclic groups. This provides another alternative characterization of the finite cyclic groups: they are exactly the finite groups whose lattices of subgroups are distributive. More generally, a finitely generated group is cyclic if and only if its lattice of subgroups is distributive, and an arbitrary group is locally cyclic if and only if its lattice of subgroups is distributive.[9] The additive group of the rational numbers provides an example of a group that is locally cyclic, and that has a distributive lattice of subgroups, but that is not itself cyclic.
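A brute-force check of the fundamental theorem for n = 12, as a Python sketch using only the standard library; since every subgroup of Z_n is cyclic, enumerating the subgroups ⟨a⟩ finds them all:

from math import gcd

n = 12
# <a> in Z_n consists of the multiples of gcd(a, n); gcd(0, n) = n gives {0}.
subgroups = {frozenset(range(0, n, gcd(a, n))) for a in range(n)}
divisors = [d for d in range(1, n + 1) if n % d == 0]

print(len(subgroups) == len(divisors))    # True: exactly one subgroup per divisor
print(sorted(len(H) for H in subgroups))  # [1, 2, 3, 4, 6, 12]: one of each divisor order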
References
1. Hall, Marshall (1976), The Theory of Groups, American Mathematical Society, Theorem 3.1.1, pp. 35–36, ISBN 9780821819678.
2. Vinberg, Ėrnest Borisovich (2003), A Course in Algebra, Graduate Studies in Mathematics, vol. 56, American Mathematical Society, Theorem 4.50, pp. 152–153, ISBN 9780821834138.
3. Gallian, Joseph A. (2010), "Fundamental Theorem of Cyclic Groups", Contemporary Abstract Algebra, p. 77, ISBN 9780547165097.
4. Nicholson, W. Keith (1999), "Cyclic Groups and the Order of an Element", Introduction to Abstract Algebra, Theorem 9, Fundamental Theorem of Finite Cyclic Groups, ISBN 0471331090.
5. Roman, Steven (2011). Fundamentals of Group Theory: An Advanced Approach. Springer. p. 44. ISBN 978-0-8176-8300-9.
6. Balakrishnan, V. K. (1994). Schaum's Outline of Combinatorics. McGraw-Hill Prof Med/Tech. p. 155. ISBN 978-0-07-003575-1.
7. Stroppel, Markus (2006). Locally Compact Groups. European Mathematical Society. p. 64. ISBN 978-3-03719-016-6.
8. Aluffi, Paolo (2009), "6.4 Example: Subgroups of Cyclic Groups", Algebra, Chapter 0, Graduate Studies in Mathematics, vol. 104, American Mathematical Society, pp. 82–84, ISBN 9780821847817.
9. Ore, Øystein (1938), "Structures and group theory. II", Duke Mathematical Journal, 4 (2): 247–269, doi:10.1215/S0012-7094-38-00419-3, hdl:10338.dmlcz/100155, MR 1546048.
Subhamiltonian graph

In graph theory and graph drawing, a subhamiltonian graph is a subgraph of a planar Hamiltonian graph.[1][2]

Definition

A graph G is subhamiltonian if G is a subgraph of another graph aug(G) on the same vertex set, such that aug(G) is planar and contains a Hamiltonian cycle. For this to be true, G itself must be planar, and additionally it must be possible to add edges to G, preserving planarity, in order to create a cycle in the augmented graph that passes through each vertex exactly once. The graph aug(G) is called a Hamiltonian augmentation of G.[2]

It would be equivalent to define G to be subhamiltonian if G is a subgraph of a Hamiltonian planar graph, without requiring this larger graph to have the same vertex set. That is, for this alternative definition, it should be possible to add both vertices and edges to G to create a Hamiltonian cycle. However, if a graph can be made Hamiltonian by the addition of vertices and edges it can also be made Hamiltonian by the addition of edges alone, so this extra freedom does not change the definition.[3]

In a subhamiltonian graph, a subhamiltonian cycle is a cyclic sequence of vertices such that adding an edge between each consecutive pair of vertices in the sequence preserves the planarity of the graph. A graph is subhamiltonian if and only if it has a subhamiltonian cycle.[4]

History and applications

The class of subhamiltonian graphs (but not this name for them) was introduced by Bernhart & Kainen (1979), who proved that these are exactly the graphs with two-page book embeddings.[5] Subhamiltonian graphs and Hamiltonian augmentations have also been applied in graph drawing to problems involving embedding graphs onto universal point sets, simultaneous embedding of multiple graphs, and layered graph drawing.[2]

Related graph classes

Some classes of planar graphs are necessarily Hamiltonian, and therefore also subhamiltonian; these include the 4-connected planar graphs[6] and the Halin graphs.[7] Every planar graph with maximum degree at most four is subhamiltonian,[4] as is every planar graph with no separating triangles.[8] If the edges of an arbitrary planar graph are subdivided into paths of length two, the resulting subdivided graph is always subhamiltonian.[2]
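Testing whether a given cyclic vertex order is a subhamiltonian cycle reduces to a planarity test on the augmented graph. A Python sketch, assuming the networkx library (the function name here is ad hoc):

import networkx as nx

def is_subhamiltonian_cycle(G, order):
    # Add the cycle edges for the proposed vertex order, then test planarity.
    aug = G.copy()
    n = len(order)
    for i in range(n):
        aug.add_edge(order[i], order[(i + 1) % n])
    planar, _ = nx.check_planarity(aug)
    return planar

# Example: the path 0-1-2-3 with the natural order; the augmentation is
# the planar Hamiltonian 4-cycle, so the order is a subhamiltonian cycle.
print(is_subhamiltonian_cycle(nx.path_graph(4), [0, 1, 2, 3]))  # True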
(1979), "The book thickness of a graph", Journal of Combinatorial Theory, Series B, 27 (3): 320–331, doi:10.1016/0095-8956(79)90021-2. 6. Nishizeki, Takao; Chiba, Norishige (2008), "Chapter 10. Hamiltonian Cycles", Planar Graphs: Theory and Algorithms, Dover Books on Mathematics, Courier Dover Publications, pp. 171–184, ISBN 9780486466712. 7. Cornuéjols, G.; Naddef, D.; Pulleyblank, W. R. (1983), "Halin graphs and the travelling salesman problem", Mathematical Programming, 26 (3): 287–294, doi:10.1007/BF02591867, S2CID 26278382. 8. Kainen, Paul C.; Overbay, Shannon (2007), "Extension of a theorem of Whitney", Applied Mathematics Letters, 20 (7): 835–837, doi:10.1016/j.aml.2006.08.019, MR 2314718.
Subhash Khot

Subhash Khot FRS (born 10 June 1978 in Ichalkaranji, Maharashtra, India) is an Indian-American mathematician and theoretical computer scientist who is the Julius Silver Professor of Computer Science in the Courant Institute of Mathematical Sciences at New York University. Khot has contributed to the field of computational complexity, and is best known for his unique games conjecture.[1] He received the 2014 Rolf Nevanlinna Prize from the International Mathematical Union, received the MacArthur Fellowship in 2016,[2] and was elected a Fellow of the Royal Society in 2017.[3]

Education

Khot stood first in the IIT entrance examination and obtained his bachelor's degree in computer science from the Indian Institute of Technology Bombay in 1999. He received his doctorate degree in computer science from Princeton University in 2003 under the supervision of Sanjeev Arora. His doctoral dissertation was titled "New Techniques for Probabilistically Checkable Proofs and Inapproximability Results."[4]

Honours and awards

Khot is a two-time silver medallist representing India at the International Mathematical Olympiad (1994 and 1995).[5][6] He has been awarded the Microsoft Research New Faculty Fellowship Award (2005),[7] the Alan T. Waterman Award (2010), the Rolf Nevanlinna Prize for his work on the Unique Games Conjecture (2014), and the MacArthur Fellowship (2016).[8] He was elected a Fellow of the Royal Society in 2017.[9]

References
1. Khot, Subhash (2002), "On the power of unique 2-prover 1-round games", Proceedings of the 17th Annual IEEE Conference on Computational Complexity, p. 25, CiteSeerX 10.1.1.133.5651, doi:10.1109/CCC.2002.1004334, ISBN 978-0-7695-1468-0, S2CID 32966635.
2. "Subhash Khot - MacArthur Foundation".
3. "Subhash Khot". Royal Society. Archived from the original on 23 May 2017. Retrieved 27 May 2017.
4. "ACM Doctoral Dissertation Award 2003". Archived from the original on 3 November 2014. Retrieved 13 September 2014.
5. Subhash Khot's results at International Mathematical Olympiad.
6. Shirali, S. A. (2006), "The Sierpinski problem", Resonance, 11 (2): 78–87, doi:10.1007/BF02837277, S2CID 121269449.
7. Microsoft Faculty Fellowship Recipients 2005.
8. "MacArthur Fellows Program". Archived from the original on 2 April 2012.
9. "Subhash Khot". Royal Society. Archived from the original on 23 May 2017. Retrieved 27 May 2017.
Sublime number

In number theory, a sublime number is a positive integer which has a perfect number of positive factors (including itself), and whose positive factors add up to another perfect number.[1]

The number 12, for example, is a sublime number. It has a perfect number of positive factors (6): 1, 2, 3, 4, 6, and 12, and the sum of these is again a perfect number: 1 + 2 + 3 + 4 + 6 + 12 = 28.

There are only two known sublime numbers: 12 and $2^{126}(2^{61}-1)(2^{31}-1)(2^{19}-1)(2^{7}-1)(2^{5}-1)(2^{3}-1)$ (sequence A081357 in the OEIS).[2] The second of these has 76 decimal digits: 6,086,555,670,238,378,989,670,371,734,243,169,622,657,830,773,351,885,970,528,324,860,512,791,691,264.
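The definition is easy to verify by brute force for small numbers; a Python sketch (far too slow for the 76-digit example):

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_perfect(n):
    # A perfect number equals the sum of its proper divisors,
    # i.e. all divisors including n itself sum to 2n.
    return sum(divisors(n)) == 2 * n

def is_sublime(n):
    divs = divisors(n)
    return is_perfect(len(divs)) and is_perfect(sum(divs))

print(is_sublime(12))  # True: 6 divisors summing to 28, and 6 and 28 are perfect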
References
1. MathPages article, "Sublime Numbers".
2. Pickover, Clifford A., Wonders of Numbers: Adventures in Mathematics, Mind and Meaning. New York: Oxford University Press (2003): 215.
Golden triangle (mathematics)

A golden triangle, also called a sublime triangle,[1] is an isosceles triangle in which the duplicated side is in the golden ratio $\varphi $ to the base side:

${a \over b}=\varphi ={1+{\sqrt {5}} \over 2}\approx 1.618~034~.$

Angles

• The vertex angle is:[2] $\theta =2\arcsin {b \over 2a}=2\arcsin {1 \over 2\varphi }=2\arcsin {{{\sqrt {5}}-1} \over 4}={\pi \over 5}~{\text{rad}}=36^{\circ }.$ Hence the golden triangle is an acute (isosceles) triangle.
• Since the angles of a triangle sum to $\pi $ radians, each of the base angles (CBX and CXB) is: $\beta ={{\pi -{\pi \over 5}} \over 2}~{\text{rad}}={2\pi \over 5}~{\text{rad}}=72^{\circ }.$[1] Note: $\beta =\arccos \left({\frac {{\sqrt {5}}-1}{4}}\right)\,{\text{rad}}={2\pi \over 5}~{\text{rad}}=72^{\circ }.$
• The golden triangle is uniquely identified as the only triangle to have its three angles in the ratio 1 : 2 : 2 (36°, 72°, 72°).[3]

(These angle values are checked numerically in the sketch below.)

In other geometric figures

• Golden triangles can be found in the spikes of regular pentagrams.
• Golden triangles can also be found in a regular decagon, an equiangular and equilateral ten-sided polygon, by connecting any two adjacent vertices to the center. This is because: 180(10−2)/10 = 144° is the interior angle, and bisecting it through the vertex to the center: 144/2 = 72°.[1]
• Also, golden triangles are found in the nets of several stellations of dodecahedrons and icosahedrons.

Logarithmic spiral

The golden triangle is used to form some points of a logarithmic spiral. By bisecting one of the base angles, a new point is created that in turn makes another golden triangle.[4] The bisection process can be continued indefinitely, creating an infinite number of golden triangles. A logarithmic spiral can be drawn through the vertices. This spiral is also known as an equiangular spiral, a term coined by René Descartes. "If a straight line is drawn from the pole to any point on the curve, it cuts the curve at precisely the same angle," hence equiangular.[5] This spiral is different from the golden spiral: the golden spiral grows by a factor of the golden ratio in each quarter-turn, whereas the spiral through these golden triangles takes an angle of 108° to grow by the same factor.[6]
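A quick numeric check of the angle formulas in the Angles section above, in Python with only the standard library:

from math import asin, acos, degrees, sqrt

phi = (1 + sqrt(5)) / 2
vertex = degrees(2 * asin(1 / (2 * phi)))  # 36.0, the vertex angle
base = degrees(acos((sqrt(5) - 1) / 4))    # 72.0; note (sqrt(5) - 1)/4 = 1/(2*phi)
print(vertex, base, vertex + 2 * base)     # 36.0 72.0 180.0, angles in ratio 1 : 2 : 2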
Golden gnomon

Closely related to the golden triangle is the golden gnomon, which is the isosceles triangle in which the ratio of the equal side lengths to the base length is the reciprocal ${\tfrac {1}{\varphi }}$ of the golden ratio $\varphi $: "The golden triangle has a ratio of base length to side length equal to the golden section φ, whereas the golden gnomon has the ratio of side length to base length equal to the golden section φ."[7]

${a' \over b'}={1 \over \varphi }={{{\sqrt {5}}-1} \over 2}\approx 0.618034.$

Angles

(The distances AX and CX are both a′ = a = φ, and the distance AC is b′ = φ², as seen in the figure.)
• The apex angle AXC is: $\theta '=2\arcsin {b' \over {2a'}}=2\arcsin {{\varphi ^{2}} \over {2\varphi }}=2\arcsin {{1+{\sqrt {5}}} \over 4}={3\pi \over 5}~{\text{rad}}=108^{\circ }.$ Hence the golden gnomon is an obtuse (isosceles) triangle. Note: $\theta '=\arccos \left({\frac {1-{\sqrt {5}}}{4}}\right)\,{\text{rad}}={3\pi \over 5}~{\text{rad}}=108^{\circ }.$
• Since the angles of the triangle AXC sum to $\pi $ radians, each of the base angles CAX and ACX is: $\beta '=\theta ={\pi -{3\pi \over 5} \over 2}~{\text{rad}}={\pi \over 5}~{\text{rad}}=36^{\circ }.$ Note: $\beta '=\theta =\arccos \left({\frac {1+{\sqrt {5}}}{4}}\right)\,{\text{rad}}={\pi \over 5}~{\text{rad}}=36^{\circ }.$
• The golden gnomon is uniquely identified as a triangle having its three angles in the ratio 1 : 1 : 3 (36°, 36°, 108°). Its base angles are 36° each, which is the same as the apex of the golden triangle.

Bisections

• By bisecting one of its base angles, a golden triangle can be subdivided into a golden triangle and a golden gnomon.
• By trisecting its apex angle, a golden gnomon can be subdivided into a golden triangle and a golden gnomon.
• A golden gnomon and a golden triangle with their equal sides matching each other in length are also referred to as the obtuse and acute Robinson triangles.[3]

Tilings

• A golden triangle and two golden gnomons tile a regular pentagon.[8]
• These isosceles triangles can be used to produce Penrose tilings. Penrose tiles are made from kites and darts. A kite is made from two golden triangles, and a dart is made from two gnomons.

See also
• Golden rectangle
• Golden rhombus
• Kepler triangle
• Kimberling's golden triangle
• Lute of Pythagoras
• Pentagram
• Golden triangle (composition)

References
1. Elam, Kimberly (2001). Geometry of Design. New York: Princeton Architectural Press. ISBN 1-56898-249-6.
2. Weisstein, Eric W. "Golden Triangle". mathworld.wolfram.com. Retrieved 2019-12-26.
3. Tilings Encyclopedia. 1970. Archived from the original on 2009-05-24.
4. Huntley, H. E. (1970). The Divine Proportion: A Study in Mathematical Beauty. New York: Dover Publications Inc. ISBN 0-486-22254-3.
5. Livio, Mario (2002). The Golden Ratio: The Story of Phi, The World's Most Astonishing Number. New York: Broadway Books. ISBN 0-7679-0815-5.
6. Loeb, Arthur L.; Varney, William (March 1992). "Does the golden spiral exist, and if not, where is its center?". In Hargittai, István; Pickover, Clifford A. (eds.). Spiral Symmetry. World Scientific. pp. 47–61. doi:10.1142/9789814343084_0002.
7. Loeb, Arthur (1992). Concepts and Images: Visual Mathematics. Boston: Birkhäuser Boston. p. 180. ISBN 0-8176-3620-X.
8. Weisstein, Eric W. "Golden Gnomon". mathworld.wolfram.com. Retrieved 2019-12-26.

External links
• Weisstein, Eric W. "Golden triangle". MathWorld.
• Weisstein, Eric W. "Golden gnomon". MathWorld.
• Robinson triangles at Tilings Encyclopedia.
• Golden triangle according to Euclid.
• The extraordinary reciprocity of golden triangles at Tartapelago by Giorgio Pietrocola.
Sublinear function

In linear algebra, a sublinear function (or functional, as is more often used in functional analysis), also called a quasi-seminorm or a Banach functional, on a vector space $X$ is a real-valued function with only some of the properties of a seminorm. Unlike seminorms, a sublinear function does not have to be nonnegative-valued and also does not have to be absolutely homogeneous. Seminorms are themselves abstractions of the more well known notion of norms, where a seminorm has all the defining properties of a norm except that it is not required to map non-zero vectors to non-zero values.

In functional analysis the name Banach functional is sometimes used, reflecting that they are most commonly used when applying a general formulation of the Hahn–Banach theorem. The notion of a sublinear function was introduced by Stefan Banach when he proved his version of the Hahn–Banach theorem.[1]

There is also a different notion in computer science, described below, that also goes by the name "sublinear function."

Definitions

Let $X$ be a vector space over a field $\mathbb {K} ,$ where $\mathbb {K} $ is either the real numbers $\mathbb {R} $ or complex numbers $\mathbb {C} .$ A real-valued function $p:X\to \mathbb {R} $ on $X$ is called a sublinear function (or a sublinear functional if $\mathbb {K} =\mathbb {R} $), and also sometimes called a quasi-seminorm or a Banach functional, if it has these two properties:[1]
1. Positive homogeneity/Nonnegative homogeneity:[2] $p(rx)=rp(x)$ for all real $r\geq 0$ and all $x\in X.$ This condition holds if and only if $p(rx)\leq rp(x)$ for all positive real $r>0$ and all $x\in X.$
2. Subadditivity/Triangle inequality:[2] $p(x+y)\leq p(x)+p(y)$ for all $x,y\in X.$ This subadditivity condition requires $p$ to be real-valued.

A function $p:X\to \mathbb {R} $ is called positive[3] or nonnegative if $p(x)\geq 0$ for all $x\in X,$ although some authors[4] define positive to instead mean that $p(x)\neq 0$ whenever $x\neq 0;$ these definitions are not equivalent. It is a symmetric function if $p(-x)=p(x)$ for all $x\in X.$ Every subadditive symmetric function is necessarily nonnegative.[proof 1] A sublinear function on a real vector space is symmetric if and only if it is a seminorm. A sublinear function on a real or complex vector space is a seminorm if and only if it is a balanced function or, equivalently, if and only if $p(ux)\leq p(x)$ for every unit length scalar $u$ (satisfying $|u|=1$) and every $x\in X.$

The set of all sublinear functions on $X,$ denoted by $X^{\#},$ can be partially ordered by declaring $p\leq q$ if and only if $p(x)\leq q(x)$ for all $x\in X.$ A sublinear function is called minimal if it is a minimal element of $X^{\#}$ under this order. A sublinear function is minimal if and only if it is a real linear functional.[1]

Examples and sufficient conditions

Every norm, seminorm, and real linear functional is a sublinear function.
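As a simple numeric spot check (random sampling, not a proof), the following Python sketch tests the two defining properties for the norm p(x) = |x| on the real line:

import random

random.seed(0)
p = abs
ok = True
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    r = random.uniform(0, 5)
    ok &= abs(p(r * x) - r * p(x)) < 1e-9  # nonnegative homogeneity: p(rx) = r p(x)
    ok &= p(x + y) <= p(x) + p(y) + 1e-9   # subadditivity: p(x + y) <= p(x) + p(y)
print(ok)  # True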
The identity function $\mathbb {R} \to \mathbb {R} $ on $X:=\mathbb {R} $ is an example of a sublinear function (in fact, it is even a linear functional) that is neither positive nor a seminorm; the same is true of this map's negation $x\mapsto -x.$[5] More generally, for any real $a\leq b,$ the map ${\begin{alignedat}{4}S_{a,b}:\;&&\mathbb {R} &&\;\to \;&\mathbb {R} \\[0.3ex]&&x&&\;\mapsto \;&{\begin{cases}ax&{\text{ if }}x\leq 0\\bx&{\text{ if }}x\geq 0\\\end{cases}}\\\end{alignedat}}$ is a sublinear function on $X:=\mathbb {R} $ and moreover, every sublinear function $p:\mathbb {R} \to \mathbb {R} $ is of this form; specifically, if $a:=-p(-1)$ and $b:=p(1)$ then $a\leq b$ and $p=S_{a,b}.$

If $p$ and $q$ are sublinear functions on a real vector space $X$ then so is the map $x\mapsto \max\{p(x),q(x)\}.$ More generally, if ${\mathcal {P}}$ is any non-empty collection of sublinear functionals on a real vector space $X$ and if for all $x\in X,$ $q(x):=\sup\{p(x):p\in {\mathcal {P}}\}$ is finite, then $q$ is a sublinear functional on $X.$[5]

A function $p:X\to \mathbb {R} $ is sublinear if and only if it is subadditive, convex, and satisfies $p(0)\leq 0.$

Properties

Every sublinear function is a convex function: for $0\leq t\leq 1,$ ${\begin{alignedat}{3}p(tx+(1-t)y)&\leq p(tx)+p((1-t)y)&&\quad {\text{ subadditivity}}\\&=tp(x)+(1-t)p(y)&&\quad {\text{ nonnegative homogeneity}}\\\end{alignedat}}$

If $p:X\to \mathbb {R} $ is a sublinear function on a vector space $X$ then[proof 2][3] $p(0)~=~0~\leq ~p(x)+p(-x)$ for every $x\in X,$ which implies that at least one of $p(x)$ and $p(-x)$ must be nonnegative; that is, for every $x\in X,$[3] $0~\leq ~\max\{p(x),p(-x)\}.$ Moreover, when $p:X\to \mathbb {R} $ is a sublinear function on a real vector space then the map $q:X\to \mathbb {R} $ defined by $q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\max\{p(x),p(-x)\}$ is a seminorm.[3]

Subadditivity of $p:X\to \mathbb {R} $ guarantees that for all vectors $x,y\in X,$[1][proof 3] $p(x)-p(y)~\leq ~p(x-y)$ and $-p(x)~\leq ~p(-x),$ so if $p$ is also symmetric then the reverse triangle inequality will hold for all vectors $x,y\in X$: $|p(x)-p(y)|~\leq ~p(x-y).$

Defining $\ker p~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~p^{-1}(0),$ then subadditivity also guarantees that for all $x\in X,$ the value of $p$ on the set $x+(\ker p\cap -\ker p)=\{x+k:p(k)=0=p(-k)\}$ is constant and equal to $p(x).$[proof 4] In particular, if $\ker p=p^{-1}(0)$ is a vector subspace of $X$ then $-\ker p=\ker p$ and the assignment $x+\ker p\mapsto p(x),$ which will be denoted by ${\hat {p}},$ is a well-defined real-valued sublinear function on the quotient space $X\,/\,\ker p$ that satisfies ${\hat {p}}^{-1}(0)=\ker p.$ If $p$ is a seminorm then ${\hat {p}}$ is just the usual canonical norm on the quotient space $X\,/\,\ker p.$

Pryce's sublinearity lemma[2] — Suppose $p:X\to \mathbb {R} $ is a sublinear functional on a vector space $X$ and that $K\subseteq X$ is a non-empty convex subset.
If $x\in X$ is a vector and $a,c>0$ are positive real numbers such that $p(x)+ac~<~\inf _{k\in K}p(x+ak)$ then for every positive real $b>0$ there exists some $\mathbf {z} \in K$ such that $p(x+a\mathbf {z} )+bc~<~\inf _{k\in K}p(x+a\mathbf {z} +bk).$

Adding $bc$ to both sides of the hypothesis $ p(x)+ac\,<\,\inf _{}p(x+aK)$ (where $p(x+aK)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{p(x+ak):k\in K\}$) and combining that with the conclusion gives $p(x)+ac+bc~<~\inf _{}p(x+aK)+bc~\leq ~p(x+a\mathbf {z} )+bc~<~\inf _{}p(x+a\mathbf {z} +bK)$ which yields many more inequalities, including, for instance, $p(x)+ac+bc~<~p(x+a\mathbf {z} )+bc~<~p(x+a\mathbf {z} +b\mathbf {z} )$ in which an expression on one side of a strict inequality $\,<\,$ can be obtained from the other by replacing the symbol $c$ with $\mathbf {z} $ (or vice versa) and moving the closing parenthesis to the right (or left) of an adjacent summand (all other symbols remain fixed and unchanged).

Associated seminorm

If $p:X\to \mathbb {R} $ is a real-valued sublinear function on a real vector space $X$ (or if $X$ is complex, then when it is considered as a real vector space) then the map $q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\max\{p(x),p(-x)\}$ defines a seminorm on the real vector space $X$ called the seminorm associated with $p.$[3] A sublinear function $p$ on a real or complex vector space is a symmetric function if and only if $p=q$ where $q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\max\{p(x),p(-x)\}$ as before.

More generally, if $p:X\to \mathbb {R} $ is a real-valued sublinear function on a (real or complex) vector space $X$ then $q(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\sup _{|u|=1}p(ux)~=~\sup\{p(ux):u{\text{ is a unit scalar }}\}$ will define a seminorm on $X$ if this supremum is always a real number (that is, never equal to $\infty $).

Relation to linear functionals

If $p$ is a sublinear function on a real vector space $X$ then the following are equivalent:[1]
1. $p$ is a linear functional.
2. for every $x\in X,$ $p(x)+p(-x)\leq 0.$
3. for every $x\in X,$ $p(x)+p(-x)=0.$
4. $p$ is a minimal sublinear function.

If $p$ is a sublinear function on a real vector space $X$ then there exists a linear functional $f$ on $X$ such that $f\leq p.$[1]

If $X$ is a real vector space, $f$ is a linear functional on $X,$ and $p$ is a positive sublinear function on $X,$ then $f\leq p$ on $X$ if and only if $f^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .$[1]

Dominating a linear functional

A real-valued function $f$ defined on a subset of a real or complex vector space $X$ is said to be dominated by a sublinear function $p$ if $f(x)\leq p(x)$ for every $x$ that belongs to the domain of $f.$ If $f:X\to \mathbb {R} $ is a real linear functional on $X$ then[6][1] $f$ is dominated by $p$ (that is, $f\leq p$) if and only if $-p(-x)\leq f(x)\leq p(x)\quad {\text{ for every }}x\in X.$ Moreover, if $p$ is a seminorm or some other symmetric map (which by definition means that $p(-x)=p(x)$ holds for all $x$) then $f\leq p$ if and only if $|f|\leq p.$

Theorem[1] — If $p:X\to \mathbb {R} $ is a sublinear function on a real vector space $X$ and if $z\in X$ then there exists a linear functional $f$ on $X$ that is dominated by $p$ (that is, $f\leq p$) and satisfies $f(z)=p(z).$ Moreover, if $X$ is a topological vector space and $p$ is continuous at the origin then $f$ is continuous.
Continuity

Theorem[7] — Suppose $f:X\to \mathbb {R} $ is a subadditive function (that is, $f(x+y)\leq f(x)+f(y)$ for all $x,y\in X$). Then $f$ is continuous at the origin if and only if $f$ is uniformly continuous on $X.$ If $f$ satisfies $f(0)=0$ then $f$ is continuous if and only if its absolute value $|f|:X\to [0,\infty )$ is continuous. If $f$ is non-negative then $f$ is continuous if and only if $\{x\in X:f(x)<1\}$ is open in $X.$

Suppose $X$ is a topological vector space (TVS) over the real or complex numbers and $p$ is a sublinear function on $X.$ Then the following are equivalent:[7]
1. $p$ is continuous;
2. $p$ is continuous at 0;
3. $p$ is uniformly continuous on $X$;
and if $p$ is positive then this list may be extended to include:
4. $\{x\in X:p(x)<1\}$ is open in $X.$

If $X$ is a real TVS, $f$ is a linear functional on $X,$ and $p$ is a continuous sublinear function on $X,$ then $f\leq p$ on $X$ implies that $f$ is continuous.[7]

Relation to Minkowski functions and open convex sets

Theorem[7] — If $U$ is a convex open neighborhood of the origin in a topological vector space $X$ then the Minkowski functional of $U,$ $p_{U}:X\to [0,\infty ),$ is a continuous non-negative sublinear function on $X$ such that $U=\left\{x\in X:p_{U}(x)<1\right\};$ if in addition $U$ is a balanced set then $p_{U}$ is a seminorm on $X.$

Relation to open convex sets

Theorem[7] — Suppose that $X$ is a topological vector space (not necessarily locally convex or Hausdorff) over the real or complex numbers. Then the open convex subsets of $X$ are exactly those that are of the form $z+\{x\in X:p(x)<1\}=\{x\in X:p(x-z)<1\}$ for some $z\in X$ and some positive continuous sublinear function $p$ on $X.$

Proof: Let $V$ be an open convex subset of $X.$ If $0\in V$ then let $z:=0$ and otherwise let $z\in V$ be arbitrary. Let $p:X\to [0,\infty )$ be the Minkowski functional of $V-z,$ which is a continuous sublinear function on $X$ since $V-z$ is convex, absorbing, and open ($p$ however is not necessarily a seminorm since $V$ was not assumed to be balanced). From $X=X-z,$ it follows that $z+\{x\in X:p(x)<1\}=\{x\in X:p(x-z)<1\}.$ It will be shown that $V=z+\{x\in X:p(x)<1\},$ which will complete the proof. One of the known properties of Minkowski functionals guarantees $ \{x\in X:p(x)<1\}=(0,1)(V-z),$ where $(0,1)(V-z)\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\{tx:0<t<1,x\in V-z\}=V-z$ since $V-z$ is convex and contains the origin. Thus $V-z=\{x\in X:p(x)<1\},$ as desired. $\blacksquare $

Operators

The concept can be extended to operators that are homogeneous and subadditive. This requires only that the codomain be, say, an ordered vector space to make sense of the conditions.

Computer science definition

In computer science, a function $f:\mathbb {Z} ^{+}\to \mathbb {R} $ is called sublinear if $\lim _{n\to \infty }{\frac {f(n)}{n}}=0,$ or $f(n)\in o(n)$ in asymptotic notation (notice the small $o$). Formally, $f(n)\in o(n)$ if and only if, for any given $c>0,$ there exists an $N$ such that $f(n)<cn$ for $n\geq N.$[8] That is, $f$ grows slower than any linear function.
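For example, $f(n)={\sqrt {n}}$ is sublinear in this sense, since ${\sqrt {n}}/n=1/{\sqrt {n}}$ tends to 0. A quick numeric illustration in Python:

from math import sqrt

for n in (10, 10_000, 10_000_000):
    print(n, sqrt(n) / n)  # the ratio f(n)/n shrinks toward 0
# By contrast, g(n) = n / 2 gives the constant ratio 0.5, so g is not o(n).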
The two meanings should not be confused: while a Banach functional is convex, almost the opposite is true for functions of sublinear growth: every function $f(n)\in o(n)$ can be upper-bounded by a concave function of sublinear growth.[9]

See also
• Asymmetric norm – Generalization of the concept of a norm
• Auxiliary normed space
• Hahn–Banach theorem – Theorem on extension of bounded linear functionals
• Linear functional – Linear map from a vector space to its field of scalars
• Minkowski functional – Function made from a set
• Norm (mathematics) – Length in a vector space
• Seminorm
• Superadditivity

Notes

Proofs
1. Let $x\in X.$ The triangle inequality and symmetry imply $p(0)=p(x+(-x))\leq p(x)+p(-x)=p(x)+p(x)=2p(x).$ Substituting $0$ for $x$ and then subtracting $p(0)$ from both sides proves that $0\leq p(0).$ Thus $0\leq p(0)\leq 2p(x),$ which implies $0\leq p(x).$ $\blacksquare $
2. If $x\in X$ and $r:=0$ then nonnegative homogeneity implies that $p(0)=p(rx)=rp(x)=0p(x)=0.$ Consequently, $0=p(0)=p(x+(-x))\leq p(x)+p(-x),$ which is only possible if $0\leq \max\{p(x),p(-x)\}.$ $\blacksquare $
3. $p(x)=p(y+(x-y))\leq p(y)+p(x-y),$ which happens if and only if $p(x)-p(y)\leq p(x-y).$ Substituting $y:=-x$ gives $p(x)-p(-x)\leq p(x-(-x))=p(x+x)\leq p(x)+p(x),$ which implies $-p(-x)\leq p(x)$ (positive homogeneity is not needed; the triangle inequality suffices). $\blacksquare $
4. Let $x\in X$ and $k\in p^{-1}(0)\cap (-p^{-1}(0)).$ It remains to show that $p(x+k)=p(x).$ The triangle inequality implies $p(x+k)\leq p(x)+p(k)=p(x)+0=p(x).$ Since $p(-k)=0,$ $p(x)=p(x)-p(-k)\leq p(x-(-k))=p(x+k),$ as desired. $\blacksquare $

References
1. Narici & Beckenstein 2011, pp. 177–220.
2. Schechter 1996, pp. 313–315.
3. Narici & Beckenstein 2011, pp. 120–121.
4. Kubrusly 2011, p. 200.
5. Narici & Beckenstein 2011, pp. 177–221.
6. Rudin 1991, pp. 56–62.
7. Narici & Beckenstein 2011, pp. 192–193.
8. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "3.1". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 47–48. ISBN 0-262-03293-7.
9. Ceccherini-Silberstein, Tullio; Salvatori, Maura; Sava-Huss, Ecaterina (2017-06-29). Groups, Graphs, and Random Walks. Cambridge. Lemma 5.17. ISBN 9781316604403. OCLC 948670194.

Bibliography
• Kubrusly, Carlos S. (2011). The Elements of Operator Theory (Second ed.). Boston: Birkhäuser Basel. ISBN 978-0-8176-4998-2. OCLC 710154895.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill. ISBN 978-0-07-054236-5. OCLC 21163277.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and Applied Mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Schechter, Eric (1996). Handbook of Analysis and Its Foundations.
San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Functional analysis (topics – glossary) Spaces • Banach • Besov • Fréchet • Hilbert • Hölder • Nuclear • Orlicz • Schwartz • Sobolev • Topological vector Properties • Barrelled • Complete • Dual (Algebraic/Topological) • Locally convex • Reflexive • Reparable Theorems • Hahn–Banach • Riesz representation • Closed graph • Uniform boundedness principle • Kakutani fixed-point • Krein–Milman • Min–max • Gelfand–Naimark • Banach–Alaoglu Operators • Adjoint • Bounded • Compact • Hilbert–Schmidt • Normal • Nuclear • Trace class • Transpose • Unbounded • Unitary Algebras • Banach algebra • C*-algebra • Spectrum of a C*-algebra • Operator algebra • Group algebra of a locally compact group • Von Neumann algebra Open problems • Invariant subspace problem • Mahler's conjecture Applications • Hardy space • Spectral theory of ordinary differential equations • Heat kernel • Index theorem • Calculus of variations • Functional calculus • Integral operator • Jones polynomial • Topological quantum field theory • Noncommutative geometry • Riemann hypothesis • Distribution (or Generalized functions) Advanced topics • Approximation property • Balanced set • Choquet theory • Weak topology • Banach–Mazur distance • Tomita–Takesaki theory •  Mathematics portal • Category • Commons Topological vector spaces (TVSs) Basic concepts • Banach space • Completeness • Continuous linear operator • Linear functional • Fréchet space • Linear map • Locally convex space • Metrizability • Operator topologies • Topological vector space • Vector space Main results • Anderson–Kadec • Banach–Alaoglu • Closed graph theorem • F. Riesz's • Hahn–Banach (hyperplane separation • Vector-valued Hahn–Banach) • Open mapping (Banach–Schauder) • Bounded inverse • Uniform boundedness (Banach–Steinhaus) Maps • Bilinear operator • form • Linear map • Almost open • Bounded • Continuous • Closed • Compact • Densely defined • Discontinuous • Topological homomorphism • Functional • Linear • Bilinear • Sesquilinear • Norm • Seminorm • Sublinear function • Transpose Types of sets • Absolutely convex/disk • Absorbing/Radial • Affine • Balanced/Circled • Banach disks • Bounding points • Bounded • Complemented subspace • Convex • Convex cone (subset) • Linear cone (subset) • Extreme point • Pre-compact/Totally bounded • Prevalent/Shy • Radial • Radially convex/Star-shaped • Symmetric Set operations • Affine hull • (Relative) Algebraic interior (core) • Convex hull • Linear span • Minkowski addition • Polar • (Quasi) Relative interior Types of TVSs • Asplund • B-complete/Ptak • Banach • (Countably) Barrelled • BK-space • (Ultra-) Bornological • Brauner • Complete • Convenient • (DF)-space • Distinguished • F-space • FK-AK space • FK-space • Fréchet • tame Fréchet • Grothendieck • Hilbert • Infrabarreled • Interpolation space • K-space • LB-space • LF-space • Locally convex space • Mackey • (Pseudo)Metrizable • Montel • Quasibarrelled • Quasi-complete • Quasinormed • (Polynomially • Semi-) Reflexive • Riesz • Schwartz • Semi-complete • Smith • Stereotype • (B • Strictly • Uniformly) convex • (Quasi-) Ultrabarrelled • Uniformly smooth • Webbed • With the approximation property •  Mathematics portal • Category • Commons
Matrix (mathematics) In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, ${\begin{bmatrix}1&9&-13\\20&5&-6\end{bmatrix}}$ is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "$2\times 3$ matrix", or a matrix of dimension $2\times 3$. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents the composition of linear maps. Not all matrices are related to linear algebra. This is, in particular, the case in graph theory, of incidence matrices, and adjacency matrices.[1] This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such. Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated with the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is itself defined by a determinant. In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimension. Matrices are used in most areas of mathematics and most scientific fields, either directly, or through their use in geometry and numerical analysis. Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics. Definition A matrix is a rectangular array of numbers (or other mathematical objects), called the entries of the matrix. Matrices are subject to standard operations such as addition and multiplication.[2] Most commonly, a matrix over a field F is a rectangular array of elements of F.[3][4] A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix: $\mathbf {A} ={\begin{bmatrix}-1.3&0.6\\20.4&5.5\\9.7&-6.2\end{bmatrix}}.$ The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. Size The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns a matrix (in the usual sense) can have as long as they are positive integers. A matrix with m rows and n columns is called an m × n matrix, or m-by-n matrix, while m and n are called its dimensions. For example, the matrix A above is a 3 × 2 matrix.
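As a concrete illustration, the size of a matrix is exactly what array libraries report as its "shape". The following minimal sketch (in Python with NumPy, an assumption of this example; the article itself prescribes no software) enters the 3 × 2 real matrix A from above and queries its size:

```python
import numpy as np

# The 3-by-2 real matrix A from the text: 3 rows, 2 columns.
A = np.array([[-1.3, 0.6],
              [20.4, 5.5],
              [9.7, -6.2]])

rows, cols = A.shape
print(A.shape)     # (3, 2) -- the pair (number of rows, number of columns)
print(rows, cols)  # 3 2
```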
Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix.[5] A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix. Overview of matrix sizes: • Row vector, of size 1 × n, for example ${\begin{bmatrix}3&7&2\end{bmatrix}}$: a matrix with one row, sometimes used to represent a vector. • Column vector, of size n × 1, for example ${\begin{bmatrix}4\\1\\8\end{bmatrix}}$: a matrix with one column, sometimes used to represent a vector. • Square matrix, of size n × n, for example ${\begin{bmatrix}9&13&5\\1&11&7\\2&6&3\end{bmatrix}}$: a matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing. Notation The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an $m\times n$ matrix $\mathbf {A} $ is represented as $\mathbf {A} ={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}={\begin{pmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{pmatrix}}.$ This may be abbreviated by writing only a single generic term, possibly along with indices, as in $\mathbf {A} =\left(a_{ij}\right),\quad \left[a_{ij}\right],\quad {\text{or}}\quad \left(a_{ij}\right)_{1\leq i\leq m,\;1\leq j\leq n}$ or $\mathbf {A} =(a_{i,j})_{1\leq i,j\leq n}$ in the case that $n=m$. Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., a11, or a1,1), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style, as in ${\underline {\underline {A}}}$. The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j or (i, j) entry of the matrix, and commonly denoted by ai,j or aij. Alternative notations for that entry are A[i,j] and Ai,j. For example, the (1, 3) entry of the following matrix A is 5 (also denoted a13, a1,3, A[1,3] or A1,3): $\mathbf {A} ={\begin{bmatrix}4&-7&\color {red}{5}&0\\-2&0&11&8\\19&1&-3&12\end{bmatrix}}$ Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries of the following matrix A is determined by the formula aij = i − j. $\mathbf {A} ={\begin{bmatrix}0&-1&-2&-3\\1&0&-1&-2\\2&1&0&-1\end{bmatrix}}$ In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as A = [i−j], or A = ((i−j)). If the matrix size is m × n, the above-mentioned formula f(i, j) is valid for any i = 1, ..., m and any j = 1, ..., n. This can be either specified separately, or indicated using m × n as a subscript.
For instance, the matrix A above is 3 × 4, and can be defined as A = [i − j] (i = 1, 2, 3; j = 1, ..., 4), or A = [i − j]3×4. Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[6] This article follows the more common convention in mathematical writing where enumeration starts from 1. An asterisk is occasionally used to refer to whole rows or columns in a matrix. For example, ai,∗ refers to the ith row of A, and a∗,j refers to the jth column of A. The set of all m-by-n real matrices is often denoted ${\mathcal {M}}(m,n),$ or ${\mathcal {M}}_{m\times n}(\mathbb {R} ).$ The set of all m-by-n matrices over another field or over a ring R, is similarly denoted ${\mathcal {M}}(m,n,R),$ or ${\mathcal {M}}_{m\times n}(R).$ If m = n, that is, in the case of square matrices, one does not repeat the dimension: ${\mathcal {M}}(n,R),$ or ${\mathcal {M}}_{n}(R).$[7] Often, $M$ is used in place of ${\mathcal {M}}.$ Basic operations There are a number of basic operations that can be applied to modify matrices, called matrix addition, scalar multiplication, transposition, matrix multiplication, row operations, and submatrix.[9] Addition, scalar multiplication, and transposition Main articles: Matrix addition, Scalar multiplication, and Transpose • Addition: the sum A+B of two m-by-n matrices A and B is calculated entrywise: (A + B)i,j = Ai,j + Bi,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n. For example, ${\begin{bmatrix}1&3&1\\1&0&0\end{bmatrix}}+{\begin{bmatrix}0&0&5\\7&5&0\end{bmatrix}}={\begin{bmatrix}1+0&3+0&1+5\\1+7&0+5&0+0\end{bmatrix}}={\begin{bmatrix}1&3&6\\8&5&0\end{bmatrix}}$ • Scalar multiplication: the product cA of a number c (also called a scalar in the parlance of abstract algebra) and a matrix A is computed by multiplying every entry of A by c: (cA)i,j = c · Ai,j. This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is sometimes used as a synonym for "inner product". For example, $2\cdot {\begin{bmatrix}1&8&-3\\4&-2&5\end{bmatrix}}={\begin{bmatrix}2\cdot 1&2\cdot 8&2\cdot -3\\2\cdot 4&2\cdot -2&2\cdot 5\end{bmatrix}}={\begin{bmatrix}2&16&-6\\8&-4&10\end{bmatrix}}$ • Transposition: the transpose of an m-by-n matrix A is the n-by-m matrix AT (also denoted Atr or tA) formed by turning rows into columns and vice versa: (AT)i,j = Aj,i. For example, ${\begin{bmatrix}1&2&3\\0&-6&7\end{bmatrix}}^{\mathrm {T} }={\begin{bmatrix}1&0\\2&-6\\3&7\end{bmatrix}}$ Familiar properties of numbers extend to these operations of matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: A + B = B + A.[10] The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A. Matrix multiplication Main article: Matrix multiplication Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix.
If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by dot product of the corresponding row of A and the corresponding column of B:[11] $[\mathbf {AB} ]_{i,j}=a_{i,1}b_{1,j}+a_{i,2}b_{2,j}+\cdots +a_{i,n}b_{n,j}=\sum _{r=1}^{n}a_{i,r}b_{r,j},$ where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[12] For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340: ${\begin{aligned}{\begin{bmatrix}{\underline {2}}&{\underline {3}}&{\underline {4}}\\1&0&0\\\end{bmatrix}}{\begin{bmatrix}0&{\underline {1000}}\\1&{\underline {100}}\\0&{\underline {10}}\\\end{bmatrix}}&={\begin{bmatrix}3&{\underline {2340}}\\0&1000\\\end{bmatrix}}.\end{aligned}}$ Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined.[13] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they generally need not be equal; that is, AB ≠ BA. In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors.[11] An example of two matrices not commuting with each other is: ${\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}={\begin{bmatrix}0&1\\0&3\\\end{bmatrix}},$ whereas ${\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}={\begin{bmatrix}3&4\\0&0\\\end{bmatrix}}.$ Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product.[14] They arise in solving matrix equations such as the Sylvester equation. Row operations Main article: Row operations There are three types of row operations: 1. row addition, that is, adding a row to another; 2. row multiplication, that is, multiplying all entries of a row by a non-zero constant; 3. row switching, that is, interchanging two rows of a matrix. These operations are used in several ways, including solving linear equations and finding matrix inverses. Submatrix A submatrix of a matrix is obtained by deleting any collection of rows and/or columns.[15][16][17] For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2: $\mathbf {A} ={\begin{bmatrix}1&\color {red}{2}&3&4\\5&\color {red}{6}&7&8\\\color {red}{9}&\color {red}{10}&\color {red}{11}&\color {red}{12}\end{bmatrix}}\rightarrow {\begin{bmatrix}1&3&4\\5&7&8\end{bmatrix}}.$ The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.[17][18] A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author.
According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain.[19][20] Other authors define a principal submatrix as one in which the first k rows and columns, for some number k, are the ones that remain;[21] this type of submatrix has also been called a leading principal submatrix.[22] Linear equations Main articles: Linear equation and System of linear equations Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if A is an m-by-n matrix, x designates a column vector (that is, n×1-matrix) of n variables x1, x2, ..., xn, and b is an m×1-column vector, then the matrix equation $\mathbf {Ax} =\mathbf {b} $ is equivalent to the system of linear equations[23] ${\begin{aligned}a_{1,1}x_{1}+a_{1,2}x_{2}+&\cdots +a_{1,n}x_{n}=b_{1}\\&\ \ \vdots \\a_{m,1}x_{1}+a_{m,2}x_{2}+&\cdots +a_{m,n}x_{n}=b_{m}\end{aligned}}$ Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately. If n = m and the equations are independent, then this can be done by writing $\mathbf {x} =\mathbf {A} ^{-1}\mathbf {b} $ where A−1 is the inverse matrix of A. If A has no inverse, solutions—if any—can be found using its generalized inverse. Linear transformations Main articles: Linear transformation and Transformation matrix Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation Rn → Rm mapping each vector x in Rn to the (matrix) product Ax, which is a vector in Rm. Conversely, each linear transformation f: Rn → Rm arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f(ej), where ej = (0,...,0,1,0,...,0) is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f. For example, the 2×2 matrix $\mathbf {A} ={\begin{bmatrix}a&c\\b&d\end{bmatrix}}$ can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). This parallelogram is obtained by multiplying A with each of the column vectors ${\begin{bmatrix}0\\0\end{bmatrix}},{\begin{bmatrix}1\\0\end{bmatrix}},{\begin{bmatrix}1\\1\end{bmatrix}}$, and ${\begin{bmatrix}0\\1\end{bmatrix}}$ in turn; these vectors define the vertices of the unit square. The following examples pair several 2×2 real matrices with the linear maps of R2 that they represent:
• Horizontal shear with m = 1.25: ${\begin{bmatrix}1&1.25\\0&1\end{bmatrix}}$ • Reflection through the vertical axis: ${\begin{bmatrix}-1&0\\0&1\end{bmatrix}}$ • Squeeze mapping with r = 3/2: ${\begin{bmatrix}{\frac {3}{2}}&0\\0&{\frac {2}{3}}\end{bmatrix}}$ • Scaling by a factor of 3/2: ${\begin{bmatrix}{\frac {3}{2}}&0\\0&{\frac {3}{2}}\end{bmatrix}}$ • Rotation by π/6 = 30°: ${\begin{bmatrix}\cos \left({\frac {\pi }{6}}\right)&-\sin \left({\frac {\pi }{6}}\right)\\\sin \left({\frac {\pi }{6}}\right)&\cos \left({\frac {\pi }{6}}\right)\end{bmatrix}}$ Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps:[24] if a k-by-m matrix B represents another linear map g: Rm → Rk, then the composition g ∘ f is represented by BA since (g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x. The last equality follows from the above-mentioned associativity of matrix multiplication. The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors.[25] Equivalently it is the dimension of the image of the linear map represented by A.[26] The rank–nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.[27] Square matrix Main article: Square matrix A square matrix is a matrix with the same number of rows and columns.[5] An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the main diagonal of a square matrix. They lie on the imaginary line that runs from the top left corner to the bottom right corner of the matrix. Main types (examples with n = 3): • Diagonal matrix: ${\begin{bmatrix}a_{11}&0&0\\0&a_{22}&0\\0&0&a_{33}\\\end{bmatrix}}$ • Lower triangular matrix: ${\begin{bmatrix}a_{11}&0&0\\a_{21}&a_{22}&0\\a_{31}&a_{32}&a_{33}\\\end{bmatrix}}$ • Upper triangular matrix: ${\begin{bmatrix}a_{11}&a_{12}&a_{13}\\0&a_{22}&a_{23}\\0&0&a_{33}\\\end{bmatrix}}$ Diagonal and triangular matrix If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly if all entries of A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix. Identity matrix Main article: Identity matrix The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, for example, $\mathbf {I} _{1}={\begin{bmatrix}1\end{bmatrix}},\ \mathbf {I} _{2}={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\ \ldots ,\ \mathbf {I} _{n}={\begin{bmatrix}1&0&\cdots &0\\0&1&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &1\end{bmatrix}}$ It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged: AIn = ImA = A for any m-by-n matrix A. A nonzero scalar multiple of an identity matrix is called a scalar matrix. If the matrix entries come from a field, the scalar matrices form a group, under matrix multiplication, that is isomorphic to the multiplicative group of nonzero elements of the field. Symmetric or skew-symmetric matrix A square matrix A that is equal to its transpose, that is, A = AT, is a symmetric matrix. If instead, A is equal to the negative of its transpose, that is, A = −AT, then A is a skew-symmetric matrix.
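To see the two notions side by side, here is a small hedged sketch (Python with NumPy, assumed for illustration). It uses the standard fact, not stated in the article, that every real square matrix is the sum of a symmetric part (A + AT)/2 and a skew-symmetric part (A − AT)/2:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])

S = (A + A.T) / 2  # symmetric part: S equals its own transpose
K = (A - A.T) / 2  # skew-symmetric part: K equals minus its own transpose

assert np.allclose(S, S.T)
assert np.allclose(K, -K.T)
assert np.allclose(S + K, A)  # the two parts add back up to A
```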
In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A. By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[28] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see below. Invertible matrix and its inverse A square matrix A is called invertible or non-singular if there exists a matrix B such that AB = BA = In ,[29][30] where In is the n×n identity matrix with 1s on the main diagonal and 0s elsewhere. If B exists, it is unique and is called the inverse matrix of A, denoted A−1. Definite matrix A symmetric real matrix A is called positive-definite if the associated quadratic form f (x) = xTA x has a positive value for every nonzero vector x in Rn. If f (x) only yields negative values then A is negative-definite; if f produces both negative and positive values then A is indefinite.[31] If the quadratic form f yields only non-negative values (positive or zero), the symmetric matrix is called positive-semidefinite (or if only non-positive values, then negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite. A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible.[32] Two possibilities for 2-by-2 matrices: • Positive definite matrix: ${\begin{bmatrix}{\frac {1}{4}}&0\\0&1\\\end{bmatrix}}$ with quadratic form $Q(x,y)={\frac {1}{4}}x^{2}+y^{2}$; the points such that $ Q(x,y)=1$ form an ellipse. • Indefinite matrix: ${\begin{bmatrix}{\frac {1}{4}}&0\\0&-{\frac {1}{4}}\end{bmatrix}}$ with quadratic form $Q(x,y)={\frac {1}{4}}x^{2}-{\frac {1}{4}}y^{2}$; the points such that $ Q(x,y)=1$ form a hyperbola. Allowing as input two different vectors instead yields the bilinear form associated to A:[33] BA (x, y) = xTAy. In the case of complex matrices, the same terminology and result apply, with symmetric matrix, quadratic form, bilinear form, and transpose xT replaced respectively by Hermitian matrix, Hermitian form, sesquilinear form, and conjugate transpose xH. Orthogonal matrix Main article: Orthogonal matrix An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that is, orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse: $\mathbf {A} ^{\mathrm {T} }=\mathbf {A} ^{-1},\,$ which entails $\mathbf {A} ^{\mathrm {T} }\mathbf {A} =\mathbf {A} \mathbf {A} ^{\mathrm {T} }=\mathbf {I} _{n},$ where In is the identity matrix of size n. An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A*), and normal (A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation without reflection, i.e., the transformation preserves the orientation of the transformed structure, while every orthogonal matrix with determinant −1 reverses the orientation, i.e., is a composition of a pure reflection and a (possibly null) rotation.
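The defining property is easy to verify numerically. The sketch below (Python with NumPy, an illustrative assumption rather than part of the article) checks that a rotation matrix is orthogonal, that its transpose is its inverse, and that its determinant is +1:

```python
import numpy as np

theta = np.pi / 6  # rotation by 30 degrees
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Columns are orthonormal, so the transpose coincides with the inverse.
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(Q.T, np.linalg.inv(Q))

print(np.linalg.det(Q))  # approximately +1.0: a pure rotation
```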
The identity matrices have determinant 1, and are pure rotations by an angle zero. The complex analogue of an orthogonal matrix is a unitary matrix. Trace The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors: $\operatorname {tr} (\mathbf {AB} )=\operatorname {tr} (\mathbf {BA} )$. This is immediate from the definition of matrix multiplication: $\operatorname {tr} (\mathbf {AB} )=\sum _{i=1}^{m}\sum _{j=1}^{n}a_{ij}b_{ji}=\operatorname {tr} (\mathbf {BA} ).$ It follows that the trace of the product of more than two matrices is independent of cyclic permutations of the matrices; however, this does not in general apply for arbitrary permutations (for example, tr(ABC) ≠ tr(BAC), in general). Also, the trace of a matrix is equal to that of its transpose, that is, tr(A) = tr(AT). Determinant Main article: Determinant The determinant of a square matrix A (denoted det(A) or |A|) is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2) or volume (in R3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved. The determinant of 2-by-2 matrices is given by $\det {\begin{bmatrix}a&b\\c&d\end{bmatrix}}=ad-bc.$[34] The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.[35] The determinant of a product of square matrices equals the product of their determinants: det(AB) = det(A) · det(B), or using alternate notation: |AB| = |A| · |B|.[36] Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[37] Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices, the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices.[38] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.[39] Eigenvalues and eigenvectors Main article: Eigenvalues and eigenvectors A number $ \lambda $ and a non-zero vector v satisfying $\mathbf {A} \mathbf {v} =\lambda \mathbf {v} $ are called an eigenvalue and an eigenvector of A, respectively.[40][41] The number λ is an eigenvalue of an n×n-matrix A if and only if A−λIn is not invertible, which is equivalent to $\det(\mathbf {A} -\lambda \mathbf {I} )=0.$[42] The polynomial pA in an indeterminate X given by evaluation of the determinant det(XIn−A) is called the characteristic polynomial of A. It is a monic polynomial of degree n.
Therefore the polynomial equation pA(λ) = 0 has at most n different solutions, that is, eigenvalues of the matrix.[43] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix. Computational aspects Matrix calculations can often be performed with different techniques. Many problems can be solved by either direct algorithms or iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors xn converging to an eigenvector when n tends to infinity.[44] To choose the most appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra.[45] As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability. Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example, multiplication of matrices. Calculating the matrix product of two n-by-n matrices using the definition given above needs $n^{3}$ multiplications, since for any of the $n^{2}$ entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only $n^{2.807}$ multiplications.[46] A refined approach also incorporates specific features of the computing devices. In many practical situations additional information about the matrices involved is known. An important case is that of sparse matrices, that is, matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[47] An algorithm is, roughly speaking, numerically stable if small deviations in the input values do not lead to large deviations in the result. For example, calculating the inverse of a matrix via Laplace expansion, A−1 = adj(A) / det(A), where adj(A) denotes the adjugate matrix of A, may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.[48] Most computer programming languages support arrays but are not designed with built-in commands for matrices. Instead, available external libraries provide matrix operations on arrays, in nearly all currently used programming languages. Matrix manipulation was among the earliest numerical applications of computers.[49] The original Dartmouth BASIC had built-in commands for matrix arithmetic on arrays from its second edition implementation in 1964. As early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.[50] Decomposition Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination, and Bareiss algorithm There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques.
The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank, or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices. The LU decomposition factors matrices as a product of a lower triangular matrix (L) and an upper triangular matrix (U).[51] Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form.[52] Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where U and V are unitary matrices and D is a diagonal matrix. The eigendecomposition or diagonalization expresses A as a product VDV−1, where D is a diagonal matrix and V is a suitable invertible matrix.[53] If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues λ1 to λn of A, placed on the main diagonal and possibly entries equal to one directly above the main diagonal.[54] Given the eigendecomposition, the nth power of A (that is, n-fold iterated matrix multiplication) can be calculated via $A^{n}=(VDV^{-1})^{n}=VDV^{-1}VDV^{-1}\cdots VDV^{-1}=VD^{n}V^{-1}$ and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential $e^{A}$, a need frequently arising in solving linear differential equations, as well as matrix logarithms and square roots of matrices.[55] To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.[56] Abstract algebraic aspects and generalizations Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realized as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[57] Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly under certain conditions matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix fields. Matrices with more general entries This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, that is, a set where addition, subtraction, multiplication, and division operations are defined and well-behaved, may be used instead of R or C, for example rational numbers or finite fields.
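To make entries from another field concrete, the following sketch (Python with NumPy; the representation of GF(2) as integer arithmetic modulo 2 is an assumption of this example, not a construction from the article) performs matrix arithmetic over the two-element finite field:

```python
import numpy as np

# Matrices with entries in GF(2) = {0, 1}; every result is reduced modulo 2.
A = np.array([[1, 0, 1],
              [0, 1, 1]])
B = np.array([[1, 1],
              [0, 1],
              [1, 0]])

print((A @ B) % 2)  # matrix product over GF(2): [[0 1], [1 1]]
print((A + A) % 2)  # zero matrix: over GF(2), each matrix is its own additive inverse
```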
For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial, they may exist only in a larger field than that of the entries of the matrix; for instance, they may be complex in the case of a matrix with real entries. The possibility of reinterpreting the entries of a matrix as elements of a larger field (for example, viewing a real matrix as a complex matrix whose entries happen to be all real) then allows one to consider each square matrix as possessing a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as C, from the outset. More generally, matrices with entries in a ring R are widely used in mathematics.[58] Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) (also denoted Mn(R)[7]) of all square n-by-n matrices over R is a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module Rn.[59] If the ring R is commutative, that is, its multiplication is commutative, then the ring M(n, R) is also an associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible.[60] Matrices over superrings are called supermatrices.[61] Matrices do not always have all their entries in the same ring – or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ring; but their sizes must fulfill certain compatibility conditions. Relationship to linear maps Linear maps Rn → Rm are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix A = (aij), after choosing bases v1, ..., vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that $f(\mathbf {v} _{j})=\sum _{i=1}^{m}a_{i,j}\mathbf {w} _{i}\qquad {\mbox{for}}\ j=1,\ldots ,n.$ In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely determines the entries of the matrix A. The matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.[62] Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix AT describes the transpose of the linear map given by A, with respect to the dual bases.[63] These properties can be restated more naturally: the category of all matrices with entries in a field $k$ with multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field. More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rm and Rn for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of Rn.
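The rule that column j of A lists the coordinates of f(vj) can be spelled out in code. In the sketch below (Python with NumPy; the map f is a hypothetical example chosen for illustration), the matrix of a linear map on R2 is recovered from the images of the standard basis vectors:

```python
import numpy as np

def f(x):
    # An example linear map R^2 -> R^2: the shear (x, y) |-> (x + 2y, y).
    return np.array([x[0] + 2 * x[1], x[1]])

# Column j of A is f(e_j), where e_j is the j-th standard basis vector.
A = np.column_stack([f(e) for e in np.eye(2)])
print(A)  # [[1. 2.], [0. 1.]]

v = np.array([3.0, 4.0])
assert np.allclose(A @ v, f(v))  # the matrix reproduces the map on any vector
```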
Matrix groups Main article: Matrix group A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an operation combining any two objects to a third, subject to certain requirements.[64] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[65][66] Since in a group every element must be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups. Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller group contained in) their general linear group, called a special linear group.[67] Orthogonal matrices, determined by the condition MTM = I, form the orthogonal group.[68] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called special orthogonal group. Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.[69] General groups can be studied using matrix groups, which are comparatively well understood, by means of representation theory.[70] Infinite matrices It is also possible to consider matrices with infinitely many rows and/or columns[71] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication, and transposition can still be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general. If R is any ring with unity, then the ring of endomorphisms of $M=\bigoplus _{i\in I}R$ as a right R-module is isomorphic to the ring of column finite matrices $\mathrm {CFM} _{I}(R)$ whose entries are indexed by $I\times I$, and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row finite matrices $\mathrm {RFM} _{I}(R)$ whose rows each only have finitely many nonzero entries. If infinite matrices are used to describe linear maps, then only those matrices all of whose columns have but a finite number of nonzero entries can be used, for the following reason. For a matrix A to describe a linear map f: V→W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that written as a (column) vector v of coefficients, only finitely many entries vi are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A however: in the product A·v there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined.
Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries because each of those columns does. Products of two matrices of the given type are well defined (provided that the column-index and row-index sets match), are of the same type, and correspond to the composition of linear maps. If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring. Infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that must be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[72] and the abstract and more powerful tools of functional analysis can be used instead. Empty matrix An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[73][74] Empty matrices help in dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants. Applications There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[75] Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[76] Complex numbers can be represented by particular real 2-by-2 matrices via $a+ib\leftrightarrow {\begin{bmatrix}a&-b\\b&a\end{bmatrix}},$ under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions[77] and Clifford algebras in general. Early encryption techniques such as the Hill cipher also used matrices.
However, due to the linear nature of matrices, these codes are comparatively easy to break.[78] Computer graphics uses matrices to represent objects; to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation; and to apply image convolutions such as sharpening, blurring, edge detection, and more.[79] Matrices over a polynomial ring are important in the study of control theory. Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method. Graph theory The adjacency matrix of a finite graph is a basic notion of graph theory.[80] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning for example "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.[81] These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory. Analysis and geometry The Hessian matrix of a differentiable function ƒ: Rn → R consists of the second derivatives of ƒ with respect to the several coordinate directions, that is,[82] $H(f)=\left[{\frac {\partial ^{2}f}{\partial x_{i}\,\partial x_{j}}}\right].$ It encodes information about the local growth behaviour of the function: given a critical point x = (x1, ..., xn), that is, a point where the first partial derivatives $\partial f/\partial x_{i}$ of ƒ vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).[83] Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: Rn → Rm. If f1, ..., fm denote the components of f, then the Jacobi matrix is defined as[84] $J(f)=\left[{\frac {\partial f_{i}}{\partial x_{j}}}\right]_{1\leq i\leq m,1\leq j\leq n}.$ If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.[85] Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question.[86] The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[87] Probability theory and statistics Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative and sum up to one.
Stochastic matrices are used to define Markov chains with finitely many states.[88] A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain, like absorbing states, that is, states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.[89] Statistics also makes use of matrices in many different forms.[90] Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.[91] Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN), by a linear function yi ≈ axi + b, i = 1, ..., N which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[92] Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[93][94] Symmetries and transformations in physics Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[95] For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to the basic quark states that define particles with specific and distinct masses.[96] Linear combinations of quantum states The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.[97] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.[98] Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states.
The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.[99] Normal modes A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.[100] They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.[101] Geometrical optics Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix known as a ray transfer matrix (the technique is called ray transfer matrix analysis): the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.[102] Electronics Traditional mesh analysis and nodal analysis in electronics lead to a system of linear equations that can be described with a matrix. The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v1 and input current i1 as its elements, and let B be a 2-dimensional vector with the component's output voltage v2 and output current i2 as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 × 2 matrix containing one impedance element (h12), one admittance element (h21), and two dimensionless elements (h11 and h22). Calculating a circuit now reduces to multiplying matrices. History Matrices have a long history of application in solving linear equations but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art written in 10th–2nd century BCE is the first example of the use of array methods to solve simultaneous equations,[103] including the concept of determinants.
In 1545 Italian mathematician Gerolamo Cardano introduced the method to Europe when he published Ars Magna.[104] The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.[105] The Dutch mathematician Jan de Witt represented transformations using arrays in his book Elements of Curves (1659).[106] Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays.[104] Cramer presented his rule in 1750. The term "matrix" (Latin for "womb", "dam" (non-human female animal kept for breeding), "source", "origin", "list", "register", derived from mater—mother[107]) was coined by James Joseph Sylvester in 1850,[108] who understood a matrix as an object giving rise to several determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:[109] I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent. Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated as had previously been done. Instead, he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed the associative and distributive properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition.[104] Early matrix theory had limited the use of arrays almost exclusively to determinants and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his A memoir on the theory of matrices[110][111] in which he proposed and demonstrated the Cayley–Hamilton theorem.[104] The English mathematician Cuthbert Edmund Cullis was the first to use modern bracket notation for matrices in 1913 and he simultaneously demonstrated the first significant use of the notation A = [ai,j] to represent a matrix, where ai,j refers to the entry in the ith row and the jth column.[104] The modern study of determinants sprang from several sources.[112] Number-theoretical problems led Gauss to relate coefficients of quadratic forms, that is, expressions such as x2 + xy − 2y2, and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [ai,j] the following: replace the powers $a_{j}^{k}$ by $a_{jk}$ in the polynomial $a_{1}a_{2}\cdots a_{n}\prod _{i<j}(a_{j}-a_{i})\;$, where Π denotes the product of the indicated terms.
He also showed, in 1829, that the eigenvalues of symmetric matrices are real.[113] Jacobi studied "functional determinants"—later called Jacobi determinants by Sylvester—which can be used to describe geometric transformations at a local (or infinitesimal) level, see above; Kronecker's Vorlesungen über die Theorie der Determinanten[114] and Weierstrass' Zur Determinantentheorie,[115] both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established. Many theorems were first established for small matrices only, for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Wilhelm Jordan. In the early 20th century, matrices attained a central role in linear algebra,[116] partially due to their use in classification of the hypercomplex number systems of the previous century. The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns.[117] Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions. Other historical usages of the word "matrix" in mathematics The word has been used in unusual ways by at least two authors of historical importance. Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension:[118] Let us give the name of matrix to any function, of however many variables, that does not involve any apparent variables. Then, any possible function other than a matrix derives from a matrix by means of generalization, that is, by considering the proposition that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined. For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, for example, y, by "considering" the function for all possible values of "individuals" ai substituted in place of variable x. And then the resulting collection of functions of the single variable y, that is, ∀ai: Φ(ai, y), can be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" bi substituted in place of variable y: ∀bj∀ai: Φ(ai, bj). 
Alfred Tarski in his 1946 Introduction to Logic used the word "matrix" synonymously with the notion of truth table as used in mathematical logic.[119] See also • List of named matrices • Algebraic multiplicity – Multiplicity of an eigenvalue as a root of the characteristic polynomial • Geometric multiplicity – Dimension of the eigenspace associated with an eigenvalue • Gram–Schmidt process – Orthonormalization of a set of vectors • Irregular matrix • Matrix calculus – Specialized notation for multivariable calculus • Matrix function – Function that maps matrices to matrices • Matrix multiplication algorithm • Tensor – A generalization of matrices with any number of indices Notes 1. However, in the case of adjacency matrices, matrix multiplication or a variant of it allows the simultaneous computation of the number of paths between any two vertices, and of the shortest length of a path between two vertices. 2. Lang 2002 3. Fraleigh (1976, p. 209) 4. Nering (1970, p. 37) 5. Weisstein, Eric W. "Matrix". mathworld.wolfram.com. Retrieved 2020-08-19. 6. Oualline 2003, Ch. 5 7. Pop; Furdui (2017). Square Matrices of Order 2. Springer International Publishing. ISBN 978-3-319-54938-5. 8. "How to organize, add and multiply matrices - Bill Shillito". TED ED. Retrieved April 6, 2013. 9. Brown 1991, Definition I.2.1 (addition), Definition I.2.4 (scalar multiplication), and Definition I.2.33 (transpose) 10. Brown 1991, Theorem I.2.6 11. "How to Multiply Matrices". www.mathsisfun.com. Retrieved 2020-08-19. 12. Brown 1991, Definition I.2.20 13. Brown 1991, Theorem I.2.24 14. Horn & Johnson 1985, Ch. 4 and 5 15. Bronson (1970, p. 16) 16. Kreyszig (1972, p. 220) 17. Protter & Morrey (1970, p. 869) 18. Kreyszig (1972, pp. 241, 244) 19. Schneider, Hans; Barker, George Phillip (2012), Matrices and Linear Algebra, Dover Books on Mathematics, Courier Dover Corporation, p. 251, ISBN 978-0-486-13930-2. 20. Perlis, Sam (1991), Theory of Matrices, Dover books on advanced mathematics, Courier Dover Corporation, p. 103, ISBN 978-0-486-66810-9. 21. Anton, Howard (2010), Elementary Linear Algebra (10th ed.), John Wiley & Sons, p. 414, ISBN 978-0-470-45821-1. 22. Horn, Roger A.; Johnson, Charles R. (2012), Matrix Analysis (2nd ed.), Cambridge University Press, p. 17, ISBN 978-0-521-83940-2. 23. Brown 1991, I.2.21 and 22 24. Greub 1975, Section III.2 25. Brown 1991, Definition II.3.3 26. Greub 1975, Section III.1 27. Brown 1991, Theorem II.3.22 28. Horn & Johnson 1985, Theorem 2.5.6 29. Brown 1991, Definition I.2.28 30. Brown 1991, Definition I.5.13 31. Horn & Johnson 1985, Chapter 7 32. Horn & Johnson 1985, Theorem 7.2.1 33. Horn & Johnson 1985, Example 4.0.6, p. 169 34. "Matrix | mathematics". Encyclopedia Britannica. Retrieved 2020-08-19. 35. Brown 1991, Definition III.2.1 36. Brown 1991, Theorem III.2.12 37. Brown 1991, Corollary III.2.16 38. Mirsky 1990, Theorem 1.4.1 39. Brown 1991, Theorem III.3.18 40. Eigen means "own" in German and in Dutch. 41. Brown 1991, Definition III.4.1 42. Brown 1991, Definition III.4.9 43. Brown 1991, Corollary III.4.10 44. Householder 1975, Ch. 7 45. Bau III & Trefethen 1997 46. Golub & Van Loan 1996, Algorithm 1.3.1 47. Golub & Van Loan 1996, Chapters 9 and 10, esp. section 10.2 48. Golub & Van Loan 1996, Chapter 2.3 49. Grcar, Joseph F. (2011-01-01). "John von Neumann's Analysis of Gaussian Elimination and the Origins of Modern Numerical Analysis". SIAM Review. 53 (4): 607–682. doi:10.1137/080734716. ISSN 0036-1445. 50.
For example, Mathematica, see Wolfram 2003, Ch. 3.7 51. Press, Flannery & Teukolsky 1992 52. Stoer & Bulirsch 2002, Section 4.1 53. Horn & Johnson 1985, Theorem 2.5.4 54. Horn & Johnson 1985, Ch. 3.1, 3.2 55. Arnold & Cooke 1992, Sections 14.5, 7, 8 56. Bronson 1989, Ch. 15 57. Coburn 1955, Ch. V 58. Lang 2002, Chapter XIII 59. Lang 2002, XVII.1, p. 643 60. Lang 2002, Proposition XIII.4.16 61. Reichl 2004, Section L.2 62. Greub 1975, Section III.3 63. Greub 1975, Section III.3.13 64. See any standard reference in a group. 65. Additionally, the group must be closed in the general linear group. 66. Baker 2003, Def. 1.30 67. Baker 2003, Theorem 1.2 68. Artin 1991, Chapter 4.5 69. Rowen 2008, Example 19.2, p. 198 70. See any reference in representation theory or group representation. 71. See the item "Matrix" in Itô, ed. 1987 72. "Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps." Halmos 1982, p. 23, Chapter 5 73. "Empty Matrix: A matrix is empty if either its row or column dimension is zero", Glossary Archived 2009-04-29 at the Wayback Machine, O-Matrix v6 User Guide 74. "A matrix having at least one dimension equal to zero is called an empty matrix", MATLAB Data Structures Archived 2009-12-28 at the Wayback Machine 75. Fudenberg & Tirole 1983, Section 1.1.1 76. Manning 1999, Section 15.3.4 77. Ward 1997, Ch. 2.8 78. Stinson 2005, Ch. 1.1.5 and 1.2.4 79. Association for Computing Machinery 1979, Ch. 7 80. Godsil & Royle 2004, Ch. 8.1 81. Punnen 2002 82. Lang 1987a, Ch. XVI.6 83. Nocedal 2006, Ch. 16 84. Lang 1987a, Ch. XVI.1 85. Lang 1987a, Ch. XVI.5. For a more advanced, and more general statement see Lang 1969, Ch. VI.2 86. Gilbarg & Trudinger 2001 87. Šolin 2005, Ch. 2.5. See also stiffness method. 88. Latouche & Ramaswami 1999 89. Mehata & Srinivasan 1978, Ch. 2.8 90. Healy, Michael (1986), Matrices for Statistics, Oxford University Press, ISBN 978-0-19-850702-4 91. Krzanowski 1988, Ch. 2.2., p. 60 92. Krzanowski 1988, Ch. 4.1 93. Conrey 2007 94. Zabrodin, Brezin & Kazakov et al. 2006 95. Itzykson & Zuber 1980, Ch. 2 96. see Burgess & Moore 2007, section 1.6.3. (SU(3)), section 2.4.3.2. (Kobayashi–Maskawa matrix) 97. Schiff 1968, Ch. 6 98. Bohm 2001, sections II.4 and II.8 99. Weinberg 1995, Ch. 3 100. Wherrett 1987, part II 101. Riley, Hobson & Bence 1997, 7.17 102. Guenther 1990, Ch. 5 103. Shen, Crossley & Lun 1999 cited by Bretscher 2005, p. 1 104. Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001 ISBN 978-0-321-07912-1, pp. 564–565 105. Needham, Joseph; Wang Ling (1959). Science and Civilisation in China. Vol. III. Cambridge: Cambridge University Press. p. 117. ISBN 978-0-521-05801-8. 106. Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001 ISBN 978-0-321-07912-1, p. 564 107. Merriam-Webster dictionary, Merriam-Webster, retrieved April 20, 2009 108. Although many sources state that J. J. Sylvester coined the mathematical term "matrix" in 1848, Sylvester published nothing in 1848. (For proof that Sylvester published nothing in 1848, see: J. J. Sylvester with H. F. Baker, ed., The Collected Mathematical Papers of James Joseph Sylvester (Cambridge, England: Cambridge University Press, 1904), vol. 1.) His earliest use of the term "matrix" occurs in 1850 in J. J.
Sylvester (1850) "Additions to the articles in the September number of this journal, "On a new class of theorems," and on Pascal's theorem," The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 37: 363-370. From page 369: "For this purpose, we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This does not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants ... " 109. The Collected Mathematical Papers of James Joseph Sylvester: 1837–1853, Paper 37, p. 247 110. Phil.Trans. 1858, vol.148, pp.17-37 Math. Papers II 475-496 111. Dieudonné, ed. 1978, Vol. 1, Ch. III, p. 96 112. Knobloch 1994 113. Hawkins 1975 114. Kronecker 1897 115. Weierstrass 1915, pp. 271–286 116. Bôcher 2004 117. Mehra & Rechenberg 1987 118. Whitehead, Alfred North; and Russell, Bertrand (1913) Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK (republished 1962) cf page 162ff. 119. Tarski, Alfred; (1946) Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc, New York NY, ISBN 0-486-28462-X. References • Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 • Arnold, Vladimir I.; Cooke, Roger (1992), Ordinary differential equations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-54813-3 • Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1 • Association for Computing Machinery (1979), Computer Graphics, Tata McGraw–Hill, ISBN 978-0-07-059376-3 • Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-85233-470-3 • Bau III, David; Trefethen, Lloyd N. (1997), Numerical linear algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-361-9 • Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X • Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall • Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490 • Bronson, Richard (1989), Schaum's outline of theory and problems of matrix operations, New York: McGraw–Hill, ISBN 978-0-07-007978-6 • Brown, William C. (1991), Matrices and vector spaces, New York, NY: Marcel Dekker, ISBN 978-0-8247-8419-5 • Coburn, Nathaniel (1955), Vector and tensor analysis, New York, NY: Macmillan, OCLC 1029828 • Conrey, J. Brian (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press, ISBN 978-0-521-69964-8 • Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1 • Fudenberg, Drew; Tirole, Jean (1983), Game Theory, MIT Press • Gilbarg, David; Trudinger, Neil S. (2001), Elliptic partial differential equations of second order (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4 • Godsil, Chris; Royle, Gordon (2004), Algebraic Graph Theory, Graduate Texts in Mathematics, vol. 207, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8 • Golub, Gene H.; Van Loan, Charles F. 
(1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9 • Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90110-7 • Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics, vol. 19 (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 0675952 • Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6 • Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York, NY: Dover Publications, MR 0378371 • Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8. • Krzanowski, Wojtek J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series, vol. 3, The Clarendon Press Oxford University Press, ISBN 978-0-19-852211-9, MR 0969370 • Itô, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I-IV (2nd ed.), MIT Press, ISBN 978-0-262-09026-1, MR 0901762 • Lang, Serge (1969), Analysis II, Addison-Wesley • Lang, Serge (1987a), Calculus of several variables (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96405-8 • Lang, Serge (1987b), Linear algebra, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96412-6 • Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 • Latouche, Guy; Ramaswami, Vaidyanathan (1999), Introduction to matrix analytic methods in stochastic modeling (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8 • Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT Press, ISBN 978-0-262-13360-9 • Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York, NY: McGraw–Hill, ISBN 978-0-07-096612-3 • Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-66434-7 • Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76-91646 • Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, p. 449, ISBN 978-0-387-30303-1 • Oualline, Steve (2003), Practical C++ programming, O'Reilly, ISBN 978-0-596-00419-4 • Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), "LU Decomposition and Its Applications" (PDF), Numerical Recipes in FORTRAN: The Art of Scientific Computing (2nd ed.), Cambridge University Press, pp. 34–42, archived from the original on 2009-09-06 • Protter, Murray H.; Morrey, Charles B. Jr. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, LCCN 76087042 • Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston, MA: Kluwer Academic Publishers, ISBN 978-1-4020-0664-7 • Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0 • Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4153-2 • Šolin, Pavel (2005), Partial Differential Equations and the Finite Element Method, Wiley-Interscience, ISBN 978-0-471-76409-0 • Stinson, Douglas R.
(2005), Cryptography, Discrete Mathematics and its Applications, Chapman & Hall/CRC, ISBN 978-1-58488-508-5 • Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95452-3 • Ward, J. P. (1997), Quaternions and Cayley numbers, Mathematics and its Applications, vol. 403, Dordrecht, NL: Kluwer Academic Publishers Group, doi:10.1007/978-94-011-5768-1, ISBN 978-0-7923-4513-8, MR 1458894 • Wolfram, Stephen (2003), The Mathematica Book (5th ed.), Champaign, IL: Wolfram Media, ISBN 978-1-57955-022-6 Physics references • Bohm, Arno (2001), Quantum Mechanics: Foundations and Applications, Springer, ISBN 0-387-95330-2 • Burgess, Cliff; Moore, Guy (2007), The Standard Model. A Primer, Cambridge University Press, ISBN 978-0-521-86036-9 • Guenther, Robert D. (1990), Modern Optics, John Wiley, ISBN 0-471-60538-7 • Itzykson, Claude; Zuber, Jean-Bernard (1980), Quantum Field Theory, McGraw–Hill, ISBN 0-07-032071-3 • Riley, Kenneth F.; Hobson, Michael P.; Bence, Stephen J. (1997), Mathematical methods for physics and engineering, Cambridge University Press, ISBN 0-521-55506-X • Schiff, Leonard I. (1968), Quantum Mechanics (3rd ed.), McGraw–Hill • Weinberg, Steven (1995), The Quantum Theory of Fields. Volume I: Foundations, Cambridge University Press, ISBN 0-521-55001-7 • Wherrett, Brian S. (1987), Group Theory for Atoms, Molecules and Solids, Prentice–Hall International, ISBN 0-13-365461-3 • Zabrodin, Anton; Brezin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), Applications of Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-4020-4530-1 Historical references • A. Cayley A memoir on the theory of matrices. Phil. Trans. 148 1858 17-37; Math. Papers II 475-496 • Bôcher, Maxime (2004), Introduction to higher algebra, New York, NY: Dover Publications, ISBN 978-0-486-49570-5, reprint of the 1907 original edition • Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, vol. I (1841–1853), Cambridge University Press, pp. 123–126 • Dieudonné, Jean, ed. (1978), Abrégé d'histoire des mathématiques 1700-1900, Paris, FR: Hermann • Hawkins, Thomas (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica, 2: 1–29, doi:10.1016/0315-0860(75)90032-4, ISSN 0315-0860, MR 0469635 • Knobloch, Eberhard (1994), "From Gauss to Weierstrass: determinant theory and its historical evaluations", The intersection of history and mathematics, Science Networks Historical Studies, vol. 15, Basel, Boston, Berlin: Birkhäuser, pp. 51–66, MR 1308079 • Kronecker, Leopold (1897), Hensel, Kurt (ed.), Leopold Kronecker's Werke, Teubner • Mehra, Jagdish; Rechenberg, Helmut (1987), The Historical Development of Quantum Theory (1st ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96284-9 • Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), Nine Chapters of the Mathematical Art, Companion and Commentary (2nd ed.), Oxford University Press, ISBN 978-0-19-853936-0 • Weierstrass, Karl (1915), Collected works, vol. 3 Further reading • "Matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Kaw, Autar K. 
(September 2008), Introduction to Matrix Algebra, ISBN 978-0-615-25126-4 • The Matrix Cookbook (PDF), retrieved 24 March 2014 • Brookes, Mike (2005), The Matrix Reference Manual, London: Imperial College, retrieved 10 Dec 2008 External links • MacTutor: Matrices and determinants • Matrices and Linear Algebra on the Earliest Uses Pages • Earliest Uses of Symbols for Matrices and Vectors
Submerged Resources Center The Submerged Resources Center is a unit within the United States National Park Service. The unit is based out of Lakewood, Colorado in the NPS Intermountain Region headquarters.[1] History In 1976, the Submerged Cultural Resources Unit (known as SCRU) was formed and staffed by underwater archeologists, photographers, and service divers to provide the expertise required by managers of national parks with submerged lands. Renamed the Submerged Resources Center in 1999 to include natural resources, the core mission of the program has remained the same: to inventory and evaluate submerged resources in the National Park System and to assist other agencies, nationally and internationally, with underwater resource issues. Projects Projects the SRC has been involved in include work on USS Arizona, the shipwrecks of Isle Royale National Park, and mapping the shipwrecks of Dry Tortugas National Park, among others.[2] Daniel Lenihan, former Chief of the SRC, describes the founding of the unit and many of its exploits in Submerged: Adventures of America's Most Elite Underwater Archeology Team.[3] References 1. Submerged Resources Center. nps.gov 2. Nimz, J; Clark, T (2012). "Aquatic Research Opportunities with the National Park Service". In Steller D, Lobel L (eds.). Diving for Science 2012. Proceedings of the American Academy of Underwater Sciences 31st Symposium. ISBN 978-0-9800423-6-8. Archived from the original on September 22, 2013. 3. Lenihan, Daniel (2002). Submerged: Adventures of America's Most Elite Underwater Archeology Team. New York: Newmarket Press. ISBN 1-55704-505-4.
Submersion (mathematics) In mathematics, a submersion is a differentiable map between differentiable manifolds whose differential is everywhere surjective. This is a basic concept in differential topology. The notion of a submersion is dual to the notion of an immersion. Definition Let M and N be differentiable manifolds and $f\colon M\to N$ be a differentiable map between them. The map f is a submersion at a point $p\in M$ if its differential $Df_{p}\colon T_{p}M\to T_{f(p)}N$ is a surjective linear map.[1] In this case p is called a regular point of the map f; otherwise, p is a critical point. A point $q\in N$ is a regular value of f if all points p in the preimage $f^{-1}(q)$ are regular points. A differentiable map f that is a submersion at each point $p\in M$ is called a submersion. Equivalently, f is a submersion if its differential $Df_{p}$ has constant rank equal to the dimension of N. A word of warning: some authors use the term critical point to describe a point where the rank of the Jacobian matrix of f at p is not maximal.[2] Indeed, this is the more useful notion in singularity theory. If the dimension of M is greater than or equal to the dimension of N then these two notions of critical point coincide. But if the dimension of M is less than the dimension of N, all points are critical according to the definition above (the differential cannot be surjective) but the rank of the Jacobian may still be maximal (if it is equal to dim M). The definition given above is the more commonly used; e.g., in the formulation of Sard's theorem. Submersion theorem Given a submersion between smooth manifolds $f\colon M\to N$ of dimensions $m$ and $n$, for each $x\in M$ there are surjective charts $\phi :U\to \mathbb {R} ^{m}$ of $M$ around $x$, and $\psi :V\to \mathbb {R} ^{n}$ of $N$ around $f(x)$, such that $f$ restricts to a submersion $f\colon U\to V$ which, when expressed in coordinates as $\psi \circ f\circ \phi ^{-1}:\mathbb {R} ^{m}\to \mathbb {R} ^{n}$, becomes an ordinary orthogonal projection. As an application, for each $p\in N$ the corresponding fiber of $f$, denoted $M_{p}=f^{-1}(\{p\})$, can be equipped with the structure of a smooth submanifold of $M$ whose dimension is equal to the difference of the dimensions of $M$ and $N$. The theorem is a consequence of the inverse function theorem (see Inverse function theorem#Giving a manifold structure). For example, consider $f\colon \mathbb {R} ^{3}\to \mathbb {R} $ given by $f(x,y,z)=x^{4}+y^{4}+z^{4}.$ The Jacobian matrix is ${\begin{bmatrix}{\frac {\partial f}{\partial x}}&{\frac {\partial f}{\partial y}}&{\frac {\partial f}{\partial z}}\end{bmatrix}}={\begin{bmatrix}4x^{3}&4y^{3}&4z^{3}\end{bmatrix}}.$ This has maximal rank at every point except for $(0,0,0)$. Also, the fibers $f^{-1}(\{t\})=\left\{(a,b,c)\in \mathbb {R} ^{3}:a^{4}+b^{4}+c^{4}=t\right\}$ are empty for $t<0$, and equal to a point when $t=0$. Hence we only have a smooth submersion $f\colon \mathbb {R} ^{3}\setminus \{(0,0,0)\}\to \mathbb {R} _{>0},$ and the subsets $M_{t}=\left\{(a,b,c)\in \mathbb {R} ^{3}:a^{4}+b^{4}+c^{4}=t\right\}$ are two-dimensional smooth manifolds for $t>0$. Examples • Any projection $\pi \colon \mathbb {R} ^{m+n}\rightarrow \mathbb {R} ^{n}\subset \mathbb {R} ^{m+n}$ • Local diffeomorphisms • Riemannian submersions • The projection in a smooth vector bundle or a more general smooth fibration.
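The example above is easy to check symbolically. Here is a minimal sympy sketch (variable names are illustrative): it computes the Jacobian of $f(x,y,z)=x^{4}+y^{4}+z^{4}$ and confirms that the differential has maximal rank everywhere except at the origin, so the origin is the only critical point.

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = sp.Matrix([x**4 + y**4 + z**4])

J = f.jacobian([x, y, z])                 # [4x^3  4y^3  4z^3]
print(J)

# The differential is surjective onto R exactly where the Jacobian is nonzero:
print(J.subs({x: 0, y: 0, z: 0}).rank())  # 0 -> (0,0,0) is a critical point
print(J.subs({x: 1, y: 0, z: 0}).rank())  # 1 -> a regular point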
The surjectivity of the differential is a necessary condition for the existence of a local trivialization. Maps between spheres One large class of examples of submersions are submersions between spheres of higher dimension, such as $f:S^{n+k}\to S^{k}$ whose fibers have dimension $n$. This is because the fibers (inverse images of elements $p\in S^{k}$) are smooth manifolds of dimension $n$. Then, if we take a path $\gamma :I\to S^{k}$ and form the pullback ${\begin{matrix}M_{I}&\to &S^{n+k}\\\downarrow &&\downarrow f\\I&\xrightarrow {\gamma } &S^{k}\end{matrix}}$ we get an example of a special kind of bordism, called a framed bordism. In fact, the framed cobordism groups $\Omega _{n}^{fr}$ are intimately related to the stable homotopy groups. Families of algebraic varieties Another large class of submersions are given by families of algebraic varieties $\pi :{\mathfrak {X}}\to S$ whose fibers are smooth algebraic varieties. If we consider the underlying manifolds of these varieties, we get smooth manifolds. For example, the Weierstrass family $\pi :{\mathcal {W}}\to \mathbb {A} ^{1}$ of elliptic curves is a widely studied submersion because it exhibits many of the technical complexities used to demonstrate more advanced theory, such as intersection homology and perverse sheaves. This family is given by ${\mathcal {W}}=\{(t,x,y)\in \mathbb {A} ^{1}\times \mathbb {A} ^{2}:y^{2}=x(x-1)(x-t)\}$ where $\mathbb {A} ^{1}$ is the affine line and $\mathbb {A} ^{2}$ is the affine plane. Since we are considering complex varieties, these are equivalently the spaces $\mathbb {C} ,\mathbb {C} ^{2}$ of the complex line and the complex plane. Note that the points $t=0,1$ must actually be removed, because the corresponding curves are singular (the cubic has a double root). Local normal form If f: M → N is a submersion at p and f(p) = q ∈ N, then there exists an open neighborhood U of p in M, an open neighborhood V of q in N, and local coordinates (x1, …, xm) at p and (x1, …, xn) at q such that f(U) = V, and the map f in these local coordinates is the standard projection $f(x_{1},\ldots ,x_{n},x_{n+1},\ldots ,x_{m})=(x_{1},\ldots ,x_{n}).$ It follows that the full preimage f−1(q) in M of a regular value q in N under a differentiable map f: M → N is either empty or is a differentiable manifold of dimension dim M − dim N, possibly disconnected. This is the content of the regular value theorem (also known as the submersion theorem). In particular, the conclusion holds for all q in N if the map f is a submersion. Topological manifold submersions Submersions are also well-defined for general topological manifolds.[3] A topological manifold submersion is a continuous surjection f : M → N such that for all p in M, for some continuous charts ψ at p and φ at f(p), the map φ ∘ f ∘ ψ−1 is equal to the projection map from Rm to Rn, where m = dim(M) ≥ n = dim(N). See also • Ehresmann's fibration theorem Notes 1. Crampin & Pirani 1994, p. 243. do Carmo 1994, p. 185. Frankel 1997, p. 181. Gallot, Hulin & Lafontaine 2004, p. 12. Kosinski 2007, p. 27. Lang 1999, p. 27. Sternberg 2012, p. 378. 2. Arnold, Gusein-Zade & Varchenko 1985. 3. Lang 1999, p. 27. References • Arnold, Vladimir I.; Gusein-Zade, Sabir M.; Varchenko, Alexander N. (1985). Singularities of Differentiable Maps: Volume 1. Birkhäuser. ISBN 0-8176-3187-9. • Bruce, James W.; Giblin, Peter J. (1984). Curves and Singularities. Cambridge University Press. ISBN 0-521-42999-4. MR 0774048.
• Crampin, Michael; Pirani, Felix Arnold Edward (1994). Applicable differential geometry. Cambridge, England: Cambridge University Press. ISBN 978-0-521-23190-9. • do Carmo, Manfredo Perdigao (1994). Riemannian Geometry. ISBN 978-0-8176-3490-2. • Frankel, Theodore (1997). The Geometry of Physics. Cambridge: Cambridge University Press. ISBN 0-521-38753-1. MR 1481707. • Gallot, Sylvestre; Hulin, Dominique; Lafontaine, Jacques (2004). Riemannian Geometry (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-540-20493-0. • Kosinski, Antoni Albert (2007) [1993]. Differential manifolds. Mineola, New York: Dover Publications. ISBN 978-0-486-46244-8. • Lang, Serge (1999). Fundamentals of Differential Geometry. Graduate Texts in Mathematics. New York: Springer. ISBN 978-0-387-98593-0. • Sternberg, Shlomo Zvi (2012). Curvature in Mathematics and Physics. Mineola, New York: Dover Publications. ISBN 978-0-486-47855-5. Further reading • https://mathoverflow.net/questions/376129/what-are-the-sufficient-and-necessary-conditions-for-surjective-submersions-to-b?rq=1
Submodular set function In mathematics, a submodular set function (also known as a submodular function) is a set function whose value, informally, has the property that the difference in the incremental value of the function that a single element makes when added to an input set decreases as the size of the input set increases. Submodular functions have a natural diminishing returns property which makes them suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences) and electrical networks. Recently, submodular functions have also found immense utility in several real world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization and many other domains.[1][2][3][4] Definition If $\Omega $ is a finite set, a submodular function is a set function $f:2^{\Omega }\rightarrow \mathbb {R} $, where $2^{\Omega }$ denotes the power set of $\Omega $, which satisfies one of the following equivalent conditions.[5] 1. For every $X,Y\subseteq \Omega $ with $X\subseteq Y$ and every $x\in \Omega \setminus Y$ we have that $f(X\cup \{x\})-f(X)\geq f(Y\cup \{x\})-f(Y)$. 2. For every $S,T\subseteq \Omega $ we have that $f(S)+f(T)\geq f(S\cup T)+f(S\cap T)$. 3. For every $X\subseteq \Omega $ and $x_{1},x_{2}\in \Omega \backslash X$ such that $x_{1}\neq x_{2}$ we have that $f(X\cup \{x_{1}\})+f(X\cup \{x_{2}\})\geq f(X\cup \{x_{1},x_{2}\})+f(X)$. A nonnegative submodular function is also a subadditive function, but a subadditive function need not be submodular. If $\Omega $ is not assumed finite, then the above conditions are not equivalent. In particular a function $f$ defined by $f(S)=1$ if $S$ is finite and $f(S)=0$ if $S$ is infinite satisfies the first condition above, but the second condition fails when $S$ and $T$ are infinite sets with finite intersection. Types and examples of submodular functions Monotone A set function $f$ is monotone if for every $T\subseteq S$ we have that $f(T)\leq f(S)$. Examples of monotone submodular functions include: Linear (Modular) functions Any function of the form $f(S)=\sum _{i\in S}w_{i}$ is called a linear function. Additionally if $\forall i,w_{i}\geq 0$ then f is monotone. Budget-additive functions Any function of the form $f(S)=\min \left\{B,~\sum _{i\in S}w_{i}\right\}$ for each $w_{i}\geq 0$ and $B\geq 0$ is called budget additive.[6] Coverage functions Let $\Omega =\{E_{1},E_{2},\ldots ,E_{n}\}$ be a collection of subsets of some ground set $\Omega '$. The function $f(S)=\left|\bigcup _{E_{i}\in S}E_{i}\right|$ for $S\subseteq \Omega $ is called a coverage function. This can be generalized by adding non-negative weights to the elements. Entropy Let $\Omega =\{X_{1},X_{2},\ldots ,X_{n}\}$ be a set of random variables. Then for any $S\subseteq \Omega $ we have that $H(S)$ is a submodular function, where $H(S)$ is the entropy of the set of random variables $S$, a fact known as Shannon's inequality.[7] Further inequalities for the entropy function are known to hold, see entropic vector. Matroid rank functions Let $\Omega =\{e_{1},e_{2},\dots ,e_{n}\}$ be the ground set on which a matroid is defined. Then the rank function of the matroid is a submodular function.[8] Non-monotone A submodular function that is not monotone is called non-monotone. 
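Because the ground set is finite, the diminishing-returns condition in the definition can be verified by brute force. The sketch below (the sets E_i are arbitrary toy data) checks it for a small coverage function:

from itertools import chain, combinations

E = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]     # E_1, ..., E_4 as toy subsets
Omega = range(len(E))                           # ground set indexes the E_i

def f(S):                                       # coverage: size of the union
    return len(set().union(*(E[i] for i in S)))

def subsets(it):
    s = list(it)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# For every X ⊆ Y and x ∉ Y:  f(X ∪ {x}) − f(X) ≥ f(Y ∪ {x}) − f(Y)
print(all(f(set(X) | {x}) - f(set(X)) >= f(set(Y) | {x}) - f(set(Y))
          for Y in subsets(Omega)
          for X in subsets(Y)
          for x in set(Omega) - set(Y)))        # True

The same loop with the inequality reversed would test supermodularity; modular (linear) functions pass both tests.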
Symmetric A non-monotone submodular function $f$ is called symmetric if for every $S\subseteq \Omega $ we have that $f(S)=f(\Omega -S)$. Examples of symmetric non-monotone submodular functions include: Graph cuts Let $\Omega =\{v_{1},v_{2},\dots ,v_{n}\}$ be the vertices of a graph. For any set of vertices $S\subseteq \Omega $ let $f(S)$ denote the number of edges $e=(u,v)$ such that $u\in S$ and $v\in \Omega -S$. This can be generalized by adding non-negative weights to the edges. Mutual information Let $\Omega =\{X_{1},X_{2},\ldots ,X_{n}\}$ be a set of random variables. Then for any $S\subseteq \Omega $ we have that $f(S)=I(S;\Omega -S)$ is a submodular function, where $I(S;\Omega -S)$ is the mutual information. Asymmetric A non-monotone submodular function which is not symmetric is called asymmetric. Directed cuts Let $\Omega =\{v_{1},v_{2},\dots ,v_{n}\}$ be the vertices of a directed graph. For any set of vertices $S\subseteq \Omega $ let $f(S)$ denote the number of edges $e=(u,v)$ such that $u\in S$ and $v\in \Omega -S$. This can be generalized by adding non-negative weights to the directed edges. Continuous extensions Definition A set function $f:2^{\Omega }\rightarrow \mathbb {R} $ with $|\Omega |=n$ can also be represented as a function on $\{0,1\}^{n}$, by associating each $S\subseteq \Omega $ with a binary vector $x^{S}\in \{0,1\}^{n}$ such that $x_{i}^{S}=1$ when $i\in S$, and $x_{i}^{S}=0$ otherwise. The continuous extension of $f$ is defined to be any continuous function $F:[0,1]^{n}\rightarrow \mathbb {R} $ such that it matches the value of $f$ on $x\in \{0,1\}^{n}$, i.e. $F(x^{S})=f(S)$. In the context of submodular functions, there are a few examples of continuous extensions that are commonly used, which are described as follows. Lovász extension This extension is named after mathematician László Lovász.[9] Consider any vector $\mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}$ such that each $0\leq x_{i}\leq 1$. Then the Lovász extension is defined as $f^{L}(\mathbf {x} )=\mathbb {E} (f(\{i|x_{i}\geq \lambda \}))$ where the expectation is over $\lambda $ chosen from the uniform distribution on the interval $[0,1]$. The Lovász extension is a convex function if and only if $f$ is a submodular function. Multilinear extension Consider any vector $\mathbf {x} =\{x_{1},x_{2},\ldots ,x_{n}\}$ such that each $0\leq x_{i}\leq 1$. Then the multilinear extension is defined as $F(\mathbf {x} )=\sum _{S\subseteq \Omega }f(S)\prod _{i\in S}x_{i}\prod _{i\notin S}(1-x_{i})$. Convex closure Consider any vector $\mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}$ such that each $0\leq x_{i}\leq 1$. Then the convex closure is defined as $f^{-}(\mathbf {x} )=\min \left(\sum _{S}\alpha _{S}f(S):\sum _{S}\alpha _{S}1_{S}=\mathbf {x} ,\sum _{S}\alpha _{S}=1,\alpha _{S}\geq 0\right)$. The convex closure of any set function is convex over $[0,1]^{n}$. Concave closure Consider any vector $\mathbf {x} =\{x_{1},x_{2},\dots ,x_{n}\}$ such that each $0\leq x_{i}\leq 1$. Then the concave closure is defined as $f^{+}(\mathbf {x} )=\max \left(\sum _{S}\alpha _{S}f(S):\sum _{S}\alpha _{S}1_{S}=\mathbf {x} ,\sum _{S}\alpha _{S}=1,\alpha _{S}\geq 0\right)$. Connections between extensions For the extensions discussed above, it can be shown that $f^{+}(\mathbf {x} )\geq F(\mathbf {x} )\geq f^{-}(\mathbf {x} )=f^{L}(\mathbf {x} )$ when $f$ is submodular.[10] Properties 1. The class of submodular functions is closed under non-negative linear combinations. 
Consider submodular functions $f_{1},f_{2},\ldots ,f_{k}$ and non-negative numbers $\alpha _{1},\alpha _{2},\ldots ,\alpha _{k}$. Then the function $g$ defined by $g(S)=\sum _{i=1}^{k}\alpha _{i}f_{i}(S)$ is submodular. 2. For any submodular function $f$, the function defined by $g(S)=f(\Omega \setminus S)$ is submodular. 3. The function $g(S)=\min(f(S),c)$, where $c$ is a real number, is submodular whenever $f$ is monotone submodular. More generally, $g(S)=h(f(S))$ is submodular for any non-decreasing concave function $h$. 4. Consider a random process where a set $T$ is chosen with each element in $\Omega $ being included in $T$ independently with probability $p$. Then the following inequality is true: $\mathbb {E} [f(T)]\geq pf(\Omega )+(1-p)f(\varnothing )$ where $\varnothing $ is the empty set. More generally, consider the following random process where a set $S$ is constructed as follows. For each $1\leq i\leq l$, $A_{i}\subseteq \Omega $, construct $S_{i}$ by including each element in $A_{i}$ independently into $S_{i}$ with probability $p_{i}$. Furthermore let $S=\cup _{i=1}^{l}S_{i}$. Then the following inequality is true: $\mathbb {E} [f(S)]\geq \sum _{R\subseteq [l]}\Pi _{i\in R}p_{i}\Pi _{i\notin R}(1-p_{i})f(\cup _{i\in R}A_{i})$. Optimization problems Submodular functions have properties which are very similar to convex and concave functions. For this reason, an optimization problem which concerns optimizing a convex or concave function can also be described as the problem of maximizing or minimizing a submodular function subject to some constraints. Submodular set function minimization The hardness of minimizing a submodular set function depends on constraints imposed on the problem. 1. The unconstrained problem of minimizing a submodular function is computable in polynomial time,[11][12] and even in strongly-polynomial time.[13][14] Computing the minimum cut in a graph is a special case of this minimization problem. 2. The problem of minimizing a submodular function with a cardinality lower bound is NP-hard, with polynomial factor lower bounds on the approximation factor.[15][16] Submodular set function maximization Unlike the case of minimization, maximizing a generic submodular function is NP-hard even in the unconstrained setting. Thus, most of the work in this field is concerned with polynomial-time approximation algorithms, including greedy algorithms and local search algorithms (a short greedy sketch is given at the end of this article). 1. The problem of maximizing a non-negative submodular function admits a 1/2 approximation algorithm.[17][18] Computing the maximum cut of a graph is a special case of this problem. 2. The problem of maximizing a monotone submodular function subject to a cardinality constraint admits a $1-1/e$ approximation algorithm.[19][20] The maximum coverage problem is a special case of this problem. 3. The problem of maximizing a monotone submodular function subject to a matroid constraint (which subsumes the case above) also admits a $1-1/e$ approximation algorithm.[21][22][23] Many of these algorithms can be unified within a semi-differential-based framework of algorithms.[16] Related optimization problems Apart from submodular minimization and maximization, there are several other natural optimization problems related to submodular functions. 1. Minimizing the difference between two submodular functions[24] is not only NP-hard, but also inapproximable.[25] 2.
Minimization/maximization of a submodular function subject to a submodular level set constraint (also known as submodular optimization subject to submodular cover or submodular knapsack constraint) admits bounded approximation guarantees.[26] 3. Partitioning data based on a submodular function to maximize the average welfare is known as the submodular welfare problem, which also admits bounded approximation guarantees (see welfare maximization). Applications Submodular functions naturally occur in several real-world applications, in economics, game theory, machine learning and computer vision.[4][27] Owing to the diminishing returns property, submodular functions naturally model costs of items, since there is often a larger discount as the number of items one buys increases. Submodular functions model notions of complexity, similarity and cooperation when they appear in minimization problems. In maximization problems, on the other hand, they model notions of diversity, information and coverage. See also • Supermodular function • Matroid, Polymatroid • Utility functions on indivisible goods Citations 1. H. Lin and J. Bilmes, A Class of Submodular Functions for Document Summarization, ACL-2011. 2. S. Tschiatschek, R. Iyer, H. Wei and J. Bilmes, Learning Mixtures of Submodular Functions for Image Collection Summarization, NIPS-2014. 3. A. Krause and C. Guestrin, Near-optimal nonmyopic value of information in graphical models, UAI-2005. 4. A. Krause and C. Guestrin, Beyond Convexity: Submodularity in Machine Learning, Tutorial at ICML-2008 5. (Schrijver 2003, §44, p. 766) 6. Buchbinder, Niv; Feldman, Moran (2018). "Submodular Functions Maximization Problems". In Gonzalez, Teofilo F. (ed.). Handbook of Approximation Algorithms and Metaheuristics, Second Edition: Methodologies and Traditional Applications. Chapman and Hall/CRC. doi:10.1201/9781351236423. ISBN 9781351236423. 7. "Information Processing and Learning" (PDF). cmu. 8. Fujishige (2005) p.22 9. Lovász, L. (1983). "Submodular functions and convexity". Mathematical Programming the State of the Art: 235–257. doi:10.1007/978-3-642-68874-4_10. ISBN 978-3-642-68876-8. 10. Vondrák, Jan. "Polyhedral techniques in combinatorial optimization: Lecture 17" (PDF). 11. Grötschel, M.; Lovasz, L.; Schrijver, A. (1981). "The ellipsoid method and its consequences in combinatorial optimization". Combinatorica. 1 (2): 169–197. doi:10.1007/BF02579273. hdl:10068/182482. S2CID 43787103. 12. Cunningham, W. H. (1985). "On submodular function minimization". Combinatorica. 5 (3): 185–192. doi:10.1007/BF02579361. S2CID 33192360. 13. Iwata, S.; Fleischer, L.; Fujishige, S. (2001). "A combinatorial strongly polynomial algorithm for minimizing submodular functions". J. ACM. 48 (4): 761–777. doi:10.1145/502090.502096. S2CID 888513. 14. Schrijver, A. (2000). "A combinatorial algorithm minimizing submodular functions in strongly polynomial time". J. Combin. Theory Ser. B. 80 (2): 346–355. doi:10.1006/jctb.2000.1989. 15. Z. Svitkina and L. Fleischer, Submodular approximation: Sampling-based algorithms and lower bounds, SIAM Journal on Computing (2011). 16. R. Iyer, S. Jegelka and J. Bilmes, Fast Semidifferential based submodular function optimization, Proc. ICML (2013). 17. U. Feige, V. Mirrokni and J. Vondrák, Maximizing non-monotone submodular functions, Proc. of 48th FOCS (2007), pp. 461–471. 18. N. Buchbinder, M. Feldman, J. Naor and R. Schwartz, A tight linear time (1/2)-approximation for unconstrained submodular maximization, Proc. of 53rd FOCS (2012), pp. 649-658.
19. Nemhauser, George; Wolsey, L. A.; Fisher, M. L. (1978). "An analysis of approximations for maximizing submodular set functions I". Mathematical Programming. 14 (14): 265–294. doi:10.1007/BF01588971. S2CID 206800425. 20. Williamson, David P. "Bridging Continuous and Discrete Optimization: Lecture 23" (PDF). 21. G. Calinescu, C. Chekuri, M. Pál and J. Vondrák, Maximizing a submodular set function subject to a matroid constraint, SIAM J. Comp. 40:6 (2011), 1740-1766. 22. M. Feldman, J. Naor and R. Schwartz, A unified continuous greedy algorithm for submodular maximization, Proc. of 52nd FOCS (2011). 23. Y. Filmus, J. Ward, A tight combinatorial algorithm for submodular maximization subject to a matroid constraint, Proc. of 53rd FOCS (2012), pp. 659-668. 24. M. Narasimhan and J. Bilmes, A submodular-supermodular procedure with applications to discriminative structure learning, In Proc. UAI (2005). 25. R. Iyer and J. Bilmes, Algorithms for Approximate Minimization of the Difference between Submodular Functions, In Proc. UAI (2012). 26. R. Iyer and J. Bilmes, Submodular Optimization Subject to Submodular Cover and Submodular Knapsack Constraints, In Advances of NIPS (2013). 27. J. Bilmes, Submodularity in Machine Learning Applications, Tutorial at AAAI-2015. References • Schrijver, Alexander (2003), Combinatorial Optimization, Springer, ISBN 3-540-44389-4 • Lee, Jon (2004), A First Course in Combinatorial Optimization, Cambridge University Press, ISBN 0-521-01012-8 • Fujishige, Satoru (2005), Submodular Functions and Optimization, Elsevier, ISBN 0-444-52086-4 • Narayanan, H. (1997), Submodular Functions and Electrical Networks, ISBN 0-444-82523-1 • Oxley, James G. (1992), Matroid theory, Oxford Science Publications, Oxford: Oxford University Press, ISBN 0-19-853563-5, Zbl 0784.05002 External links • http://www.cs.berkeley.edu/~stefje/references.html has a longer bibliography • http://submodularity.org/ includes further material on the subject
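As a concrete companion to the maximization results above, here is a minimal sketch of the classical greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, which achieves the $1-1/e$ guarantee of Nemhauser, Wolsey and Fisher.[19] The coverage instance and the budget are toy data, not taken from any of the cited works.

E = {1: {0, 1, 2}, 2: {2, 3}, 3: {3, 4, 5}, 4: {0, 5}, 5: {1, 4}}

def f(S):                                        # monotone submodular: coverage
    return len(set().union(*(E[i] for i in S)))

def greedy(k):
    """Repeatedly add the element with the largest marginal gain."""
    S = set()
    for _ in range(k):
        gains = {i: f(S | {i}) - f(S) for i in E.keys() - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:                     # no element helps any more
            break
        S.add(best)
    return S

S = greedy(2)
print(S, f(S))                                   # {1, 3}, covering all 6 elements

Each iteration costs one function evaluation per remaining element, so the whole run uses O(nk) evaluations of f.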
Submodular flow In the theory of combinatorial optimization, submodular flow is a general class of optimization problems that includes as special cases the minimum-cost flow problem, matroid intersection, and the problem of computing a minimum-weight dijoin in a weighted directed graph. It was originally formulated by Jack Edmonds and Rick Giles,[1] and can be solved in polynomial time.[2][3][4] In the classical minimum-cost flow problem, the input is a flow network, with given capacities that specify lower and upper limits on the amount of flow per edge, as well as costs per unit flow along each edge. The goal is to find a system of flow amounts that obey the capacities on each edge, obey Kirchhoff's law that the total amount of flow into each vertex equals the total amount of flow out, and have minimum total cost. In submodular flow, one is additionally given a submodular set function on sets of vertices of the graph. Instead of obeying Kirchhoff's law exactly, the requirement is that, for every vertex set, the excess flow (the total flow into the set minus the total flow out of it) is at most the value given by the submodular function.[4] (A small feasibility check of this condition is sketched after the references.) References 1. Edmonds, Jack; Giles, Rick (1977), "A min-max relation for submodular functions on graphs", Studies in integer programming (Proc. Workshop, Bonn, 1975), Annals of Discrete Mathematics, vol. 1, North-Holland, Amsterdam, pp. 185–204, MR 0460169 2. Grötschel, M.; Lovász, L.; Schrijver, A. (1981), "The ellipsoid method and its consequences in combinatorial optimization", Combinatorica, 1 (2): 169–197, doi:10.1007/BF02579273, MR 0625550 3. Gabow, Harold N. (1993), "A framework for cost-scaling algorithms for submodular flow problems", Proceedings of the 34th Annual Symposium on Foundations of Computer Science (FOCS), Palo Alto, California, USA, 3-5 November 1993, IEEE Computer Society, pp. 449–458, doi:10.1109/SFCS.1993.366842 4. Fleischer, Lisa; Iwata, Satoru (2000), "Improved algorithms for submodular function minimization and submodular flow", Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, Association for Computing Machinery, pp. 107–116, doi:10.1145/335305.335318, MR 2114523
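The defining condition is straightforward to check for a given flow on a small digraph. Below is a small sketch (the graph, the flow values and the submodular bound are all toy data, not from the cited works): it enumerates every vertex set S and tests that the excess of S is at most f(S).

from itertools import chain, combinations

flow = {(0, 1): 2.0, (1, 2): 1.5, (0, 2): 0.5, (2, 0): 1.0}   # flow per arc (u, v)
vertices = {0, 1, 2}

def excess(S):                       # flow into S minus flow out of S
    into = sum(x for (u, v), x in flow.items() if u not in S and v in S)
    out = sum(x for (u, v), x in flow.items() if u in S and v not in S)
    return into - out

def f(S):                            # a toy submodular bound (budget-additive)
    return 2.0 * min(len(S), 2)

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

print(all(excess(set(S)) <= f(set(S)) for S in subsets(vertices)))

With f ≡ 0 the condition forces excess(S) ≤ 0 for every S and for its complement, and since excess of a set and of its complement sum to zero this recovers Kirchhoff's law; classical flows are thus the special case f = 0.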
Subnormal subgroup In mathematics, in the field of group theory, a subgroup H of a given group G is a subnormal subgroup of G if there is a finite chain of subgroups of the group, each one normal in the next, beginning at H and ending at G. In notation, $H$ is $k$-subnormal in $G$ if there are subgroups $H=H_{0},H_{1},H_{2},\ldots ,H_{k}=G$ of $G$ such that $H_{i}$ is normal in $H_{i+1}$ for each $i$. A subnormal subgroup is a subgroup that is $k$-subnormal for some positive integer $k$. Some facts about subnormal subgroups: • A 1-subnormal subgroup is the same thing as a normal subgroup. • A finitely generated group is nilpotent if and only if each of its subgroups is subnormal. • Every quasinormal subgroup, and, more generally, every conjugate-permutable subgroup, of a finite group is subnormal. • Every pronormal subgroup that is also subnormal is normal. In particular, a Sylow subgroup is subnormal if and only if it is normal. • Every 2-subnormal subgroup is a conjugate-permutable subgroup. The property of subnormality is transitive, that is, a subnormal subgroup of a subnormal subgroup is subnormal. The relation of subnormality can be defined as the transitive closure of the relation of normality. If every subnormal subgroup of G is normal in G, then G is called a T-group. See also • Characteristic subgroup • Normal core • Normal closure • Ascendant subgroup • Descendant subgroup • Serial subgroup
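The chain in the definition can be exhibited concretely. The following self-contained sketch (permutations are represented as tuples; all names are illustrative) shows that in S4 the order-2 subgroup H generated by (0 1)(2 3) satisfies H ⊲ V4 ⊲ S4, so H is 2-subnormal in S4, even though H itself is not normal in S4.

from itertools import permutations

def compose(p, q):                    # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_normal(H, G):                  # g h g^{-1} ∈ H for all g ∈ G, h ∈ H
    return all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)

S4 = set(permutations(range(4)))
e = (0, 1, 2, 3)
a = (1, 0, 3, 2)                      # the double transposition (0 1)(2 3)
b = (2, 3, 0, 1)                      # (0 2)(1 3)
V4 = {e, a, b, compose(a, b)}         # the Klein four-group
H = {e, a}

print(is_normal(H, V4))               # True:  H is normal in V4
print(is_normal(V4, S4))              # True:  V4 is normal in S4
print(is_normal(H, S4))               # False: H is subnormal but not normal in S4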
Subobject In category theory, a branch of mathematics, a subobject is, roughly speaking, an object that sits inside another object in the same category. The notion is a generalization of concepts such as subsets from set theory, subgroups from group theory,[1] and subspaces from topology. Since the detailed structure of objects is immaterial in category theory, the definition of subobject relies on a morphism that describes how one object sits inside another, rather than relying on the use of elements. The dual concept to a subobject is a quotient object. This generalizes concepts such as quotient sets, quotient groups, quotient spaces, quotient graphs, etc. Definitions An appropriate categorical definition of "subobject" may vary with context, depending on the goal. One common definition is as follows. In detail, let $A$ be an object of some category. Given two monomorphisms $u:S\to A\ {\text{and}}\ v:T\to A$ with codomain $A$, we define an equivalence relation by $u\equiv v$ if there exists an isomorphism $\phi :S\to T$ with $u=v\circ \phi $. Equivalently, we write $u\leq v$ if $u$ factors through $v$—that is, if there exists $\phi :S\to T$ such that $u=v\circ \phi $. The binary relation $\equiv $ defined by $u\equiv v\iff u\leq v\ {\text{and}}\ v\leq u$ is an equivalence relation on the monomorphisms with codomain $A$, and the corresponding equivalence classes of these monomorphisms are the subobjects of $A$. The relation ≤ induces a partial order on the collection of subobjects of $A$. The collection of subobjects of an object may in fact be a proper class; this means that the discussion given is somewhat loose. If the subobject-collection of every object is a set, the category is called well-powered or, rarely, locally small (this clashes with a different usage of the term locally small, namely that there is a set of morphisms between any two objects). To get the dual concept of quotient object, replace "monomorphism" by "epimorphism" above and reverse arrows. A quotient object of A is then an equivalence class of epimorphisms with domain A. However, in some contexts these definitions are inadequate as they do not concord with well-established notions of subobject or quotient object. In the category of topological spaces, monomorphisms are precisely the injective continuous functions; but not all injective continuous functions are subspace embeddings. In the category of rings, the inclusion $\mathbb {Z} \hookrightarrow \mathbb {Q} $ is an epimorphism but is not the quotient of $\mathbb {Z} $ by a two-sided ideal. To get maps which truly behave like subobject embeddings or quotients, rather than as arbitrary injective functions or maps with dense image, one must restrict to monomorphisms and epimorphisms satisfying additional hypotheses. Therefore, one might define a "subobject" to be an equivalence class of so-called "regular monomorphisms" (monomorphisms which can be expressed as an equalizer of two morphisms) and a "quotient object" to be any equivalence class of "regular epimorphisms" (morphisms which can be expressed as a coequalizer of two morphisms). Interpretation This definition corresponds to the ordinary understanding of a subobject outside category theory. When the category's objects are sets (possibly with additional structure, such as a group structure) and the morphisms are set functions (preserving the additional structure), one thinks of a monomorphism in terms of its image.
An equivalence class of monomorphisms is determined by the image of each monomorphism in the class; that is, two monomorphisms f and g into an object T are equivalent if and only if their images are the same subset (thus, subobject) of T. In that case there is the isomorphism $g^{-1}\circ f$ of their domains under which corresponding elements of the domains map by f and g, respectively, to the same element of T; this explains the definition of equivalence. Examples In Set, the category of sets, a subobject of A corresponds to a subset B of A, or rather the collection of all maps from sets equipotent to B with image exactly B. The subobject partial order of a set in Set is just its subset lattice. In Grp, the category of groups, the subobjects of A correspond to the subgroups of A. Given a partially ordered class P = (P, ≤), we can form a category with the elements of P as objects, and a single arrow from p to q iff p ≤ q. If P has a greatest element, the subobject partial order of this greatest element will be P itself. This is in part because all arrows in such a category will be monomorphisms. A subobject of a terminal object is called a subterminal object. See also • Subobject classifier • Subquotient Notes 1. Mac Lane, p. 126 References • Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics, vol. 5 (2nd ed.), New York, NY: Springer-Verlag, ISBN 0-387-98403-8, Zbl 0906.18001 • Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
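In the category of sets this equivalence is easy to make computational. A tiny sketch (all data are illustrative): two injections with the same image are equivalent as subobjects, and the witnessing isomorphism is $\phi =g^{-1}\circ f$.

A = set(range(10))
f = {'a': 2, 'b': 5, 'c': 7}          # an injection f : S -> A, S = {'a','b','c'}
g = {'x': 5, 'y': 7, 'z': 2}          # an injection g : T -> A, T = {'x','y','z'}

image = lambda m: set(m.values())
assert image(f) == image(g) <= A      # the same subset (subobject) of A

g_inv = {v: k for k, v in g.items()}  # well-defined because g is injective
phi = {s: g_inv[f[s]] for s in f}     # phi = g^{-1} ∘ f : S -> T

assert all(g[phi[s]] == f[s] for s in f)   # f = g ∘ phi, so f ≡ g as monomorphisms into A
print(phi)                                 # {'a': 'z', 'b': 'x', 'c': 'y'}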
Littlewood subordination theorem In mathematics, the Littlewood subordination theorem, proved by J. E. Littlewood in 1925, is a theorem in operator theory and complex analysis. It states that any holomorphic univalent self-mapping of the unit disk in the complex numbers that fixes 0 induces a contractive composition operator on various function spaces of holomorphic functions on the disk. These spaces include the Hardy spaces, the Bergman spaces and the Dirichlet space. Subordination theorem Let h be a holomorphic univalent mapping of the unit disk D into itself such that h(0) = 0. Then the composition operator Ch defined on holomorphic functions f on D by $C_{h}(f)=f\circ h$ defines a linear operator with operator norm at most 1 on the Hardy spaces $H^{p}(D)$, the Bergman spaces $A^{p}(D)$ (1 ≤ p < ∞) and the Dirichlet space ${\mathcal {D}}(D)$. The norms on these spaces are defined by: $\|f\|_{H^{p}}^{p}=\sup _{r}{1 \over 2\pi }\int _{0}^{2\pi }|f(re^{i\theta })|^{p}\,d\theta $ $\|f\|_{A^{p}}^{p}={1 \over \pi }\iint _{D}|f(z)|^{p}\,dx\,dy$ $\|f\|_{\mathcal {D}}^{2}={1 \over \pi }\iint _{D}|f^{\prime }(z)|^{2}\,dx\,dy={1 \over 4\pi }\iint _{D}|\partial _{x}f|^{2}+|\partial _{y}f|^{2}\,dx\,dy$ Littlewood's inequalities Let f be a holomorphic function on the unit disk D and let h be a holomorphic univalent mapping of D into itself with h(0) = 0. Then if 0 < r < 1 and 1 ≤ p < ∞, $\int _{0}^{2\pi }|f(h(re^{i\theta }))|^{p}\,d\theta \leq \int _{0}^{2\pi }|f(re^{i\theta })|^{p}\,d\theta .$ This inequality also holds for 0 < p < 1, although in this case there is no operator interpretation. Proofs Case p = 2 To prove the result for H2 it suffices to show that for f a polynomial[1] $\displaystyle {\|C_{h}f\|^{2}\leq \|f\|^{2}.}$ Let U be the unilateral shift defined by $\displaystyle {Uf(z)=zf(z)}.$ This has adjoint U* given by $U^{*}f(z)={f(z)-f(0) \over z}.$ Since f(0) = a0, this gives $f=a_{0}+zU^{*}f$ and hence $C_{h}f=a_{0}+hC_{h}U^{*}f.$ Thus $\|C_{h}f\|^{2}=|a_{0}|^{2}+\|hC_{h}U^{*}f\|^{2}\leq |a_{0}|^{2}+\|C_{h}U^{*}f\|^{2}.$ Since U*f has degree less than that of f, it follows by induction that $\|C_{h}U^{*}f\|^{2}\leq \|U^{*}f\|^{2}=\|f\|^{2}-|a_{0}|^{2},$ and hence $\|C_{h}f\|^{2}\leq \|f\|^{2}.$ The same method of proof works for A2 and ${\mathcal {D}}.$ General Hardy spaces If f is in Hardy space Hp, then it has a factorization[2] $f(z)=f_{i}(z)f_{o}(z)$ with fi an inner function and fo an outer function. Then $\|C_{h}f\|_{H^{p}}\leq \|(C_{h}f_{i})(C_{h}f_{o})\|_{H^{p}}\leq \|C_{h}f_{o}\|_{H^{p}}\leq \|C_{h}f_{o}^{p/2}\|_{H^{2}}^{2/p}\leq \|f\|_{H^{p}}.$ Inequalities Taking 0 < r < 1, Littlewood's inequalities follow by applying the Hardy space inequalities to the function $f_{r}(z)=f(rz).$ The inequalities can also be deduced, following Riesz (1925), using subharmonic functions.[3][4] The inequalities in turn immediately imply the subordination theorem for general Bergman spaces. Notes 1. Nikolski 2002, pp. 56–57 2. Nikolski 2002, p. 57 3. Duren 1970 4. Shapiro 1993, p. 19 References • Duren, P. L. (1970), Theory of H p spaces, Pure and Applied Mathematics, vol. 38, Academic Press • Littlewood, J. E. (1925), "On inequalities in the theory of functions", Proc. London Math. Soc., 23: 481–519, doi:10.1112/plms/s2-23.1.481 • Nikolski, N. K. (2002), Operators, functions, and systems: an easy reading. Vol. 1. Hardy, Hankel, and Toeplitz, Mathematical Surveys and Monographs, vol. 92, American Mathematical Society, ISBN 0-8218-1083-9 • Riesz, F. (1925), "Sur une inégalite de M.
Littlewood dans la théorie des fonctions", Proc. London Math. Soc., 23: 36–39, doi:10.1112/plms/s2-23.1.1-s • Shapiro, J. H. (1993), Composition operators and classical function theory, Universitext: Tracts in Mathematics, Springer-Verlag, ISBN 0-387-94067-7
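The H² case of the theorem is easy to check numerically. A minimal sketch (not from the sources above), assuming the univalent self-map h(z) = z/(2 − z) of the disk, which fixes 0, and computing H² norms from Taylor coefficients via ‖f‖² = Σ|aₖ|²:

```python
# Numerical sanity check of ||f o h||_{H^2} <= ||f||_{H^2}; h and f are
# illustrative choices, not from the article.
import numpy as np

N = 200                                                  # truncation order
h = np.array([0.0] + [0.5 ** k for k in range(1, N)])    # h(z) = z/(2-z) = sum z^k / 2^k

def compose(f_coeffs, g, n=N):
    """Taylor coefficients of f(g(z)) up to order n, via a Horner scheme."""
    out = np.zeros(n)
    for a in reversed(f_coeffs):
        out = np.convolve(out, g)[:n]   # out <- out * g, truncated
        out[0] += a
    return out

f = np.array([1.0, -2.0, 3.0, 0.5])      # f(z) = 1 - 2z + 3z^2 + 0.5 z^3
fh = compose(f, h)
print(np.sum(fh**2), "<=", np.sum(f**2))  # Littlewood: ||f o h||^2 <= ||f||^2
```

Because h(0) = 0, the coefficients of f∘h up to order N are exact; truncating the tail only makes the left-hand side smaller, so the printed comparison is a valid check.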
Subordinator (mathematics) In probability theory, a subordinator is a stochastic process that is non-negative and whose increments are stationary and independent.[1] Subordinators are a special class of Lévy process that play an important role in the theory of local time.[2] In this context, subordinators describe the evolution of time within another stochastic process, the subordinated stochastic process. In other words, a subordinator will determine the random number of "time steps" that occur within the subordinated process for a given unit of chronological time. In order to be a subordinator, a process must be a Lévy process[3] that is increasing, almost surely;[3] some definitions instead allow any increasing additive process.[4] Definition A subordinator is a real-valued stochastic process $X=(X_{t})_{t\geq 0}$ that is non-negative and a Lévy process.[1] Subordinators are the stochastic processes $X=(X_{t})_{t\geq 0}$ that have all of the following properties: • $X_{0}=0$ almost surely • $X$ is non-negative, meaning $X_{t}\geq 0$ for all $t$ • $X$ has stationary increments, meaning that for $t\geq 0$ and $h>0$, the distribution of the random variable $Y_{t,h}:=X_{t+h}-X_{t}$ depends only on $h$ and not on $t$ • $X$ has independent increments, meaning that for all $n$ and all $t_{0}<t_{1}<\dots <t_{n}$, the random variables $(Y_{i})_{i=0,\dots ,n-1}$ defined by $Y_{i}=X_{t_{i+1}}-X_{t_{i}}$ are independent of each other • The paths of $X$ are càdlàg, meaning they are continuous from the right everywhere and the limits from the left exist everywhere Examples The variance gamma process can be described as a Brownian motion subject to a gamma subordinator.[3] If a Brownian motion, $W(t)$, with drift $\theta t$ is subjected to a random time change which follows a gamma process, $\Gamma (t;1,\nu )$, the variance gamma process will follow: $X^{VG}(t;\sigma ,\nu ,\theta )\;:=\;\theta \,\Gamma (t;1,\nu )+\sigma \,W(\Gamma (t;1,\nu )).$ The Cauchy process can be described as a Brownian motion subject to a Lévy subordinator.[3] Representation Every subordinator $X=(X_{t})_{t\geq 0}$ can be written as $X_{t}=at+\int _{0}^{t}\int _{0}^{\infty }x\;\Theta (\mathrm {d} s\;\mathrm {d} x)$ where • $a\geq 0$ is a scalar and • $\Theta $ is a Poisson process on $(0,\infty )\times (0,\infty )$ with intensity measure $\operatorname {E} \Theta =\lambda \otimes \mu $. Here $\mu $ is a measure on $(0,\infty )$ with $\int _{0}^{\infty }\min(x,1)\;\mu (\mathrm {d} x)<\infty $, and $\lambda $ is the Lebesgue measure. The measure $\mu $ is called the Lévy measure of the subordinator, and the pair $(a,\mu )$ is called the characteristics of the subordinator. Conversely, any scalar $a\geq 0$ and measure $\mu $ on $(0,\infty )$ with $\int \min(x,1)\;\mu (\mathrm {d} x)<\infty $ define a subordinator with characteristics $(a,\mu )$ by the above relation.[5][1] References 1. Kallenberg, Olav (2002). Foundations of Modern Probability (2nd ed.). New York: Springer. p. 290. 2. Kallenberg, Olav (2017). Random Measures, Theory and Applications. Switzerland: Springer. p. 651. doi:10.1007/978-3-319-41598-7. ISBN 978-3-319-41596-3. 3. Applebaum, D. "Lectures on Lévy processes and Stochastic calculus, Braunschweig; Lecture 2: Lévy processes" (PDF). University of Sheffield. pp. 37–53. 4. Li, Jing; Li, Lingfei; Zhang, Gongqiu (2017). "Pure jump models for pricing and hedging VIX derivatives". Journal of Economic Dynamics and Control. 74. doi:10.1016/j.jedc.2016.11.001. 5. Kallenberg, Olav (2002). Foundations of Modern Probability (2nd ed.).
New York: Springer. p. 287.
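To make the gamma time change in the example above concrete, here is a minimal simulation sketch (the parameter values are illustrative assumptions; θ, σ, ν are as in the variance gamma formula):

```python
# Simulate one variance gamma path: Brownian motion with drift theta,
# evaluated at a gamma subordinator Gamma(t; 1, nu).
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
theta, sigma, nu = 0.1, 0.3, 0.2          # illustrative parameters

# Gamma subordinator increments: mean dt, variance nu*dt per step
dG = rng.gamma(shape=dt / nu, scale=nu, size=n)
# Subordinated Brownian motion: theta*G + sigma*W(G); W(G) increments ~ N(0, dG)
dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n)
X = np.cumsum(dX)                          # one sample path of the VG process
```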
Subpaving In mathematics, a subpaving is a set of nonoverlapping boxes of Rⁿ. A subset X of Rⁿ can be approximated by two subpavings X⁻ and X⁺ such that X⁻ ⊂ X ⊂ X⁺. In R¹ the boxes are line segments, in R² rectangles and in Rⁿ hyperrectangles. An R² subpaving can also be a "non-regular tiling by rectangles", when it has no holes. Boxes present the advantage of being very easily manipulated by computers, as they form the heart of interval analysis. Many interval algorithms naturally provide solutions that are regular subpavings.[1] In computation, a well-known application of subpavings in R² is the Quadtree data structure. In image tracing and other applications it is important to see X⁻ as a topological interior, as illustrated. Example The three figures below show an approximation of the set X = {(x₁, x₂) ∈ R² | x₁² + x₂² + sin(x₁ + x₂) ∈ [4,9]} with different accuracies. The set X⁻ corresponds to red boxes and the set X⁺ contains all red and yellow boxes. Combined with interval-based methods, subpavings are used to approximate the solution set of non-linear problems such as set inversion problems.[2] Subpavings can also be used to prove that a set defined by nonlinear inequalities is path connected,[3] to provide topological properties of such sets,[4] to solve piano-mover's problems[5] or to implement set computation.[6] References 1. Kieffer, M.; Braems, I.; Walter, É.; Jaulin, L. (2001). "Guaranteed Set Computation with Subpavings" (PDF). Scientific Computing, Validated Numerics, Interval Methods: 167–172. doi:10.1007/978-1-4757-6484-0_14. ISBN 978-1-4419-3376-8. 2. Jaulin, Luc; Walter, Eric (1993). "Set inversion via interval analysis for nonlinear bounded-error estimation" (PDF). Automatica. 29 (4): 1053–1064. doi:10.1016/0005-1098(93)90106-4. 3. Delanoue, N.; Jaulin, L.; Cottenceau, B. (2005). "Using interval arithmetic to prove that a set is path-connected" (PDF). Theoretical Computer Science. 351 (1). 4. Delanoue, N.; Jaulin, L.; Cottenceau, B. (2006). "Counting the Number of Connected Components of a Set and Its Application to Robotics" (PDF). Applied Parallel Computing, Lecture Notes in Computer Science. Lecture Notes in Computer Science. 3732 (1): 93–101. doi:10.1007/11558958_11. ISBN 978-3-540-29067-4. 5. Jaulin, L. (2001). "Path planning using intervals and graphs" (PDF). Reliable Computing. 7 (1). 6. Kieffer, M.; Jaulin, L.; Braems, I.; Walter, E. (2001). "Guaranteed set computation with subpavings" (PDF). In W. Kraemer and J. W. Gudenberg (Eds), Scientific Computing, Validated Numerics, Interval Methods, Kluwer Academic Publishers: 167–178. doi:10.1007/978-1-4757-6484-0_14. ISBN 978-1-4419-3376-8.
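A minimal sketch of how such subpavings arise from set inversion (a simplified SIVIA-style bisection for the example set above, using naive interval arithmetic; an illustration, not the cited implementations):

```python
# Compute inner (X-) and boundary subpavings of
# X = {(x1,x2) : x1^2 + x2^2 + sin(x1+x2) in [4,9]} by interval bisection.
import math

def isqr(iv):                        # interval enclosure of x^2
    lo, hi = iv
    if lo >= 0: return (lo*lo, hi*hi)
    if hi <= 0: return (hi*hi, lo*lo)
    return (0.0, max(lo*lo, hi*hi))

def isin(iv):                        # interval enclosure of sin(x)
    lo, hi = iv
    if hi - lo >= 2*math.pi: return (-1.0, 1.0)
    vals = [math.sin(lo), math.sin(hi)]
    k = math.ceil((lo - math.pi/2) / math.pi)   # interior extrema at pi/2 + k*pi
    x = math.pi/2 + k*math.pi
    while x <= hi:
        vals.append(math.sin(x)); x += math.pi
    return (min(vals), max(vals))

def f(box):                          # enclosure of x1^2 + x2^2 + sin(x1+x2) on a box
    x1, x2 = box
    q1, q2 = isqr(x1), isqr(x2)
    s = isin((x1[0] + x2[0], x1[1] + x2[1]))
    return (q1[0] + q2[0] + s[0], q1[1] + q2[1] + s[1])

def sivia(box, eps=0.1):
    inner, boundary, stack = [], [], [box]
    while stack:
        b = stack.pop()
        lo, hi = f(b)
        if 4 <= lo and hi <= 9: inner.append(b)              # surely inside X
        elif hi < 4 or lo > 9: pass                          # surely outside X
        elif max(b[0][1]-b[0][0], b[1][1]-b[1][0]) < eps:
            boundary.append(b)                               # undecided small box
        else:                                                # bisect the widest side
            a, c = b
            if a[1]-a[0] >= c[1]-c[0]:
                m = 0.5*(a[0]+a[1]); stack += [((a[0], m), c), ((m, a[1]), c)]
            else:
                m = 0.5*(c[0]+c[1]); stack += [(a, (c[0], m)), (a, (m, c[1]))]
    return inner, boundary           # X- = inner; X+ = inner + boundary

inner, boundary = sivia(((-4.0, 4.0), (-4.0, 4.0)))
print(len(inner), len(boundary))
```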
Subquotient In the mathematical fields of category theory and abstract algebra, a subquotient is a quotient object of a subobject. Subquotients are particularly important in abelian categories, and in group theory, where they are also known as sections, though this conflicts with a different meaning in category theory. In the literature about sporadic groups wordings like "$H$ is involved in $G$"[1] can be found with the apparent meaning of "$H$ is a subquotient of $G$." A quotient of a subrepresentation of a representation (of, say, a group) might be called a subquotient representation; e.g., Harish-Chandra's subquotient theorem.[2] Examples Of the 26 sporadic groups, the 20 subquotients of the monster group are referred to as the "Happy Family", whereas the remaining 6 are called "pariah groups." Order relation The relation "subquotient of" is an order relation. Proof of transitivity for groups Notation For a group $G$, a subgroup $G'$ of $G$ $(\Leftrightarrow :G\geq G')$ and a normal subgroup $G''$ of $G'$ $(\Leftrightarrow :G'\vartriangleright G'')$, the quotient group $H:=G'/G''$ is a subquotient of $G.$ Let $H'/H''$ be a subquotient of $H$, let $H:=G'/G''$ be a subquotient of $G$, and let $\varphi \colon G'\to H$ be the canonical homomorphism. Then all vertical maps $\varphi \colon X\to Y,\;g\mapsto g\,G''$ in the diagram $\begin{array}{ccccccccc}G&\geq &G'&\geq &\varphi ^{-1}(H')&\geq &\varphi ^{-1}(H'')&\vartriangleright &G''\\&&{\Big \downarrow }&&{\Big \downarrow }&&{\Big \downarrow }&&{\Big \downarrow }\\&&H&\geq &H'&\vartriangleright &H''&\vartriangleright &\{1\}\end{array}$ with suitable $g\in X$ are surjective for the respective pairs $(X,Y)\in {\bigl \{}(G',H),\;(\varphi ^{-1}(H'),H'),\;(\varphi ^{-1}(H''),H''),\;(G'',\{1\}){\bigr \}}.$ The preimages $\varphi ^{-1}\left(H'\right)$ and $\varphi ^{-1}\left(H''\right)$ are both subgroups of $G'$ containing $G'',$ and it is $\varphi \left(\varphi ^{-1}\left(H'\right)\right)=H'$ and $\varphi \left(\varphi ^{-1}\left(H''\right)\right)=H''$, because every $h\in H$ has a preimage $g\in G'$ with $\varphi (g)=h.$ Moreover, the subgroup $\varphi ^{-1}\left(H''\right)$ is normal in $\varphi ^{-1}\left(H'\right).$ As a consequence, the subquotient $H'/H''$ of $H$ is a subquotient of $G$ in the form $H'/H''\cong \varphi ^{-1}\left(H'\right)/\varphi ^{-1}\left(H''\right).$ Relation to cardinal order In constructive set theory, where the law of excluded middle does not necessarily hold, one can consider the relation subquotient of as replacing the usual order relation(s) on cardinals. When one has the law of the excluded middle, then a subquotient $Y$ of $X$ is either the empty set or there is an onto function $X\to Y$. This order relation is traditionally denoted $\leq ^{\ast }.$ If additionally the axiom of choice holds, then $Y$ has a one-to-one function to $X$ and this order relation is the usual $\leq $ on corresponding cardinals. See also • Homological algebra • Subcountable References 1. Griess, Robert L. (1982), "The Friendly Giant", Inventiones Mathematicae, 69: 1−102, Bibcode:1982InMat..69....1G, doi:10.1007/BF01389186, hdl:2027.42/46608, S2CID 123597150 2. Dixmier, Jacques (1996) [1974], Enveloping algebras, Graduate Studies in Mathematics, vol. 11, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0560-2, MR 0498740 p. 310
Subsequence In mathematics, a subsequence of a given sequence is a sequence that can be derived from the given sequence by deleting some or no elements without changing the order of the remaining elements. For example, the sequence $\langle A,B,D\rangle $ is a subsequence of $\langle A,B,C,D,E,F\rangle $ obtained after removal of elements $C,$ $E,$ and $F.$ The relation of one sequence being the subsequence of another is a preorder. Subsequences can contain consecutive elements which were not consecutive in the original sequence. A subsequence which consists of a consecutive run of elements from the original sequence, such as $\langle B,C,D\rangle ,$ from $\langle A,B,C,D,E,F\rangle ,$ is a substring. The substring is a refinement of the subsequence. The list of all subsequences for the word "apple" would be "a", "ap", "al", "ae", "app", "apl", "ape", "ale", "appl", "appe", "aple", "apple", "p", "pp", "pl", "pe", "ppl", "ppe", "ple", "pple", "l", "le", "e", "" (empty string). Common subsequence Given two sequences $X$ and $Y,$ a sequence $Z$ is said to be a common subsequence of $X$ and $Y,$ if $Z$ is a subsequence of both $X$ and $Y.$ For example, if $X=\langle A,C,B,D,E,G,C,E,D,B,G\rangle \qquad {\text{ and}}$ $Y=\langle B,E,G,J,C,F,E,K,B\rangle \qquad {\text{ and}}$ $Z=\langle B,E,E\rangle ,$ then $Z$ is said to be a common subsequence of $X$ and $Y.$ This would not be the longest common subsequence, since $Z$ only has length 3, and the common subsequence $\langle B,E,E,B\rangle $ has length 4. The longest common subsequence of $X$ and $Y$ is $\langle B,E,G,C,E,B\rangle .$ Applications Subsequences have applications to computer science,[1] especially in the discipline of bioinformatics, where computers are used to compare, analyze, and store DNA, RNA, and protein sequences. Take two sequences of DNA containing 37 elements, say:

SEQ1 = ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA
SEQ2 = CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA

The longest common subsequence of sequences 1 and 2 is:

LCS(SEQ1,SEQ2) = CGTTCGGCTATGCTTCTACTTATTCTA

This can be illustrated by highlighting the 27 elements of the longest common subsequence within the initial sequences:

SEQ1 = ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA
SEQ2 = CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA

Another way to show this is to align the two sequences, that is, to position elements of the longest common subsequence in the same column (indicated by the vertical bar) and to introduce a special character (here, a dash) to pad the gaps that arise:

SEQ1 = ACGGTGTCGTGCTAT-G--C-TGATGCTGA--CT-T-ATATG-CTA-
        | || ||| ||||| |  | |  | || |  || | || |  |||
SEQ2 = -C-GT-TCG-GCTATCGTACGT--T-CT-ATTCTATGAT-T-TCTAA

Subsequences are used to determine how similar the two strands of DNA are, using the DNA bases: adenine, guanine, cytosine and thymine. Theorems • Every infinite sequence of real numbers has an infinite monotone subsequence (This is a lemma used in the proof of the Bolzano–Weierstrass theorem). • Every infinite bounded sequence in $\mathbb {R} ^{n}$ has a convergent subsequence (This is the Bolzano–Weierstrass theorem). • For all integers $r$ and $s,$ every finite sequence of length at least $(r-1)(s-1)+1$ contains a monotonically increasing subsequence of length $r$ or a monotonically decreasing subsequence of length $s$ (This is the Erdős–Szekeres theorem). • A metric space $(X,d)$ is compact if every sequence in $X$ has a convergent subsequence whose limit is in $X$.
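The longest common subsequences in the examples above can be found by textbook dynamic programming; a minimal Python sketch (illustrative, not part of the article):

```python
# Longest common subsequence via dynamic programming, O(len(x)*len(y)) time.
def lcs(x, y):
    n, m = len(x), len(y)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # L[i][j] = LCS length of x[:i], y[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            L[i][j] = L[i-1][j-1] + 1 if x[i-1] == y[j-1] else max(L[i-1][j], L[i][j-1])
    out, i, j = [], n, m                        # backtrack to recover one LCS
    while i and j:
        if x[i-1] == y[j-1]:
            out.append(x[i-1]); i -= 1; j -= 1
        elif L[i-1][j] >= L[i][j-1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

seq1 = "ACGGTGTCGTGCTATGCTGATGCTGACTTATATGCTA"
seq2 = "CGTTCGGCTATCGTACGTTCTATTCTATGATTTCTAA"
print(len(lcs(seq1, seq2)))   # the article's example reports length 27 for this pair
```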
See also • Subsequential limit – The limit of some subsequence • Limit superior and limit inferior – Bounds of a sequence • Longest increasing subsequence problem – algorithm to find the longest increasing subsequence in an array of numbers Notes 1. In computer science, string is often used as a synonym for sequence, but it is important to note that substring and subsequence are not synonyms. Substrings are consecutive parts of a string, while subsequences need not be. This means that a substring of a string is always a subsequence of the string, but a subsequence of a string is not always a substring of the string, see: Gusfield, Dan (1999) [1997]. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. USA: Cambridge University Press. p. 4. ISBN 0-521-58519-8. This article incorporates material from subsequence on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Subsequential limit In mathematics, a subsequential limit of a sequence is the limit of some subsequence.[1] Every subsequential limit is a cluster point, but not conversely. In first-countable spaces, the two concepts coincide. In a topological space, if every subsequence of a sequence has a subsequential limit equal to the same point, then the original sequence also converges to that point. This need not hold in more generalized notions of convergence, such as the space of almost everywhere convergence. The supremum of the set of all subsequential limits of some sequence is called the limit superior, or limsup. Similarly, the infimum of such a set is called the limit inferior, or liminf. See limit superior and limit inferior.[1] If $(X,d)$ is a metric space and a Cauchy sequence has a subsequence converging to some $x,$ then the whole Cauchy sequence also converges to $x.$ See also • Convergent filter – Use of filters to describe and characterize all basic topological notions and results. • List of limits • Limit of a sequence – Value to which an infinite sequence tends • Limit superior and limit inferior – Bounds of a sequence • Net (mathematics) – A generalization of a sequence of points • Filters in topology#Subordination analogs of results involving subsequences – Use of filters to describe and characterize all basic topological notions and results. References 1. Ross, Kenneth A. (3 March 1980). Elementary Analysis: The Theory of Calculus. Springer. ISBN 9780387904597. Retrieved 5 April 2023.
Axiom schema of specification In many popular versions of axiomatic set theory, the axiom schema of specification, also known as the axiom schema of separation, subset axiom scheme or axiom schema of restricted comprehension, is an axiom schema. Essentially, it says that any definable subclass of a set is a set. Some mathematicians call it the axiom schema of comprehension, although others use that term for unrestricted comprehension, discussed below. Because restricting comprehension avoided Russell's paradox, several mathematicians including Zermelo, Fraenkel, and Gödel considered it the most important axiom of set theory.[1] Statement One instance of the schema is included for each formula φ in the language of set theory with free variables among x, w1, ..., wn, A. In particular, B does not occur free in φ. In the formal language of set theory, the axiom schema is: $\forall w_{1},\ldots ,w_{n}\,\forall A\,\exists B\,\forall x\,(x\in B\Leftrightarrow [x\in A\land \varphi (x,w_{1},\ldots ,w_{n},A)])$ or in words: Given any set A, there is a set B (a subset of A) such that, given any set x, x is a member of B if and only if x is a member of A and φ holds for x. Note that there is one axiom for every such predicate φ; thus, this is an axiom schema. To understand this axiom schema, note that the set B must be a subset of A. Thus, what the axiom schema is really saying is that, given a set A and a predicate $\varphi $, we can find a subset B of A whose members are precisely the members of A that satisfy $\varphi $. By the axiom of extensionality this set is unique. We usually denote this set using set-builder notation as $B=\{x\in A|\varphi (x)\}$. Thus the essence of the axiom is: Every subclass of a set that is defined by a predicate is itself a set. The preceding form of separation was introduced in 1930 by Thoralf Skolem as a refinement of a previous form by Zermelo.[2] The axiom schema of specification is characteristic of systems of axiomatic set theory related to the usual set theory ZFC, but does not usually appear in radically different systems of alternative set theory. For example, New Foundations and positive set theory use different restrictions of the axiom of comprehension of naive set theory. The Alternative Set Theory of Vopenka makes a specific point of allowing proper subclasses of sets, called semisets. Even in systems related to ZFC, this scheme is sometimes restricted to formulas with bounded quantifiers, as in Kripke–Platek set theory with urelements. Relation to the axiom schema of replacement The axiom schema of separation can almost be derived from the axiom schema of replacement. First, recall this axiom schema: $\forall A\,\exists B\,\forall C\,(C\in B\iff \exists D\,[D\in A\land C=F(D)])$ for any functional predicate F in one variable that doesn't use the symbols A, B, C or D. Given a suitable predicate P for the axiom of specification, define the mapping F by F(D) = D if P(D) is true and F(D) = E if P(D) is false, where E is any member of A such that P(E) is true. Then the set B guaranteed by the axiom of replacement is precisely the set B required for the axiom of specification. The only problem is if no such E exists. But in this case, the set B required for the axiom of separation is the empty set, so the axiom of separation follows from the axiom of replacement together with the axiom of empty set.
For this reason, the axiom schema of specification is often left out of modern lists of the Zermelo–Fraenkel axioms. However, it's still important for historical considerations, and for comparison with alternative axiomatizations of set theory, as can be seen for example in the following sections. Unrestricted comprehension The axiom schema of unrestricted comprehension reads: $\forall w_{1},\ldots ,w_{n}\,\exists B\,\forall x\,(x\in B\Leftrightarrow \varphi (x,w_{1},\ldots ,w_{n}))$ that is: There exists a set B whose members are precisely those objects that satisfy the predicate φ. This set B is again unique, and is usually denoted as {x : φ(x, w1, ..., wn)}. This axiom schema was tacitly used in the early days of naive set theory, before a strict axiomatization was adopted. Unfortunately, it leads directly to Russell's paradox by taking φ(x) to be ¬(x ∈ x) (i.e., the property that set x is not a member of itself). Therefore, no useful axiomatization of set theory can use unrestricted comprehension. Passing from classical logic to intuitionistic logic does not help, as the proof of Russell's paradox is intuitionistically valid. Accepting only the axiom schema of specification was the beginning of axiomatic set theory. Most of the other Zermelo–Fraenkel axioms (but not the axiom of extensionality, the axiom of regularity, or the axiom of choice) then became necessary to make up for some of what was lost by changing the axiom schema of comprehension to the axiom schema of specification – each of these axioms states that a certain set exists, and defines that set by giving a predicate for its members to satisfy, i.e. it is a special case of the axiom schema of comprehension. It is also possible to prevent the schema from being inconsistent by restricting which formulae it can be applied to, such as only stratified formulae in New Foundations (see below) or only positive formulae (formulae with only conjunction, disjunction, quantification and atomic formulae) in positive set theory. Positive formulae, however, typically are unable to express certain things that most theories can; for instance, there is no complement or relative complement in positive set theory. In NBG class theory In von Neumann–Bernays–Gödel set theory, a distinction is made between sets and classes. A class C is a set if and only if it belongs to some class E. In this theory, there is a theorem schema that reads $\exists D\forall C\,([C\in D]\iff [P(C)\land \exists E\,(C\in E)])\,,$ that is, There is a class D such that any class C is a member of D if and only if C is a set that satisfies P. provided that the quantifiers in the predicate P are restricted to sets. This theorem schema is itself a restricted form of comprehension, which avoids Russell's paradox because of the requirement that C be a set. Then specification for sets themselves can be written as a single axiom $\forall D\forall A\,(\exists E\,[A\in E]\implies \exists B\,[\exists E\,(B\in E)\land \forall C\,(C\in B\iff [C\in A\land C\in D])])\,,$ that is, Given any class D and any set A, there is a set B whose members are precisely those classes that are members of both A and D. or even more simply The intersection of a class D and a set A is itself a set B. In this axiom, the predicate P is replaced by the class D, which can be quantified over. Another simpler axiom which achieves the same effect is $\forall A\forall B\,([\exists E\,(A\in E)\land \forall C\,(C\in B\implies C\in A)]\implies \exists E\,[B\in E])\,,$ that is, A subclass of a set is a set.
In higher-order settings In a typed language where we can quantify over predicates, the axiom schema of specification becomes a simple axiom. This is much the same trick as was used in the NBG axioms of the previous section, where the predicate was replaced by a class that was then quantified over. In second-order logic and higher-order logic with higher-order semantics, the axiom of specification is a logical validity and does not need to be explicitly included in a theory. In Quine's New Foundations In the New Foundations approach to set theory pioneered by W. V. O. Quine, the axiom of comprehension for a given predicate takes the unrestricted form, but the predicates that may be used in the schema are themselves restricted. The predicate (C is not in C) is forbidden, because the same symbol C appears on both sides of the membership symbol (and so at different "relative types"); thus, Russell's paradox is avoided. However, by taking P(C) to be (C = C), which is allowed, we can form a set of all sets. For details, see stratification. References • Crossley, J. N.; Ash, C. J.; Brickhill, C. J.; Stillwell, J. C.; Williams, N. H. (1972). What is mathematical logic?. London-Oxford-New York: Oxford University Press. ISBN 0-19-888087-1. Zbl 0251.02001. • Halmos, Paul, Naive Set Theory. Princeton, New Jersey: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. ISBN 0-387-90092-6 (Springer-Verlag edition). • Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. ISBN 3-540-44085-2. • Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. ISBN 0-444-86839-9. Citations 1. Heinz-Dieter Ebbinghaus (2007). Ernst Zermelo: An Approach to His Life and Work. Springer Science & Business Media. p. 88. ISBN 978-3-540-49553-6. 2. W. V. O. Quine, Mathematical Logic (1981), p. 164. Harvard University Press, ISBN 0-674-55451-5
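The difference between restricted and unrestricted comprehension has a loose programming analogue: set-builder notation over an already-given set is expressible, while "the set of all x such that φ(x)" is not. A small illustrative sketch:

```python
# Specification: carve a subset out of an already-given set A.
A = set(range(10))
B = {x for x in A if x % 2 == 0}   # B = {x in A | phi(x)}, with phi(x): "x is even"
assert B <= A                      # B is guaranteed to be a subset of A

# There is no construct for "the set of all x with phi(x)" without an ambient
# set A, loosely mirroring how restricting comprehension avoids Russell's paradox.
```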
Subspace identification method In mathematics, specifically in control theory, subspace identification (SID) aims at identifying linear time invariant (LTI) state space models from input-output data. SID does not require that the user parametrizes the system matrices before solving a parametric optimization problem and, as a consequence, SID methods do not suffer from problems related to local minima that often lead to unsatisfactory identification results. History SID methods are rooted in the work by the German mathematician Leopold Kronecker (1823–1891). Kronecker[1] showed that a power series can be written as a rational function when the rank of the Hankel operator that has the power series as its symbol is finite. The rank determines the order of the polynomials of the rational function. In the 1960s the work of Kronecker inspired a number of researchers in the area of Systems and Control, like Ho and Kalman, Silverman, and Youla and Tissi, to store the Markov parameters of an LTI system into a finite dimensional Hankel matrix and derive from this matrix an (A,B,C) realization of the LTI system. The key observation was that when the Hankel matrix is properly dimensioned versus the order of the LTI system, the rank of the Hankel matrix is the order of the LTI system, and the SVD of the Hankel matrix provides a basis for the column space of the observability matrix and the row space of the controllability matrix of the LTI system. Knowledge of these key spaces allows one to estimate the system matrices via linear least squares.[2] An extension to the stochastic realization problem, where we have knowledge only of the auto-correlation (covariance) function of the output of an LTI system driven by white noise, was derived by researchers like Akaike.[3] A second generation of SID methods attempted to make SID methods directly operate on input-output measurements of the LTI system in the decade 1985–1995. One such generalization, presented under the name of the Eigensystem Realization Algorithm (ERA), made use of specific input-output measurements, considering impulse inputs.[4] It has been used for modal analysis of flexible structures, like bridges, space structures, etc. Though these methods were demonstrated to work in practice for resonant structures, they did not work well for other types of systems or for inputs different from an impulse. A new impulse to the development of SID methods came from making them operate directly on generic input-output data, avoiding the need to first explicitly compute the Markov parameters or to estimate samples of covariance functions prior to realizing the system matrices. Pioneers that contributed to these breakthroughs were Van Overschee and De Moor – introducing the N4SID approach,[5] Verhaegen – introducing the MOESP approach[6] – and Larimore – presenting SID in the framework of Canonical Variate Analysis (CVA).[7] References 1. L. Kronecker, "Algebraische reduktion der schaaren bilinearer formen", S. B. Akad. Berlin, pp. 663–776, 1890. 2. M. Verhaegen, "Subspace Techniques in System Identification", in Encyclopedia of Systems and Control, https://link.springer.com/referenceworkentry/10.1007/978-1-4471-5102-9_107-1 3. H. Akaike, "A new look at the statistical model identification", IEEE Transactions on Automatic Control, vol. 19, pp. 716–723, 1974. 4. J.-N. Juang and R. S. Pappa, "An Eigensystem Realization Algorithm for modal parameter identification and model reduction", Journal of Guidance, Control, and Dynamics, vol. 8, 1985. 5. P. Van Overschee and B.
De Moor, "N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems", Automatica, vol. 30 pp. 75–93, 1994. 6. M. Verhaegen, "Identification of the deterministic part of MIMO state space models given in innovations form from input-output data", Automatica, vol. 30, pp. 61–74, 1994. 7. W. Larimore, "Canonical variate analysis in identification, filtering, and adaptive control", in Proceedings of the 29th IEEE Conference on Decision and Control, 1990.
Stochastic matrix In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability.[1][2]: 9–11  It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix.[2]: 9–11  The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics.[2]: 1–8  There are several different definitions and types of stochastic matrices:[2]: 9–11  A right stochastic matrix is a real square matrix, with each row summing to 1. A left stochastic matrix is a real square matrix, with each column summing to 1. A doubly stochastic matrix is a square matrix of nonnegative real numbers with each row and column summing to 1. In the same vein, one may define a stochastic vector (also called probability vector) as a vector whose elements are nonnegative real numbers which sum to 1. Thus, each row of a right stochastic matrix (or column of a left stochastic matrix) is a stochastic vector.[2]: 9–11  A common convention in English language mathematics literature is to use row vectors of probabilities and right stochastic matrices rather than column vectors of probabilities and left stochastic matrices; this article follows that convention.[2]: 1–8  In addition, a substochastic matrix is a real square matrix whose row sums are all $\leq 1.$ History The stochastic matrix was developed alongside the Markov chain by Andrey Markov, a Russian mathematician and professor at St. Petersburg University who first published on the topic in 1906.[2]: 1–8 [3] His initial intended uses were for linguistic analysis and other mathematical subjects like card shuffling, but both Markov chains and matrices rapidly found use in other fields.[2]: 1–8 [3][4] Stochastic matrices were further developed by scholars like Andrey Kolmogorov, who expanded their possibilities by allowing for continuous-time Markov processes.[5] By the 1950s, articles using stochastic matrices had appeared in the fields of econometrics[6] and circuit theory.[7] In the 1960s, stochastic matrices appeared in an even wider variety of scientific works, from behavioral science[8] to geology[9][10] to residential planning.[11] In addition, much mathematical work was also done through these decades to improve the range of uses and functionality of the stochastic matrix and Markovian processes more generally. From the 1970s to present, stochastic matrices have found use in almost every field that requires formal analysis, from structural science[12] to medical diagnosis[13] to personnel management.[14] In addition, stochastic matrices have found wide use in land change modeling, usually under the term Markov matrix.[15] Definition and properties A stochastic matrix describes a Markov chain Xt over a finite state space S with cardinality α.
If the probability of moving from i to j in one time step is Pr(j|i) = Pi,j, the stochastic matrix P is given by using Pi,j as the i-th row and j-th column element, e.g., $P=\left[{\begin{matrix}P_{1,1}&P_{1,2}&\dots &P_{1,j}&\dots &P_{1,\alpha }\\P_{2,1}&P_{2,2}&\dots &P_{2,j}&\dots &P_{2,\alpha }\\\vdots &\vdots &\ddots &\vdots &\ddots &\vdots \\P_{i,1}&P_{i,2}&\dots &P_{i,j}&\dots &P_{i,\alpha }\\\vdots &\vdots &\ddots &\vdots &\ddots &\vdots \\P_{\alpha ,1}&P_{\alpha ,2}&\dots &P_{\alpha ,j}&\dots &P_{\alpha ,\alpha }\\\end{matrix}}\right].$ Since the total of transition probability from a state i to all other states must be 1, $\sum _{j=1}^{\alpha }P_{i,j}=1;\,$ thus this matrix is a right stochastic matrix.[2]: 1–8  The above elementwise sum across each row i of P may be more concisely written as P1 = 1, where 1 is the α-dimensional column vector of all ones. Using this, it can be seen that the product of two right stochastic matrices P′ and P′′ is also right stochastic: P′ P′′ 1 = P′ (P′′ 1) = P′ 1 = 1. In general, the k-th power Pk of a right stochastic matrix P is also right stochastic. The probability of transitioning from i to j in two steps is then given by the (i, j)-th element of the square of P: $\left(P^{2}\right)_{i,j}.$ In general, the probability transition of going from any state to another state in a finite Markov chain given by the matrix P in k steps is given by Pk. An initial probability distribution of states, specifying where the system might be initially and with what probabilities, is given as a row vector. A stationary probability vector π is defined as a distribution, written as a row vector, that does not change under application of the transition matrix; that is, it is defined as a probability distribution on the set {1, …, n} which is also a row eigenvector of the probability matrix, associated with eigenvalue 1: ${\boldsymbol {\pi }}P={\boldsymbol {\pi }}.$ It can be shown that the spectral radius of any stochastic matrix is one. By the Gershgorin circle theorem, all of the eigenvalues of a stochastic matrix have absolute values less than or equal to one. Additionally, every right stochastic matrix has an "obvious" column eigenvector associated to the eigenvalue 1: the vector 1, whose coordinates are all equal to 1 (just observe that multiplying a row of A times 1 equals the sum of the entries of the row and, hence, it equals 1). As left and right eigenvalues of a square matrix are the same, every stochastic matrix has, at least, a row eigenvector associated to the eigenvalue 1 and the largest absolute value of all its eigenvalues is also 1. Finally, the Brouwer Fixed Point Theorem (applied to the compact convex set of all probability distributions of the finite set {1, …, n}) implies that there is some left eigenvector which is also a stationary probability vector. On the other hand, the Perron–Frobenius theorem also ensures that every irreducible stochastic matrix has such a stationary vector, and that the largest absolute value of an eigenvalue is always 1. However, this theorem cannot be applied directly to such matrices because they need not be irreducible. In general, there may be several such vectors. However, for a matrix with strictly positive entries (or, more generally, for an irreducible aperiodic stochastic matrix), this vector is unique and can be computed by observing that for any i we have the following limit, $\lim _{k\rightarrow \infty }\left(P^{k}\right)_{i,j}={\boldsymbol {\pi }}_{j},$ where πj is the j-th element of the row vector π. 
Among other things, this says that the long-term probability of being in a state j is independent of the initial state i. That both of these computations give the same stationary vector is a form of an ergodic theorem, which is generally true in a wide variety of dissipative dynamical systems: the system evolves, over time, to a stationary state. Intuitively, a stochastic matrix represents a Markov chain; the application of the stochastic matrix to a probability distribution redistributes the probability mass of the original distribution while preserving its total mass. If this process is applied repeatedly, the distribution converges to a stationary distribution for the Markov chain.[2]: 55–59  Example: the cat and mouse Suppose there is a timer and a row of five adjacent boxes. At time zero, a cat is in the first box, and a mouse is in the fifth box. The cat and the mouse both jump to a random adjacent box when the timer advances. E.g. if the cat is in the second box and the mouse is in the fourth, the probability that the cat will be in the first box and the mouse in the fifth after the timer advances is one fourth. If the cat is in the first box and the mouse is in the fifth, the probability that the cat will be in box two and the mouse will be in box four after the timer advances is one. The cat eats the mouse if both end up in the same box, at which time the game ends. Let the random variable K be the time the mouse stays in the game. The Markov chain that represents this game contains the following five states specified by the combination of positions (cat,mouse). Note that while a naive enumeration of states would list 25 states, many are impossible either because the mouse can never have a lower index than the cat (as that would mean the mouse occupied the cat's box and survived to move past it), or because the sum of the two indices will always have even parity. In addition, the 3 possible states that lead to the mouse's death are combined into one: • State 1: (1,3) • State 2: (1,5) • State 3: (2,4) • State 4: (3,5) • State 5: game over: (2,2), (3,3) & (4,4). We use a stochastic matrix, $P$ (below), to represent the transition probabilities of this system (rows and columns in this matrix are indexed by the possible states listed above, with the pre-transition state as the row and post-transition state as the column).[2]: 1–8  For instance, starting from state 1 – 1st row – it is impossible for the system to stay in this state, so $P_{11}=0$; the system also cannot transition to state 2 – because the cat would have stayed in the same box – so $P_{12}=0$, and by a similar argument for the mouse, $P_{14}=0$. Transitions to states 3 or 5 are allowed, and thus $P_{13},P_{15}\neq 0$ . $P={\begin{bmatrix}0&0&1/2&0&1/2\\0&0&1&0&0\\1/4&1/4&0&1/4&1/4\\0&0&1/2&0&1/2\\0&0&0&0&1\end{bmatrix}}.$ Long-term averages No matter what the initial state, the cat will eventually catch the mouse (with probability 1) and a stationary state π = (0,0,0,0,1) is approached as a limit.[2]: 55–59  To compute the long-term average or expected value of a stochastic variable $Y$, for each state $S_{j}$ and time $t_{k}$ there is a contribution of $Y_{j,k}\cdot P(S=S_{j},t=t_{k})$. Survival can be treated as a binary variable with $Y=1$ for a surviving state and $Y=0$ for the terminated state. The states with $Y=0$ do not contribute to the long-term average. Phase-type representation As State 5 is an absorbing state, the distribution of time to absorption is discrete phase-type distributed. 
Suppose the system starts in state 2, represented by the vector $[0,1,0,0,0]$. The states where the mouse has perished don't contribute to the survival average so state five can be ignored. The initial state and transition matrix can be reduced to, ${\boldsymbol {\tau }}=[0,1,0,0],\qquad T={\begin{bmatrix}0&0&{\frac {1}{2}}&0\\0&0&1&0\\{\frac {1}{4}}&{\frac {1}{4}}&0&{\frac {1}{4}}\\0&0&{\frac {1}{2}}&0\end{bmatrix}},$ and $(I-T)^{-1}{\boldsymbol {1}}={\begin{bmatrix}2.75\\4.5\\3.5\\2.75\end{bmatrix}},$ where $I$ is the identity matrix, and $\mathbf {1} $ represents a column matrix of all ones that acts as a sum over states. Since each state is occupied for one step of time the expected time of the mouse's survival is just the sum of the probability of occupation over all surviving states and steps in time, $E[K]={\boldsymbol {\tau }}\left(I+T+T^{2}+\cdots \right){\boldsymbol {1}}={\boldsymbol {\tau }}(I-T)^{-1}{\boldsymbol {1}}=4.5.$ Higher order moments are given by $E[K(K-1)\dots (K-n+1)]=n!{\boldsymbol {\tau }}(I-{T})^{-n}{T}^{n-1}\mathbf {1} \,.$ See also • Density matrix • Markov kernel, the equivalent of a stochastic matrix over a continuous state space • Matrix difference equation • Models of DNA evolution • Muirhead's inequality • Probabilistic automaton • Transition rate matrix, used to generalize the stochastic matrix to continuous time References 1. Asmussen, S. R. (2003). "Markov Chains". Applied Probability and Queues. Stochastic Modelling and Applied Probability. Vol. 51. pp. 3–8. doi:10.1007/0-387-21525-5_1. ISBN 978-0-387-00211-8. 2. Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. USA, NJ: John Wiley & Sons. pp. 9–11. ISBN 978-1-119-38755-8. 3. Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92–96. doi:10.1511/2013.101.92. 4. Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability. American Mathematical Soc. pp. 464–466. ISBN 978-0-8218-0749-1. 5. Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 33. doi:10.1112/blms/22.1.31. 6. Solow, Robert (1 January 1952). "On the Structure of Linear Models". Econometrica. 20 (1): 29–46. doi:10.2307/1907805. JSTOR 1907805. 7. Sittler, R. (1 December 1956). "Systems Analysis of Discrete Markov Processes". IRE Transactions on Circuit Theory. 3 (4): 257–266. doi:10.1109/TCT.1956.1086324. ISSN 0096-2007. 8. Evans, Selby (1 July 1967). "Vargus 7: Computed patterns from markov processes". Behavioral Science. 12 (4): 323–328. doi:10.1002/bs.3830120407. ISSN 1099-1743. 9. Gingerich, P. D. (1 January 1969). "Markov analysis of cyclic alluvial sediments". Journal of Sedimentary Research. 39 (1): 330–332. Bibcode:1969JSedR..39..330G. doi:10.1306/74d71c4e-2b21-11d7-8648000102c1865d. ISSN 1527-1404. 10. Krumbein, W. C.; Dacey, Michael F. (1 March 1969). "Markov chains and embedded Markov chains in geology". Journal of the International Association for Mathematical Geology. 1 (1): 79–96. doi:10.1007/BF02047072. ISSN 0020-5958. 11. Wolfe, Harry B. (1 May 1967). "Models for Conditioning Aging of Residential Structures". Journal of the American Institute of Planners. 33 (3): 192–196. doi:10.1080/01944366708977915. ISSN 0002-8991. 12. Krenk, S. (November 1989). 
"A Markov matrix for fatigue load simulation and rainflow range evaluation". Structural Safety. 6 (2–4): 247–258. doi:10.1016/0167-4730(89)90025-8. 13. Beck, J.Robert; Pauker, Stephen G. (1 December 1983). "The Markov Process in Medical Prognosis". Medical Decision Making. 3 (4): 419–458. doi:10.1177/0272989X8300300403. ISSN 0272-989X. PMID 6668990. 14. Gotz, Glenn A.; McCall, John J. (1 March 1983). "Sequential Analysis of the Stay/Leave Decision: U.S. Air Force Officers". Management Science. 29 (3): 335–351. doi:10.1287/mnsc.29.3.335. ISSN 0025-1909. 15. Kamusoko, Courage; Aniya, Masamu; Adi, Bongo; Manjoro, Munyaradzi (1 July 2009). "Rural sustainability under threat in Zimbabwe – Simulation of future land use/cover changes in the Bindura district based on the Markov-cellular automata model". Applied Geography. 29 (3): 435–447. doi:10.1016/j.apgeog.2008.10.002. Matrix classes Explicitly constrained entries • Alternant • Anti-diagonal • Anti-Hermitian • Anti-symmetric • Arrowhead • Band • Bidiagonal • Bisymmetric • Block-diagonal • Block • Block tridiagonal • Boolean • Cauchy • Centrosymmetric • Conference • Complex Hadamard • Copositive • Diagonally dominant • Diagonal • Discrete Fourier Transform • Elementary • Equivalent • Frobenius • Generalized permutation • Hadamard • Hankel • Hermitian • Hessenberg • Hollow • Integer • Logical • Matrix unit • Metzler • Moore • Nonnegative • Pentadiagonal • Permutation • Persymmetric • Polynomial • Quaternionic • Signature • Skew-Hermitian • Skew-symmetric • Skyline • Sparse • Sylvester • Symmetric • Toeplitz • Triangular • Tridiagonal • Vandermonde • Walsh • Z Constant • Exchange • Hilbert • Identity • Lehmer • Of ones • Pascal • Pauli • Redheffer • Shift • Zero Conditions on eigenvalues or eigenvectors • Companion • Convergent • Defective • Definite • Diagonalizable • Hurwitz • Positive-definite • Stieltjes Satisfying conditions on products or inverses • Congruent • Idempotent or Projection • Invertible • Involutory • Nilpotent • Normal • Orthogonal • Unimodular • Unipotent • Unitary • Totally unimodular • Weighing With specific applications • Adjugate • Alternating sign • Augmented • Bézout • Carleman • Cartan • Circulant • Cofactor • Commutation • Confusion • Coxeter • Distance • Duplication and elimination • Euclidean distance • Fundamental (linear differential equation) • Generator • Gram • Hessian • Householder • Jacobian • Moment • Payoff • Pick • Random • Rotation • Seifert • Shear • Similarity • Symplectic • Totally positive • Transformation Used in statistics • Centering • Correlation • Covariance • Design • Doubly stochastic • Fisher information • Hat • Precision • Stochastic • Transition Used in graph theory • Adjacency • Biadjacency • Degree • Edmonds • Incidence • Laplacian • Seidel adjacency • Tutte Used in science and engineering • Cabibbo–Kobayashi–Maskawa • Density • Fundamental (computer vision) • Fuzzy associative • Gamma • Gell-Mann • Hamiltonian • Irregular • Overlap • S • State transition • Substitution • Z (chemistry) Related terms • Jordan normal form • Linear independence • Matrix exponential • Matrix representation of conic sections • Perfect matrix • Pseudoinverse • Row echelon form • Wronskian •  Mathematics portal • List of matrices • Category:Matrices Authority control: National • France • BnF data • Israel • United States
Substructural logic In logic, a substructural logic is a logic lacking one of the usual structural rules (e.g. of classical and intuitionistic logic), such as weakening, contraction, exchange or associativity. Two of the more significant substructural logics are relevance logic and linear logic. Examples In a sequent calculus, one writes each line of a proof as $\Gamma \vdash \Sigma $. Here the structural rules are rules for rewriting the LHS of the sequent, denoted Γ, initially conceived of as a string (sequence) of propositions. The standard interpretation of this string is as conjunction: we expect to read ${\mathcal {A}},{\mathcal {B}}\vdash {\mathcal {C}}$ as the sequent notation for (A and B) implies C. Here we are taking the RHS Σ to be a single proposition C (which is the intuitionistic style of sequent); but everything applies equally to the general case, since all the manipulations are taking place to the left of the turnstile symbol $\vdash $. Since conjunction is a commutative and associative operation, the formal setting-up of sequent theory normally includes structural rules for rewriting the sequent Γ accordingly—for example for deducing ${\mathcal {B}},{\mathcal {A}}\vdash {\mathcal {C}}$ from ${\mathcal {A}},{\mathcal {B}}\vdash {\mathcal {C}}$. There are further structural rules corresponding to the idempotent and monotonic properties of conjunction: from $\Gamma ,{\mathcal {A}},{\mathcal {A}},\Delta \vdash {\mathcal {C}}$ we can deduce $\Gamma ,{\mathcal {A}},\Delta \vdash {\mathcal {C}}$. Also from $\Gamma ,{\mathcal {A}},\Delta \vdash {\mathcal {C}}$ one can deduce, for any B, $\Gamma ,{\mathcal {A}},{\mathcal {B}},\Delta \vdash {\mathcal {C}}$. Linear logic, in which duplicated hypotheses 'count' differently from single occurrences, leaves out both of these rules, while relevant (or relevance) logics merely leave out the latter rule, on the ground that B is clearly irrelevant to the conclusion. The above are basic examples of structural rules. These rules are not contentious when applied in conventional propositional calculus. They occur naturally in proof theory, and were first noticed there (before receiving a name). Premise composition There are numerous ways to compose premises (and in the multiple-conclusion case, conclusions as well). One way is to collect them into a set. But since e.g. {a,a} = {a} we have contraction for free if premises are sets. We also have associativity and permutation (or commutativity) for free as well, among other properties. In substructural logics, typically premises are not composed into sets, but rather they are composed into more fine-grained structures, such as trees or multisets (sets that distinguish multiple occurrences of elements) or sequences of formulae. For example, in linear logic, since contraction fails, the premises must be composed in something at least as fine-grained as multisets. History Substructural logics are a relatively young field. The first conference on the topic was held in October 1990 in Tübingen, as "Logics with Restricted Structural Rules". During the conference, Kosta Došen proposed the term "substructural logics", which is in use today. See also • Substructural type system • Residuated lattice Notes References • F. Paoli (2002), Substructural Logics: A Primer, Kluwer. • G. Restall (2000) An Introduction to Substructural Logics, Routledge. Further reading • Galatos, Nikolaos, Peter Jipsen, Tomasz Kowalski, and Hiroakira Ono (2007), Residuated Lattices.
An Algebraic Glimpse at Substructural Logics, Elsevier, ISBN 978-0-444-52141-5. External links • Media related to Substructural logic at Wikimedia Commons • Restall, Greg. "Substructural logics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Subgraph isomorphism problem In theoretical computer science, the subgraph isomorphism problem is a computational task in which two graphs G and H are given as input, and one must determine whether G contains a subgraph that is isomorphic to H. Subgraph isomorphism is a generalization of both the maximum clique problem and the problem of testing whether a graph contains a Hamiltonian cycle, and is therefore NP-complete.[1] However certain other cases of subgraph isomorphism may be solved in polynomial time.[2] Sometimes the name subgraph matching is also used for the same problem. This name puts emphasis on finding such a subgraph as opposed to the bare decision problem. Decision problem and computational complexity To prove subgraph isomorphism is NP-complete, it must be formulated as a decision problem. The input to the decision problem is a pair of graphs G and H. The answer to the problem is positive if H is isomorphic to a subgraph of G, and negative otherwise. Formal question: Let $G=(V,E)$, $H=(V^{\prime },E^{\prime })$ be graphs. Is there a subgraph $G_{0}=(V_{0},E_{0})$ with $V_{0}\subseteq V$ and $E_{0}\subseteq E\cap (V_{0}\times V_{0})$ such that $G_{0}\cong H$? That is, does there exist a bijection $f\colon V_{0}\rightarrow V^{\prime }$ such that $\{\,v_{1},v_{2}\,\}\in E_{0}\iff \{\,f(v_{1}),f(v_{2})\,\}\in E^{\prime }$? The proof of subgraph isomorphism being NP-complete is simple and based on a reduction from the clique problem, an NP-complete decision problem in which the input is a single graph G and a number k, and the question is whether G contains a complete subgraph with k vertices. To translate this to a subgraph isomorphism problem, simply let H be the complete graph Kk; then the answer to the subgraph isomorphism problem for G and H is equal to the answer to the clique problem for G and k. Since the clique problem is NP-complete, this polynomial-time many-one reduction shows that subgraph isomorphism is also NP-complete.[3] An alternative reduction from the Hamiltonian cycle problem translates a graph G which is to be tested for Hamiltonicity into the pair of graphs G and H, where H is a cycle having the same number of vertices as G. Because the Hamiltonian cycle problem is NP-complete even for planar graphs, this shows that subgraph isomorphism remains NP-complete even in the planar case.[4] Subgraph isomorphism is a generalization of the graph isomorphism problem, which asks whether G is isomorphic to H: the answer to the graph isomorphism problem is true if and only if G and H both have the same numbers of vertices and edges and the subgraph isomorphism problem for G and H is true. However the complexity-theoretic status of graph isomorphism remains an open question. In the context of the Aanderaa–Karp–Rosenberg conjecture on the query complexity of monotone graph properties, Gröger (1992) showed that any subgraph isomorphism problem has query complexity $\Omega (n^{3/2})$; that is, solving the subgraph isomorphism requires an algorithm to check the presence or absence in the input of $\Omega (n^{3/2})$ different edges in the graph.[5] Algorithms Ullmann (1976) describes a recursive backtracking procedure for solving the subgraph isomorphism problem. Although its running time is, in general, exponential, it takes polynomial time for any fixed choice of H (with a polynomial that depends on the choice of H).
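A recursive backtracking search of this kind can be sketched compactly. The following minimal Python sketch decides the (non-induced) formulation defined above; it omits Ullmann's refinement step, so it is exponential in general:

```python
# Backtracking search for a subgraph of G isomorphic to H (non-induced variant).
def subgraph_isomorphic(G, H):
    """G, H: adjacency sets {vertex: set(neighbors)}."""
    h_nodes = list(H)

    def extend(mapping):
        if len(mapping) == len(h_nodes):
            return True
        u = h_nodes[len(mapping)]               # next pattern vertex to place
        for v in G:
            if v in mapping.values():
                continue                        # keep the mapping injective
            # every already-mapped neighbor of u must map to a neighbor of v
            if all(mapping[w] in G[v] for w in H[u] if w in mapping):
                mapping[u] = v
                if extend(mapping):
                    return True
                del mapping[u]                  # backtrack
        return False

    return extend({})

# A 4-cycle contains a path on 3 vertices but no triangle.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
P3 = {0: {1}, 1: {0, 2}, 2: {1}}
K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(subgraph_isomorphic(C4, P3))  # True
print(subgraph_isomorphic(C4, K3))  # False
```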
When G is a planar graph (or more generally a graph of bounded expansion) and H is fixed, the running time of subgraph isomorphism can be reduced to linear time.[2] Ullmann (2010) is a substantial update to the 1976 subgraph isomorphism algorithm paper. Cordella (2004) proposed another algorithm based on Ullmann's, VF2, which improves the refinement process using different heuristics and uses significantly less memory. Bonnici (2013) proposed a better algorithm, which improves the initial order of the vertices using some heuristics. The current state-of-the-art solver for moderately sized, hard instances is the Glasgow Subgraph Solver (McCreesh 2020).[6] This solver adopts a constraint programming approach, using bit-parallel data structures and specialized propagation algorithms for performance. It supports most common variations of the problem and is capable of counting or enumerating solutions as well as deciding whether one exists. For large graphs, state-of-the-art algorithms include CFL-Match and Turboiso, and extensions thereof such as DAF by Han (2019). Applications Subgraph isomorphism has been applied in the area of cheminformatics to find similarities between chemical compounds from their structural formula; often in this area the term substructure search is used.[7] A query structure is often defined graphically using a structure editor program; SMILES-based database systems typically define queries using SMARTS, a SMILES extension. The closely related problem of counting the number of isomorphic copies of a graph H in a larger graph G has been applied to pattern discovery in databases,[8] the bioinformatics of protein-protein interaction networks,[9] and in exponential random graph methods for mathematically modeling social networks.[10] Ohlrich et al. (1993) describe an application of subgraph isomorphism in the computer-aided design of electronic circuits. Subgraph matching is also a substep in graph rewriting (the most runtime-intensive one), and is therefore offered by graph rewrite tools. The problem is also of interest in artificial intelligence, where it is considered part of an array of pattern matching in graphs problems; an extension of subgraph isomorphism known as graph mining is also of interest in that area.[11] See also • Frequent subtree mining • Induced subgraph isomorphism problem • Maximum common edge subgraph problem • Maximum common subgraph isomorphism problem Notes 1. The original Cook (1971) paper that proves the Cook–Levin theorem already showed subgraph isomorphism to be NP-complete, using a reduction from 3-SAT involving cliques. 2. Eppstein (1999); Nešetřil & Ossona de Mendez (2012) 3. Wegener, Ingo (2005), Complexity Theory: Exploring the Limits of Efficient Algorithms, Springer, p. 81, ISBN 9783540210450. 4. de la Higuera, Colin; Janodet, Jean-Christophe; Samuel, Émilie; Damiand, Guillaume; Solnon, Christine (2013), "Polynomial algorithms for open plane graph and subgraph isomorphisms" (PDF), Theoretical Computer Science, 498: 76–99, doi:10.1016/j.tcs.2013.05.026, MR 3083515, It is known since the mid-70's that the isomorphism problem is solvable in polynomial time for plane graphs. However, it has also been noted that the subisomorphism problem is still NP-complete, in particular because the Hamiltonian cycle problem is NP-complete for planar graphs. 5. Here Ω invokes Big Omega notation. 6.
For an experimental evaluation, see Solnon (2019). 7. Ullmann (1976) 8. Kuramochi & Karypis (2001). 9. Pržulj, Corneil & Jurisica (2006). 10. Snijders et al. (2006). 11. http://www.aaai.org/Papers/Symposia/Fall/2006/FS-06-02/FS06-02-007.pdf; expanded version at https://e-reports-ext.llnl.gov/pdf/332302.pdf References • Cook, S. A. (1971), "The complexity of theorem-proving procedures", Proc. 3rd ACM Symposium on Theory of Computing, pp. 151–158, doi:10.1145/800157.805047, S2CID 7573663. • Eppstein, David (1999), "Subgraph isomorphism in planar graphs and related problems" (PDF), Journal of Graph Algorithms and Applications, 3 (3): 1–27, arXiv:cs.DS/9911003, doi:10.7155/jgaa.00014, S2CID 2303110. • Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, ISBN 978-0-7167-1045-5. A1.4: GT48, pg.202. • Gröger, Hans Dietmar (1992), "On the randomized complexity of monotone graph properties" (PDF), Acta Cybernetica, 10 (3): 119–127. • Han, Myoungji; Kim, Hyunjoon; Gu, Geonmo; Park, Kunsoo; Han, Wookshin (2019), "Efficient Subgraph Matching: Harmonizing Dynamic Programming, Adaptive Matching Order, and Failing Set Together", SIGMOD, doi:10.1145/3299869.3319880, S2CID 195259296 • Kuramochi, Michihiro; Karypis, George (2001), "Frequent subgraph discovery", 1st IEEE International Conference on Data Mining, p. 313, CiteSeerX 10.1.1.22.4992, doi:10.1109/ICDM.2001.989534, ISBN 978-0-7695-1119-1, S2CID 8684662. • Ohlrich, Miles; Ebeling, Carl; Ginting, Eka; Sather, Lisa (1993), "SubGemini: identifying subcircuits using a fast subgraph isomorphism algorithm", Proceedings of the 30th international Design Automation Conference, pp. 31–37, doi:10.1145/157485.164556, ISBN 978-0-89791-577-9, S2CID 5889119. • Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), "18.3 The subgraph isomorphism problem and Boolean queries", Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28, Springer, pp. 400–401, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058. • Pržulj, N.; Corneil, D. G.; Jurisica, I. (2006), "Efficient estimation of graphlet frequency distributions in protein–protein interaction networks", Bioinformatics, 22 (8): 974–980, doi:10.1093/bioinformatics/btl030, PMID 16452112. • Snijders, T. A. B.; Pattison, P. E.; Robins, G.; Handcock, M. S. (2006), "New specifications for exponential random graph models", Sociological Methodology, 36 (1): 99–153, CiteSeerX 10.1.1.62.7975, doi:10.1111/j.1467-9531.2006.00176.x, S2CID 10800726. • Ullmann, Julian R. (1976), "An algorithm for subgraph isomorphism", Journal of the ACM, 23 (1): 31–42, doi:10.1145/321921.321925, S2CID 17268751. • Jamil, Hasan (2011), "Computing Subgraph Isomorphic Queries using Structural Unification and Minimum Graph Structures", 26th ACM Symposium on Applied Computing, pp. 1058–1063. • Ullmann, Julian R. (2010), "Bit-vector algorithms for binary constraint satisfaction and subgraph isomorphism", Journal of Experimental Algorithmics, 15: 1.1, CiteSeerX 10.1.1.681.8766, doi:10.1145/1671970.1921702, S2CID 15021184. • Cordella, Luigi P. (2004), "A (sub) graph isomorphism algorithm for matching large graphs", IEEE Transactions on Pattern Analysis and Machine Intelligence, 26 (10): 1367–1372, CiteSeerX 10.1.1.101.5342, doi:10.1109/tpami.2004.75, PMID 15641723, S2CID 833657 • Bonnici, V.; Giugno, R. 
(2013), "A subgraph isomorphism algorithm and its application to biochemical data", BMC Bioinformatics, 14(Suppl7) (13): S13, doi:10.1186/1471-2105-14-s7-s13, PMC 3633016, PMID 23815292 • Carletti, V.; Foggia, P.; Saggese, A.; Vento, M. (2018), "Challenging the time complexity of exact subgraph isomorphism for huge and dense graphs with VF3", IEEE Transactions on Pattern Analysis and Machine Intelligence, 40 (4): 804–818, doi:10.1109/TPAMI.2017.2696940, PMID 28436848, S2CID 3709576 • Solnon, Christine (2019), "Experimental Evaluation of Subgraph Isomorphism Solvers" (PDF), Graph-Based Representations in Pattern Recognition - 12th IAPR-TC-15 International Workshop, GbRPR 2019, Tours, France, June 19-21, 2019, Proceedings, Lecture Notes in Computer Science, vol. 11510, Springer, pp. 1–13, doi:10.1007/978-3-030-20081-7_1, ISBN 978-3-030-20080-0, S2CID 128270779 • McCreesh, Ciaran; Prosser, Patrick; Trimble, James (2020), "The Glasgow Subgraph Solver: Using Constraint Programming to Tackle Hard Subgraph Isomorphism Problem Variants", Graph Transformation - 13th International Conference, ICGT 2020, Held as Part of STAF 2020, Bergen, Norway, June 25-26, 2020, Proceedings, Lecture Notes in Computer Science, vol. 12150, Springer, pp. 316–324, doi:10.1007/978-3-030-51372-6_19, ISBN 978-3-030-51371-9
Subsumption lattice A subsumption lattice is a mathematical structure used in the theoretical background of automated theorem proving and other symbolic computation applications. Definition A term t1 is said to subsume a term t2 if a substitution σ exists such that σ applied to t1 yields t2. In this case, t1 is also called more general than t2, and t2 is called more specific than t1, or an instance of t1. The set of all (first-order) terms over a given signature can be made a lattice over the partial ordering relation "... is more specific than ..." as follows: • consider two terms equal if they differ only in their variable naming,[1] • add an artificial minimal element Ω (the overspecified term), which is considered to be more specific than any other term. This lattice is called the subsumption lattice. Two terms are said to be unifiable if their meet differs from Ω. Properties The join and the meet operation in this lattice are called anti-unification and unification, respectively. A variable x and the artificial element Ω are the top and the bottom element of the lattice, respectively. Each ground term, i.e. each term without variables, is an atom of the lattice. The lattice has infinite descending chains, e.g. x, g(x), g(g(x)), g(g(g(x))), ..., but no infinite ascending ones. If f is a binary function symbol, g is a unary function symbol, and x and y denote variables, then the terms f(x,y), f(g(x),y), f(g(x),g(y)), f(x,x), and f(g(x),g(x)) form the minimal non-modular lattice N5 (see Pic. 1); its appearance prevents the subsumption lattice from being modular and hence also from being distributive. The set of terms unifiable with a given term need not be closed with respect to meet; Pic. 2 shows a counter-example. Denoting by Gnd(t) the set of all ground instances of a term t, the following properties hold:[2] • t equals the join of all members of Gnd(t), modulo renaming, • t1 is an instance of t2 if and only if Gnd(t1) ⊆ Gnd(t2), • terms with the same set of ground instances are equal modulo renaming, • if t is the meet of t1 and t2, then Gnd(t) = Gnd(t1) ∩ Gnd(t2), • if t is the join of t1 and t2, then Gnd(t) ⊇ Gnd(t1) ∪ Gnd(t2). 'Sublattice' of linear terms The set of linear terms, that is of terms without multiple occurrences of a variable, is a sub-poset of the subsumption lattice, and is itself a lattice. This lattice, too, includes N5 and the minimal non-distributive lattice M3 as sublattices (see Pic. 3 and Pic. 4) and is hence not modular, let alone distributive. The meet operation always yields the same result in the lattice of all terms as in the lattice of linear terms. The join operation in the all-terms lattice always yields an instance of the join in the linear-terms lattice; for example, the (ground) terms f(a,a) and f(b,b) have the join f(x,x) in the all-terms lattice and f(x,y) in the linear-terms lattice. As the join operations do not in general agree, the linear-terms lattice is not, properly speaking, a sublattice of the all-terms lattice. The join and meet of two proper[3] linear terms, i.e. their anti-unification and unification, correspond to the intersection and union of their path sets, respectively. Therefore, every sublattice of the lattice of linear terms that does not contain Ω is isomorphic to a set lattice, and hence distributive (see Pic. 5). Origin The subsumption lattice was apparently first investigated by Gordon D. Plotkin in 1970.[4] References 1. formally: factorize the set of all terms by the equivalence relation "...
is a renaming of ..."; for example, the term f(x,y) is a renaming of f(y,x), but not of f(x,x) 2. Reynolds, John C. (1970). Meltzer, B.; Michie, D. (eds.). "Transformational Systems and the Algebraic Structure of Atomic Formulas" (PDF). Machine Intelligence. Edinburgh University Press. 5: 135–151. 3. i.e. different from Ω 4. Plotkin, Gordon D. (Jun 1970). Lattice Theoretic Properties of Subsumption. Edinburgh: Univ., Dept. of Machine Intell. and Perception.
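To make the subsumption relation defined above concrete, the following sketch tests whether one first-order term subsumes another by searching for a substitution. The nested-tuple term encoding and the Prolog-style convention that variables start with an upper-case letter are assumptions made purely for illustration.

```python
def match(general, specific, subst=None):
    """Try to extend `subst` so that applying it to `general` yields
    `specific`; return the substitution, or None if impossible.

    Terms are strings (variables start with an upper-case letter) or
    tuples (function_symbol, arg1, ..., argn).
    """
    if subst is None:
        subst = {}
    if isinstance(general, str) and general[0].isupper():  # a variable
        if general in subst:                 # already bound: must agree
            return subst if subst[general] == specific else None
        subst = dict(subst)
        subst[general] = specific
        return subst
    if isinstance(general, tuple) and isinstance(specific, tuple) \
            and general[0] == specific[0] and len(general) == len(specific):
        for g, s in zip(general[1:], specific[1:]):
            subst = match(g, s, subst)
            if subst is None:
                return None
        return subst
    return subst if general == specific else None

def subsumes(t1, t2):
    """t1 subsumes t2 (t1 is more general) iff some substitution sends t1 to t2."""
    return match(t1, t2) is not None

# f(X, Y) subsumes f(g(a), b), but not vice versa:
print(subsumes(('f', 'X', 'Y'), ('f', ('g', 'a'), 'b')))  # True
print(subsumes(('f', ('g', 'a'), 'b'), ('f', 'X', 'Y')))  # False
```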
Subtangent In geometry, the subtangent and related terms are certain line segments defined using the line tangent to a curve at a given point and the coordinate axes. The terms are somewhat archaic today but were in common use until the early part of the 20th century. Definitions Let P = (x, y) be a point on a given curve with A = (x, 0) its projection onto the x-axis. Draw the tangent to the curve at P and let T be the point where this line intersects the x-axis. Then TA is defined to be the subtangent at P. Similarly, if the normal to the curve at P intersects the x-axis at N, then AN is called the subnormal. In this context, the lengths PT and PN are called the tangent and normal, not to be confused with the tangent line and the normal line, which are also called the tangent and normal. Equations Let φ be the angle of inclination of the tangent with respect to the x-axis; this is also known as the tangential angle. Then $\tan \varphi ={\frac {dy}{dx}}={\frac {AP}{TA}}={\frac {AN}{AP}}.$ So the subtangent is $y\cot \varphi ={\frac {y}{\tfrac {dy}{dx}}},$ and the subnormal is $y\tan \varphi =y{\frac {dy}{dx}}.$ The normal is given by $y\sec \varphi =y{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}},$ and the tangent is given by $y\csc \varphi ={\frac {y}{\tfrac {dy}{dx}}}{\sqrt {1+\left({\frac {dy}{dx}}\right)^{2}}}.$ Polar definitions Let P = (r, θ) be a point on a given curve defined by polar coordinates and let O denote the origin. Draw a line through O which is perpendicular to OP and let T now be the point where this line intersects the tangent to the curve at P. Similarly, let N now be the point where the normal to the curve intersects the line. Then OT and ON are, respectively, called the polar subtangent and polar subnormal of the curve at P. Polar equations Let ψ be the angle between the tangent and the ray OP; this is also known as the polar tangential angle. Then $\tan \psi ={\frac {r}{\tfrac {dr}{d\theta }}}={\frac {OP}{ON}}={\frac {OT}{OP}}.$ So the polar subtangent is $r\tan \psi ={\frac {r^{2}}{\tfrac {dr}{d\theta }}},$ and the polar subnormal is $r\cot \psi ={\frac {dr}{d\theta }}.$ References • J. Edwards (1892). Differential Calculus. London: MacMillan and Co. pp. 150, 154. • B. Williamson, "Subtangent and Subnormal" and "Polar Subtangent and Polar Subnormal", in An Elementary Treatise on the Differential Calculus (1899), pp. 215, 223. Internet Archive
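As a quick check of these formulas (a sketch assuming the SymPy library is available), the subtangent, subnormal, and normal of the parabola y = x² can be computed symbolically:

```python
import sympy as sp

x = sp.symbols('x')
y = x**2                       # the curve y = x^2

dydx = sp.diff(y, x)           # slope of the tangent, tan(phi)

subtangent = sp.simplify(y / dydx)                  # y / (dy/dx)
subnormal = sp.simplify(y * dydx)                   # y * (dy/dx)
normal_len = sp.simplify(y * sp.sqrt(1 + dydx**2))  # y * sqrt(1 + (dy/dx)^2)

print(subtangent)   # x/2: the subtangent of this parabola is half the abscissa
print(subnormal)    # 2*x**3
print(normal_len)   # x**2*sqrt(4*x**2 + 1)
```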
Subtended angle In geometry, an angle is subtended by an arc, line segment or any other section of a curve when its two rays pass through the endpoints of that arc, line segment or curve section. Conversely, the arc, line segment or curve section confined within the rays of an angle is regarded as the corresponding subtension of that angle. It is also sometimes said that an arc is intercepted or enclosed by that angle. The precise meaning varies with context. For example, one may speak of the angle subtended by an arc of a circle when the angle's vertex is the centre of the circle. See also • Central angle • Inscribed angle External links • Definition of subtended angle, mathisfun.com, with interactive applet • How an object subtends an angle, Math Open Reference, with interactive applet • Angle definition pages, Math Open Reference, with interactive applets that are also useful in a classroom setting.
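As a simple quantitative illustration (a sketch of the standard angular-size computation, with names chosen for illustration): a segment of length h, viewed face-on from a point at perpendicular distance d from its midpoint, subtends the angle 2·arctan(h/(2d)).

```python
import math

def subtended_angle(length, distance):
    """Angle in radians subtended by a segment of the given length, seen
    from a point at the given perpendicular distance from its midpoint."""
    return 2 * math.atan(length / (2 * distance))

# The Moon (diameter ~3474 km, distance ~384400 km) subtends about half a degree:
print(math.degrees(subtended_angle(3474, 384400)))  # ~0.52
```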
Subterminal object In category theory, a branch of mathematics, a subterminal object is an object X of a category C with the property that every object of C has at most one morphism into X.[1] If X is subterminal, then the pair of identity morphisms $(1_{X},1_{X})$ makes X into the product of X and X. If C has a terminal object 1, then an object X is subterminal if and only if it is a subobject of 1, hence the name.[2] The category of categories with subterminal objects and functors preserving them is not accessible.[3] References 1. Pitt, David; Rydeheard, David E.; Johnstone, Peter (12 September 1995). Category Theory and Computer Science: 6th International Conference, CTCS '95, Cambridge, United Kingdom, August 7 - 11, 1995. Proceedings. Springer. Retrieved 18 February 2017. 2. Ong, Luke (10 March 2010). Foundations of Software Science and Computational Structures: 13th International Conference, FOSSACS 2010, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2010, Paphos, Cyprus, March 20-28, 2010, Proceedings. Springer. ISBN 9783642120329. Retrieved 18 February 2017. 3. Barr, Michael; Wells, Charles (September 1992). "On the limitations of sketches". Canadian Mathematical Bulletin. Canadian Mathematical Society. 35 (3): 287–294. doi:10.4153/CMB-1992-040-7. External links • Subterminal object at the nLab
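For a small finite category presented as an explicit table of hom-sets, the defining condition can be checked directly. The encoding below is a toy illustration (all names invented), applied to the two-object poset category 0 ≤ 1, in which every object is subterminal because each hom-set has at most one element:

```python
def is_subterminal(X, objects, hom):
    """X is subterminal iff every object has at most one morphism into X.

    `hom` maps a pair (A, B) to the set of morphisms from A to B;
    missing pairs denote an empty hom-set."""
    return all(len(hom.get((A, X), set())) <= 1 for A in objects)

# The poset category 0 <= 1: identities plus a single arrow f: 0 -> 1.
objects = [0, 1]
hom = {(0, 0): {'id0'}, (1, 1): {'id1'}, (0, 1): {'f'}}

print([X for X in objects if is_subterminal(X, objects, hom)])  # [0, 1]
```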
Subtle cardinal In mathematics, subtle cardinals and ethereal cardinals are closely related kinds of large cardinal number. A cardinal κ is called subtle if for every closed and unbounded C ⊂ κ and for every sequence $A=\langle A_{\delta }:\delta <\kappa \rangle$ of length κ with $A_{\delta }\subset \delta$ for each δ, there exist α, β belonging to C, with α < β, such that $A_{\alpha }=A_{\beta }\cap \alpha$. A cardinal κ is called ethereal if for every closed and unbounded C ⊂ κ and for every sequence $A=\langle A_{\delta }:\delta <\kappa \rangle$ of length κ with $A_{\delta }\subset \delta$ and $A_{\delta }$ of the same cardinality as δ for each δ, there exist α, β belonging to C, with α < β, such that $\operatorname {card} (\alpha )=\operatorname {card} (A_{\beta }\cap A_{\alpha })$. Subtle cardinals were introduced by Jensen & Kunen (1969). Ethereal cardinals were introduced by Ketonen (1974). Any subtle cardinal is ethereal, and any strongly inaccessible ethereal cardinal is subtle. Theorem There is a subtle cardinal ≤ κ if and only if every transitive set S of cardinality κ contains x and y such that x is a proper subset of y and x ≠ Ø and x ≠ {Ø}. An infinite ordinal κ is subtle if and only if for every λ < κ, every transitive set S of cardinality κ includes a chain (under inclusion) of order type λ. See also • List of large cardinal properties References • Friedman, Harvey (2001), "Subtle Cardinals and Linear Orderings", Annals of Pure and Applied Logic, 107 (1–3): 1–34, doi:10.1016/S0168-0072(00)00019-1 • Jensen, R. B.; Kunen, K. (1969), Some Combinatorial Properties of L and V, Unpublished manuscript • Ketonen, Jussi (1974), "Some combinatorial principles", Transactions of the American Mathematical Society, 188: 387–394, doi:10.2307/1996785, ISSN 0002-9947, JSTOR 1996785, MR 0332481
Subtract a square Subtract-a-square (also referred to as take-a-square) is a two-player mathematical subtraction game. It is played by two people with a pile of coins (or other tokens) between them. The players take turns removing coins from the pile, always removing a non-zero square number of coins. The game is usually played as a normal play game, which means that the player who removes the last coin wins.[1][2] It is an impartial game, meaning that the set of moves available from any position does not depend on whose turn it is. Solomon W. Golomb credits the invention of this game to Richard A. Epstein.[3] Example A normal play game starting with 13 coins is a win for the first player provided they start with a subtraction of 1: player 1: 13 − 1·1 = 12. Player 2 now has three choices: subtract 1, 4 or 9. In each of these cases, player 1 can ensure that within a few moves the number 2 gets passed on to player 2: • If player 2 subtracts 1 (12 − 1·1 = 11), player 1 replies 11 − 3·3 = 2. • If player 2 subtracts 4 (12 − 2·2 = 8), player 1 replies 8 − 1·1 = 7; player 2 must then move to 6 (7 − 1·1) or 3 (7 − 2·2), and player 1 replies 6 − 2·2 = 2 or 3 − 1·1 = 2, respectively. • If player 2 subtracts 9 (12 − 3·3 = 3), player 1 replies 3 − 1·1 = 2. Now player 2 has to subtract 1, and player 1 subsequently does the same: player 2: 2 − 1·1 = 1; player 1: 1 − 1·1 = 0; player 2 loses. Mathematical theory In the above example, the number 13 represents a winning or 'hot' position, whilst the number 2 represents a losing or 'cold' position. Given an integer list with each integer labeled 'hot' or 'cold', the strategy of the game is simple: try to pass on a 'cold' number to your opponent. This is always possible provided you are presented with a 'hot' number. Which numbers are 'hot' and which numbers are 'cold' can be determined recursively: 1. the number 0 is 'cold', whilst 1 is 'hot' 2. if all numbers 1 .. N have been classified as either 'hot' or 'cold', then • the number N+1 is 'cold' if only 'hot' numbers can be reached by subtracting a positive square • the number N+1 is 'hot' if at least one 'cold' number can be reached by subtracting a positive square Using this algorithm, a list of cold numbers is easily derived: 0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44, … (sequence A030193 in the OEIS) A faster divide and conquer algorithm can compute the same sequence of numbers, up to any threshold $n$, in time $O(n\log ^{2}n)$.[4] There are infinitely many cold numbers. More strongly, the number of cold numbers up to some threshold $n$ must be at least proportional to the square root of $n$, for otherwise there would not be enough of them to provide winning moves from all the hot numbers.[3] Cold numbers tend to end in 0, 2, 4, 5, 7, or 9. Cold values that end with other digits are quite uncommon.[3] This holds in particular for cold numbers ending in 6. Of the more than 180,000 cold numbers less than 40 million, only one ends in a 6: 11,356.[5] No two cold numbers can differ by a square, because if they did then a move from the larger of the two to the smaller would be winning, contradicting the assumption that they are both cold. Therefore, by the Furstenberg–Sárközy theorem, the natural density of the cold numbers is zero. That is, for every $\epsilon >0$, and for all sufficiently large $n$, the fraction of the numbers up to $n$ that are cold is less than $\epsilon $.
More strongly, for every $n$ there are $O(n/(\log n)^{{\frac {1}{4}}\log \log \log \log n})$ cold numbers up to $n$.[6] The exact growth rate of the cold numbers remains unknown, but experimentally the number of cold positions up to any given threshold $n$ appears to be roughly $n^{0.7}$.[4] Extensions The game subtract-a-square can also be played with multiple numbers. At each turn the player to make a move first selects one of the numbers, and then subtracts a square from it. Such a 'sum of normal games' can be analysed using the Sprague–Grundy theorem. This theorem states that each position in the game subtract-a-square may be mapped onto an equivalent nim heap size. Optimal play consists of moving to a collection of numbers such that the nim-sum of their equivalent nim heap sizes is zero, when this is possible. The equivalent nim heap size of a position may be calculated as the minimum excluded value of the equivalent sizes of the positions that can be reached by a single move. For subtract-a-square positions of values 0, 1, 2, ... the equivalent nim heap sizes are 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 3, 2, 3, 4, … (sequence A014586 in the OEIS). In particular, a position of subtract-a-square is cold if and only if its equivalent nim heap size is zero. It is also possible to play variants of this game using other allowed moves than the square numbers. For instance, Golomb defined an analogous game based on the Moser–de Bruijn sequence, a sequence that grows at a similar asymptotic rate to the squares, for which it is possible to determine more easily the set of cold positions and to define an easily computed optimal move strategy.[3] Additional goals may also be added for the players without changing the winning conditions. For example, the winner can be given a "score" based on how many moves it took to win (the goal being to obtain the lowest possible score) and the loser given the goal to force the winner to take as long as possible to reach victory. With this additional pair of goals and an assumption of optimal play by both players, the scores for starting positions 0, 1, 2, ... are 0, 1, 2, 3, 1, 2, 3, 4, 5, 1, 4, 3, 6, 7, 3, 4, 1, 8, 3, 5, 6, 3, 8, 5, 5, 1, 5, 3, 7, … (sequence A338027 in the OEIS). Misère game Subtract-a-square can also be played as a misère game, in which the player to make the last subtraction loses. The recursive algorithm to determine 'hot' and 'cold' numbers for the misère game is the same as that for the normal game, except that for the misère game the number 1 is 'cold' whilst 2 is 'hot'. It follows that the cold numbers for the misère variant are the cold numbers for the normal game shifted by 1: Misère play 'cold' numbers: 1, 3, 6, 8, 11, 13, 16, 18, 21, 23, 35, 40, 45, ... See also • Nim • Wythoff's game References 1. Silverman, David L. (1971), "61. Subtract-a-square", Your Move: Logic, Math and Word Puzzles for Enthusiasts, Dover Publications, p. 143, ISBN 9780486267319 2. Dunn, Angela (1980), "Subtract-a-square", Mathematical Bafflers, Dover Publications, p. 102, ISBN 9780486239613 3. Golomb, Solomon W. (1966), "A mathematical investigation of games of "take-away"", Journal of Combinatorial Theory, 1 (4): 443–458, doi:10.1016/S0021-9800(66)80016-9, MR 0209015. 4. Eppstein, David (2018), "Faster evaluation of subtraction games", in Ito, Hiro; Leonardi, Stefano; Pagli, Linda; Prencipe, Giuseppe (eds.), Proc. 
9th International Conference on Fun with Algorithms (FUN 2018), Leibniz International Proceedings in Informatics (LIPIcs), vol. 100, Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, pp. 20:1–20:12, doi:10.4230/lipics.fun.2018.20, ISBN 9783959770675, S2CID 4952124 5. Bush, David (October 12, 1992), "the uniqueness of 11,356", sci.math 6. Pintz, János; Steiger, W. L.; Szemerédi, Endre (1988), "On sets of natural numbers whose difference set contains no squares", Journal of the London Mathematical Society, Second Series, 37 (2): 219–231, doi:10.1112/jlms/s2-37.2.219, MR 0928519.
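The recursive 'hot'/'cold' classification described in the Mathematical theory section, and the nim-value mapping described in the Extensions section, both translate directly into short programs. The following sketch (function names invented for illustration) reproduces the cold numbers and the equivalent nim heap sizes listed above:

```python
def cold_positions(limit):
    """cold[n] is True when n is a losing ('cold') position of
    subtract-a-square under the normal play convention."""
    cold = [False] * (limit + 1)
    for n in range(limit + 1):
        moves = [k * k for k in range(1, int(n ** 0.5) + 1)]
        # n is cold iff every square move leads to a hot number
        # (for n = 0 there are no moves, so it is vacuously cold)
        cold[n] = not any(cold[n - s] for s in moves)
    return cold

cold = cold_positions(44)
print([n for n in range(45) if cold[n]])
# [0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44]  (OEIS A030193)

def nim_values(limit):
    """Equivalent nim heap sizes (Sprague-Grundy values): the minimum
    excluded value among the positions reachable by one move."""
    g = [0] * (limit + 1)
    for n in range(1, limit + 1):
        reachable = {g[n - k * k] for k in range(1, int(n ** 0.5) + 1)}
        g[n] = next(v for v in range(len(reachable) + 1) if v not in reachable)
    return g

print(nim_values(28))
# [0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 0, 1, 0, 1, 2,
#  0, 1, 0, 1, 2, 3, 2, 3, 4]  (OEIS A014586)
```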
Subtraction (disambiguation) Subtraction is a mathematical operation. Subtraction or subtract may also refer to: • - (album), pronounced "Subtract", a 2023 album by Ed Sheeran • Subtraction (film), a 2022 Iranian thriller • Subtraction (Babyland song), from the 2008 album Cavecraft • Subtraction (Sepultura song), from the 1991 album Arise See also • Minus sign • Hyphen (disambiguation)
Subtraction game In combinatorial game theory, a subtraction game is an abstract strategy game whose state can be represented by a natural number or vector of numbers (for instance, the numbers of game tokens in piles of tokens, or the positions of pieces on a board) and in which the allowed moves reduce these numbers.[1][2] Often, the moves of the game allow any number to be reduced by subtracting a value from a specified subtraction set, and different subtraction games vary in their subtraction sets.[1] These games also vary in whether the last player to move wins (the normal play convention) or loses (the misère play convention).[2] A third winning convention that has also been used is that a player who moves to a position with all numbers zero wins, but that any other position with no moves possible is a draw.[1] Examples Examples of notable subtraction games include the following: • Nim is a game whose state consists of multiple piles of tokens, such as coins or matchsticks, and a valid move removes any number of tokens from a single pile. Nim has a well-known optimal strategy in which the goal at each move is to reach a set of piles whose nim-sum is zero, and this strategy is central to the Sprague–Grundy theorem of optimal play in impartial games. However, when playing only with a single pile of tokens, optimal play is trivial (simply remove all the tokens in a single move).[3] • Subtract a square is a variation of nim in which only square numbers of tokens can be removed in a single move. The resulting game has a non-trivial strategy even for a single pile of tokens; the Furstenberg–Sárközy theorem implies that its winning positions have density zero among the integers.[4] • Fibonacci nim is another variation of nim in which the allowed moves depend on the previous moves to the same pile of tokens. On the first move to a pile, it is forbidden to take the whole pile, and on subsequent moves, the amount subtracted must be at most twice the previous amount removed from the same pile.[5] • Wythoff's game is played by placing a chess queen on a large chessboard and, at each step, moving it (in the normal manner of a chess queen) towards the bottom side, left side, or bottom left corner of the board. This game may be equivalently described with two piles of tokens, from which each move may remove any number of tokens from one or both piles, removing the same amount from each pile when both piles are reduced. It has an optimal strategy involving Beatty sequences and the golden ratio.[6] Theory Subtraction games are generally impartial games, meaning that the set of moves available in a given position does not depend on the player whose turn it is to move. For such a game, the states can be divided up into ${\mathcal {P}}$-positions (positions in which the previous player, who just moved, is winning) and ${\mathcal {N}}$-positions (positions in which the next player to move is winning), and an optimal game playing strategy consists of moving to a ${\mathcal {P}}$-position whenever this is possible.
For instance, with the normal play convention and a single pile of tokens, every number in the subtraction set is an ${\mathcal {N}}$-position, because a player can win from such a number by moving to zero.[2] For normal-play subtraction games in which there are multiple numbers, in which each move reduces only one of these numbers, and in which the reductions that are possible from a given number depend only on that number and not on the rest of the game state, the Sprague–Grundy theorem can be used to calculate a "nim value" of each number, a number representing an equivalent position in the game of nim, such that the value of the overall game state is the nim-sum of its nim-values. In this way, the optimal strategy for the overall game can be reduced to the calculation of nim-values for a simplified set of game positions, those in which there is only a single number.[7] The nim-values are zero for ${\mathcal {P}}$-positions, and nonzero for ${\mathcal {N}}$-positions; according to a theorem of Tom Ferguson, the single-number positions with nim-value one are exactly the numbers obtained by adding the smallest value in the subtraction set to a ${\mathcal {P}}$-position. Ferguson's result leads to an optimal strategy in multi-pile misère subtraction games, with only a small amount of change from the normal play strategy.[8] For a subtraction game with a single pile of tokens and a fixed (but possibly infinite) subtraction set, if the subtraction set has arbitrarily large gaps between its members, then the set of ${\mathcal {P}}$-positions of the game is necessarily infinite.[9] For every subtraction game with a finite subtraction set, the nim-values are bounded and both the partition into ${\mathcal {P}}$-positions and ${\mathcal {N}}$-positions and the sequence of nim-values are eventually periodic. The period may be significantly larger than the maximum value $x$ in the subtraction set, but is at most $2^{x}$.[10] However, there exist infinite subtraction sets that produce bounded nim-values but an aperiodic sequence of these values.[11] Complexity For subtraction games with a fixed (but possibly infinite) subtraction set, such as subtract a square, the partition into P-positions and N-positions of the numbers up to a given value $n$ may be computed in time $O(n\log ^{2}n)$. The nim-values of all numbers up to $n$ may be computed in time $O(\min(ns,nm\log ^{2}n))$ where $s$ denotes the size of the subtraction set (up to $n$) and $m$ denotes the largest nim-value occurring in this computation.[12] For generalizations of subtraction games, played on vectors of natural numbers with a subtraction set whose vectors can have positive as well as negative coefficients, it is an undecidable problem to determine whether two such games have the same P-positions and N-positions.[13] See also • Grundy's game and octal games, generalizations of subtraction games in which a move may split a pile of tokens in two Notes 1. Golomb (1966). 2. Berlekamp, Conway & Guy (2001), "Subtraction games", pp. 83–86. 3. Bouton (1901–1902); Golomb (1966); Berlekamp, Conway & Guy (2001), "Green hackenbush, the game of nim, and nimbers", pp. 40–42. 4. Golomb (1966); Eppstein (2018) 5. Whinihan (1963); Larsson & Rubinstein-Salzedo (2016) 6. Wythoff (1907); Coxeter (1953) 7. Golomb (1966); Berlekamp, Conway & Guy (2001), "Games with heaps", p. 82. 8. Ferguson (1974), p. 164; Berlekamp, Conway & Guy (2001), "Ferguson's pairing property", p. 86. 9. Golomb (1966), Theorem 4.1, p. 451. 10. Golomb (1966), Example (a), p. 
454; Althöfer & Bültermann (1995) 11. Larsson & Fox (2015). 12. Eppstein (2018). 13. Larsson & Wästlund (2013). References • Althöfer, Ingo; Bültermann, Jörg (1995), "Superlinear period lengths in some subtraction games", Theoretical Computer Science, 148 (1): 111–119, doi:10.1016/0304-3975(95)00019-S, MR 1347670 • Berlekamp, Elwyn R.; Conway, John H.; Guy, Richard K. (2001), Winning Ways for your Mathematical Plays, vol. 1 (2nd ed.), A K Peters • Bouton, Charles L. (1901–1902), "Nim, a game with a complete mathematical theory", Annals of Mathematics, Second Series, 3 (1/4): 35–39, doi:10.2307/1967631, JSTOR 1967631 • Coxeter, H. S. M. (1953), "The golden section, phyllotaxis, and Wythoff's game", Scripta Mathematica, 19: 135–143, MR 0057548 • Eppstein, David (2018), "Faster evaluation of subtraction games", in Ito, Hiro; Leonardi, Stefano; Pagli, Linda; Prencipe, Giuseppe (eds.), Proc. 9th International Conference on Fun with Algorithms (FUN 2018), Leibniz International Proceedings in Informatics (LIPIcs), vol. 100, Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, pp. 20:1–20:12, doi:10.4230/lipics.fun.2018.20 • Ferguson, T. S. (1974), "On sums of graph games with last player losing", International Journal of Game Theory, 3: 159–167, doi:10.1007/BF01763255, MR 0384169 • Golomb, Solomon W. (1966), "A mathematical investigation of games of "take-away"", Journal of Combinatorial Theory, 1: 443–458, doi:10.1016/S0021-9800(66)80016-9, MR 0209015 • Larsson, Urban; Fox, Nathan (2015), "An aperiodic subtraction game of nim-dimension two" (PDF), Journal of Integer Sequences, 18 (7), Article 15.7.4, arXiv:1503.05751, MR 3370791 • Larsson, Urban; Rubinstein-Salzedo, Simon (2016), "Grundy values of Fibonacci nim", International Journal of Game Theory, 45 (3): 617–625, arXiv:1410.0332, doi:10.1007/s00182-015-0473-y, MR 3538534 • Larsson, Urban; Wästlund, Johan (2013), "From heaps of matches to the limits of computability", Electronic Journal of Combinatorics, 20 (3): P41:1–P41:12, arXiv:1202.0664, MR 3118949 • Whinihan, Michael J. (1963), "Fibonacci nim" (PDF), Fibonacci Quarterly, 1 (4): 9–13 • Wythoff, W. A. (1907), "A modification of the game of nim", Nieuw Archief voor Wiskunde, 7 (2): 199–202
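The classification of positions described in the Theory section can be computed directly for a single pile and any fixed finite subtraction set. The following sketch uses the subtraction set {1, 2} purely as an illustration; its output exhibits the eventual periodicity discussed above, here with period 3.

```python
def p_positions(subtraction_set, limit):
    """Classify positions 0..limit of a single-pile subtraction game
    under the normal play convention, returning the P-positions."""
    is_N = [False] * (limit + 1)
    for n in range(limit + 1):
        # n is an N-position iff some allowed move reaches a P-position
        is_N[n] = any(s <= n and not is_N[n - s] for s in subtraction_set)
    return [n for n in range(limit + 1) if not is_N[n]]

# With subtraction set {1, 2}, the P-positions are exactly the multiples of 3:
print(p_positions({1, 2}, 20))   # [0, 3, 6, 9, 12, 15, 18]
```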
Gaussian logarithm In mathematics, addition and subtraction logarithms or Gaussian logarithms can be used to find the logarithms of the sum and difference of a pair of values whose logarithms are known, without knowing the values themselves.[1] Their mathematical foundations trace back to Zecchini Leonelli[2][3] and Carl Friedrich Gauss[1][4][5] in the early 1800s. The operations of addition and subtraction can be calculated by the formulas $\log _{b}(|X|+|Y|)=x+s_{b}(y-x)$ and $\log _{b}(||X|-|Y||)=x+d_{b}(y-x),$ where $x=\log _{b}(X)$, $y=\log _{b}(Y)$, the "sum" function is defined by $s_{b}(z)=\log _{b}(1+b^{z})$, and the "difference" function by $d_{b}(z)=\log _{b}(|1-b^{z}|)$. The functions $s_{b}(z)$ and $d_{b}(z)$ are also known as Gaussian logarithms. For natural logarithms with $b=e$ the following identities with hyperbolic functions exist: $s_{e}(z)=\ln 2+{\frac {z}{2}}+\ln \left(\cosh {\frac {z}{2}}\right)$ $d_{e}(z)=\ln 2+{\frac {z}{2}}+\ln \left|\sinh {\frac {z}{2}}\right|$ This shows that $s_{e}$ has a Taylor expansion in which all but the first term are rational and all odd terms except the linear one are zero. The simplification of multiplication, division, roots, and powers is counterbalanced by the cost of evaluating these functions for addition and subtraction. See also • Softplus operation in neural networks • Zech's logarithm • Logarithm table • Logarithmic number system (LNS) References 1. "Logarithm: Addition and Subtraction, or Gaussian Logarithms". Encyclopædia Britannica Eleventh Edition. 2. Leonelli, Zecchini (1803) [1802]. Supplément logarithmique. Théorie des logarithmes additionels et diductifs (in French). Bordeaux: Brossier. (NB. 1802/1803 is the year XI. in the French Republican Calendar.) 3. Leonhardi, Gottfried Wilhelm (1806). LEONELLIs logarithmische Supplemente, als ein Beitrag, Mängel der gewöhnlichen Logarithmentafeln zu ersetzen. Aus dem Französischen nebst einigen Zusätzen von GOTTFRIED WILHELM LEONHARDI, Souslieutenant beim kurfürstlichen sächsischen Feldartilleriecorps (in German). Dresden: Walther'sche Hofbuchhandlung. (NB. An expanded translation of Zecchini Leonelli's Supplément logarithmique. Théorie des logarithmes additionels et diductifs.) 4. Gauß, Johann Carl Friedrich (1808-02-12). "LEONELLI, Logarithmische Supplemente". Allgemeine Literaturzeitung (in German). Halle-Leipzig (45): 353–356. 5. Dunnington, Guy Waldo (2004) [1955]. Gray, Jeremy; Dohse, Fritz-Egbert (eds.). Carl Friedrich Gauss - Titan of Science. Spectrum series (revised ed.). Mathematical Association of America (MAA). ISBN 978-0-88385-547-8. Further reading • Stark, Bruce D. (1997) [1995]. Stark Tables for Clearing the Lunar Distance and Finding Universal Time by Sextant Observation Including a Convenient Way to Sharpen Celestial Navigation Skills While On Land (2 ed.). Starpath Publications. ISBN 978-0914025214. Retrieved 2015-12-02. (NB. Contains a table of Gaussian logarithms lg(1+10^{−x}).) • Kalivoda, Jan (2003-07-30). "Bruce Stark - Tables for Clearing the Lunar Distance and Finding G.M.T. by Sextant Observation (1995, 1997)" (Review). Prague, Czech Republic. Archived from the original on 2004-01-12. Retrieved 2015-12-02. […] Bruce Stark […] uses the Gaussian logarithms that make possible to remain in world of logarithms all the time of calculation and transform an addition of natural numbers to the addition and subtraction of their common and special logarithmic values by use of a special table. 
It is much easier than to convert logs to their natural values, to add them and again to convert them to logs. Moreover, Gaussian logs yield greater accuracy of result than the traditional computing method and help 5-digit log values to be sufficiently accurate for this method. […] The use of "Gaussians" by Bruce is original in the field of navigation. I don't know another example of using them by seamen or aviators - with the exception of Soviet navigators, which had Gaussians in their standard table sets up to ca. 1960. […] haversine that was not allowed to the Soviet navigational practice. […] Gaussians coact peacefully with haversines in rationalizing the LD procedure […] • Kremer, Hermann (2002-08-29). "Gauss'sche Additionslogarithmen feiern 200. Geburtstag". de.sci.mathematik (in German). Archived from the original on 2018-07-07. Retrieved 2018-07-07. • Kühn, Klaus (2008). "C. F. Gauß und die Logarithmen" (PDF) (in German). Alling-Biburg, Germany. Archived (PDF) from the original on 2018-07-14. Retrieved 2018-07-14.
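As a concrete illustration of the sum and difference functions defined above, with natural logarithms (b = e), the following sketch recovers ln(X + Y) and ln(X − Y) from x = ln X and y = ln Y alone. It is the same computation performed by the softplus/log-sum-exp operations mentioned under See also.

```python
import math

def log_add(x, y):
    """Given x = ln(X) and y = ln(Y), return ln(X + Y) without forming
    X or Y: ln(X + Y) = x + s_e(y - x), where s_e(z) = ln(1 + e^z)."""
    if y > x:                    # keep the exponent non-positive for stability
        x, y = y, x
    return x + math.log1p(math.exp(y - x))

def log_sub(x, y):
    """Given x = ln(X) and y = ln(Y) with X > Y, return ln(X - Y):
    ln(X - Y) = x + d_e(y - x), where d_e(z) = ln|1 - e^z|."""
    return x + math.log1p(-math.exp(y - x))

x, y = math.log(100.0), math.log(20.0)
print(math.exp(log_add(x, y)))   # 120.0 (up to rounding)
print(math.exp(log_sub(x, y)))   # 80.0 (up to rounding)
```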
Subtraction Subtraction (which is signified by the minus sign −) is one of the four arithmetic operations, along with addition, multiplication and division. Subtraction is an operation that represents removal of objects from a collection.[1] For example, 5 − 2 means 5 peaches with 2 taken away, resulting in a total of 3 peaches. Therefore, the difference of 5 and 2 is 3; that is, 5 − 2 = 3. While primarily associated with natural numbers in arithmetic, subtraction can also represent removing or decreasing physical and abstract quantities using different kinds of objects including negative numbers, fractions, irrational numbers, vectors, decimals, functions, and matrices.[2] In a sense, subtraction is the inverse of addition. That is, c = a − b if and only if c + b = a. In words: the difference of two numbers is the number that gives the first one when added to the second one. Subtraction follows several important patterns. It is anticommutative, meaning that changing the order changes the sign of the answer. It is also not associative, meaning that when one subtracts more than two numbers, the order in which subtraction is performed matters. Because 0 is the additive identity, subtraction of it does not change a number. Subtraction also obeys predictable rules concerning related operations, such as addition and multiplication. All of these rules can be proven, starting with the subtraction of integers and generalizing up through the real numbers and beyond. General binary operations that follow these patterns are studied in abstract algebra. Notation and terminology Subtraction is usually written using the minus sign "−" between the terms; that is, in infix notation. The result is expressed with an equals sign.
For example, $2-1=1$ (pronounced as "two minus one equals one") $4-2=2$ (pronounced as "four minus two equals two") $6-3=3$ (pronounced as "six minus three equals three") $4-6=-2$ (pronounced as "four minus six equals negative two") There are also situations where subtraction is "understood", even though no symbol appears: • A column of two numbers, with the lower number in red, usually indicates that the lower number in the column is to be subtracted, with the difference written below, under a line. This is most common in accounting. Formally, the number being subtracted is known as the subtrahend,[3][4] while the number it is subtracted from is the minuend.[3][4] The result is the difference.[3][4][2][5] That is, ${\rm {minuend}}-{\rm {subtrahend}}={\rm {difference}}$. All of this terminology derives from Latin. "Subtraction" is an English word derived from the Latin verb subtrahere, which in turn is a compound of sub "from under" and trahere "to pull". Thus, to subtract is to draw from below, or to take away.[6] Using the gerundive suffix -nd results in "subtrahend", "thing to be subtracted".[note 1] Likewise, from minuere "to reduce or diminish", one gets "minuend", which means "thing to be diminished". Of integers and real numbers Integers Imagine a line segment of length b with the left end labeled a and the right end labeled c. Starting from a, it takes b steps to the right to reach c. This movement to the right is modeled mathematically by addition: a + b = c. From c, it takes b steps to the left to get back to a. This movement to the left is modeled by subtraction: c − b = a. Now consider a line segment labeled with the numbers 1, 2, and 3. From position 3, it takes no steps to the left to stay at 3, so 3 − 0 = 3. It takes 2 steps to the left to get to position 1, so 3 − 2 = 1. This picture is inadequate to describe what would happen after going 3 steps to the left of position 3. To represent such an operation, the line must be extended. To subtract arbitrary natural numbers, one begins with a line containing every natural number (0, 1, 2, 3, 4, 5, 6, ...). From 3, it takes 3 steps to the left to get to 0, so 3 − 3 = 0. But 3 − 4 is still invalid, since it again leaves the line. The natural numbers are not a useful context for subtraction. The solution is to consider the integer number line (..., −3, −2, −1, 0, 1, 2, 3, ...). This way, it takes 4 steps to the left from 3 to get to −1: 3 − 4 = −1. Natural numbers Subtraction of natural numbers is not closed: the difference is not a natural number unless the minuend is greater than or equal to the subtrahend. For example, 26 cannot be subtracted from 11 to give a natural number. Such a case uses one of two approaches: 1. Conclude that 26 cannot be subtracted from 11; subtraction becomes a partial function. 2. Give the answer as an integer representing a negative number, so the result of subtracting 26 from 11 is −15. Real numbers The field of real numbers can be defined by specifying only two binary operations, addition and multiplication, together with unary operations yielding additive and multiplicative inverses. The subtraction of a real number (the subtrahend) from another (the minuend) can then be defined as the addition of the minuend and the additive inverse of the subtrahend. For example, 3 − π = 3 + (−π). Alternatively, instead of requiring these unary operations, the binary operations of subtraction and division can be taken as basic.
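The two approaches to this non-closure can be made concrete in code. In the minimal sketch below (the function name is invented for illustration), natural-number subtraction is the partial function of approach 1, while the built-in integer subtraction realizes approach 2:

```python
def checked_sub(minuend, subtrahend):
    """Natural-number subtraction as a partial function: defined only
    when minuend >= subtrahend (approach 1 above)."""
    if minuend < subtrahend:
        return None            # 11 - 26 is undefined over the naturals
    return minuend - subtrahend

print(checked_sub(11, 26))     # None
print(11 - 26)                 # -15: approach 2, extending to the integers
```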
Properties Anti-commutativity Subtraction is anti-commutative, meaning that if one reverses the terms in a difference left-to-right, the result is the negative of the original result. Symbolically, if a and b are any two numbers, then a − b = −(b − a). Non-associativity Subtraction is non-associative, which comes up when one tries to define repeated subtraction. In general, the expression "a − b − c" can be defined to mean either (a − b) − c or a − (b − c), but these two possibilities lead to different answers. To resolve this issue, one must establish an order of operations, with different orders yielding different results. Predecessor In the context of integers, subtraction of one also plays a special role: for any integer a, the integer (a − 1) is the largest integer less than a, also known as the predecessor of a. Units of measurement When subtracting two numbers with units of measurement such as kilograms or pounds, they must have the same unit. In most cases, the difference will have the same unit as the original numbers. Percentages Changes in percentages can be reported in at least two forms, percentage change and percentage point change. Percentage change represents the relative change between the two quantities as a percentage, while percentage point change is simply the number obtained by subtracting the two percentages.[7][8][9] As an example, suppose that 30% of widgets made in a factory are defective. Six months later, 20% of widgets are defective. The percentage change is (20% − 30%)/30% = −1/3 = −33 1/3 %, while the percentage point change is −10 percentage points. In computing The method of complements is a technique used to subtract one number from another using only the addition of positive numbers. This method was commonly used in mechanical calculators, and is still used in modern computers. The ones' complement of a binary digit is its inverse: the complement of 0 is 1, and the complement of 1 is 0. To subtract a binary number y (the subtrahend) from another number x (the minuend), the ones' complement of y is added to x and one is added to the sum. The leading digit "1" of the result is then discarded. The method of complements is especially useful in binary (radix 2), since the ones' complement is very easily obtained by inverting each bit (changing "0" to "1" and vice versa), and adding 1 to get the two's complement can be done by simulating a carry into the least significant bit. For example:

  01100100 (x, equals decimal 100)
− 00010110 (y, equals decimal 22)

becomes the sum:

  01100100 (x)
+ 11101001 (ones' complement of y)
+        1 (to get the two's complement)
—————————
 101001110

Dropping the initial "1" gives the answer: 01001110 (equals decimal 78). The teaching of subtraction in schools Methods used to teach subtraction to elementary school students vary from country to country, and within a country, different methods are adopted at different times. In what is known in the United States as traditional mathematics, a specific process is taught to students at the end of the 1st year (or during the 2nd year) for use with multi-digit whole numbers, and is extended in either the fourth or fifth grade to include decimal representations of fractional numbers. In America Almost all American schools currently teach a method of subtraction using borrowing or regrouping (the decomposition algorithm) and a system of markings called crutches.[10][11] Although a method of borrowing had been known and published in textbooks previously, the use of crutches in American schools spread after William A.
Brownell published a study claiming that crutches were beneficial to students using this method.[12] This system caught on rapidly, displacing the other methods of subtraction in use in America at that time. In Europe Some European schools employ a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are also crutches (markings to aid memory), which vary by country.[13][14] Comparing the two main methods Both these methods break up the subtraction as a process of one-digit subtractions by place value. Starting with the least significant digit, a subtraction of the subtrahend $s_{j}s_{j-1}\ldots s_{1}$ from the minuend $m_{k}m_{k-1}\ldots m_{1}$, where each $s_{i}$ and $m_{i}$ is a digit, proceeds by writing down $m_{1}-s_{1}$, $m_{2}-s_{2}$, and so forth, as long as $s_{i}$ does not exceed $m_{i}$. Otherwise, $m_{i}$ is increased by 10 and some other digit is modified to correct for this increase. The American method corrects by attempting to decrease the minuend digit $m_{i+1}$ by one (or continuing the borrow leftwards until there is a non-zero digit from which to borrow). The European method corrects by increasing the subtrahend digit $s_{i+1}$ by one. Example: 704 − 512. ${\begin{array}{rrrr}&\color {Red}-1\\&C&D&U\\&7&0&4\\&5&1&2\\\hline &1&9&2\\\end{array}}{\begin{array}{l}{\color {Red}\longleftarrow {\rm {carry}}}\\\\\longleftarrow \;{\rm {Minuend}}\\\longleftarrow \;{\rm {Subtrahend}}\\\longleftarrow {\rm {Rest\;or\;Difference}}\\\end{array}}$ The minuend is 704, the subtrahend is 512. The minuend digits are $m_{3}=7$, $m_{2}=0$ and $m_{1}=4$. The subtrahend digits are $s_{3}=5$, $s_{2}=1$ and $s_{1}=2$. Beginning at the ones place, 4 is not less than 2, so the difference 2 is written down in the result's ones place. In the tens place, 0 is less than 1, so the 0 is increased by 10, and the difference with 1, which is 9, is written down in the tens place. The American method corrects for the increase of ten by reducing the digit in the minuend's hundreds place by one. That is, the 7 is struck through and replaced by a 6. The subtraction then proceeds in the hundreds place, where 6 is not less than 5, so the difference is written down in the result's hundreds place. We are now done; the result is 192. The Austrian method does not reduce the 7 to 6. Rather, it increases the subtrahend hundreds digit by one. A small mark is made near or below this digit (depending on the school). Then the subtraction proceeds by asking what number, when increased by 1 and with 5 added to it, makes 7. The answer is 1, and it is written down in the result's hundreds place. There is an additional subtlety: the student always employs a mental subtraction table in the American method, while the Austrian method often encourages the student to mentally use the addition table in reverse. In the example above, rather than adding 1 to 5, getting 6, and subtracting that from 7, the student is asked to consider what number, when increased by 1 and with 5 added to it, makes 7. Subtraction by hand Austrian method Example: • 1 + ... = 3 • The difference is written under the line. • 9 + ... = 5 The required sum (5) is too small. • So, we add 10 to it and put a 1 under the next higher place in the subtrahend. • 9 + ... = 15 Now we can find the difference as before. • (4 + 1) + ... = 7 • The difference is written under the line. • The total difference. Subtraction from left to right Example: • 7 − 4 = 3 This result is only penciled in.
• Because the next digit of the minuend is smaller than the subtrahend, we subtract one from our penciled-in number and mentally add ten to the next. • 15 − 9 = 6 • Because the next digit in the minuend is not smaller than the subtrahend, we keep this number. • 3 − 1 = 2 American method In this method, each digit of the subtrahend is subtracted from the digit above it, starting from right to left. If the top number is too small to subtract the bottom number from it, we add 10 to it; this 10 is "borrowed" from the top digit to the left, which we subtract 1 from. Then we move on to subtracting the next digit and borrowing as needed, until every digit has been subtracted. Example: • 3 − 1 = ... • We write the difference under the line. • 5 − 9 = ... The minuend (5) is too small! • So, we add 10 to it. The 10 is "borrowed" from the digit on the left, which goes down by 1. • 15 − 9 = ... Now the subtraction works, and we write the difference under the line. • 6 − 4 = ... • We write the difference under the line. • The total difference. Trade first A variant of the American method where all borrowing is done before all subtraction.[15] Example: • 1 − 3 = not possible. We add a 10 to the 1. Because the 10 is "borrowed" from the nearby 5, the 5 is lowered by 1. • 4 − 9 = not possible. So we proceed as in step 1. • Working from right to left: 11 − 3 = 8 • 14 − 9 = 5 • 6 − 4 = 2 Partial differences The partial differences method is different from other vertical subtraction methods because no borrowing or carrying takes place. Instead, one places plus or minus signs depending on whether the minuend is greater or smaller than the subtrahend. The sum of the partial differences is the total difference.[16] Example: • The smaller number is subtracted from the greater: 700 − 400 = 300 Because the minuend is greater than the subtrahend, this difference has a plus sign. • The smaller number is subtracted from the greater: 90 − 50 = 40 Because the minuend is smaller than the subtrahend, this difference has a minus sign. • The smaller number is subtracted from the greater: 3 − 1 = 2 Because the minuend is greater than the subtrahend, this difference has a plus sign. • +300 − 40 + 2 = 262 Counting up Instead of finding the difference digit by digit, one can count up the numbers between the subtrahend and the minuend.[17] Example: 1234 − 567 can be found by the following steps: • 567 + 3 = 570 • 570 + 30 = 600 • 600 + 400 = 1000 • 1000 + 234 = 1234 Add up the value from each step to get the total difference: 3 + 30 + 400 + 234 = 667. Breaking up the subtraction Another method that is useful for mental arithmetic is to split up the subtraction into small steps.[18] Example: 1234 − 567 can be solved in the following way: • 1234 − 500 = 734 • 734 − 60 = 674 • 674 − 7 = 667 Same change The same change method uses the fact that adding or subtracting the same number from the minuend and subtrahend does not change the answer. One simply adds the amount needed to get zeros in the subtrahend.[19] Example: 1234 − 567 can be solved as follows: • 1234 − 567 = 1237 − 570 = 1267 − 600 = 667 See also • Absolute difference • Decrement • Elementary arithmetic • Method of complements • Negative number • Plus and minus signs Notes 1. "Subtrahend" is shortened by the inflectional Latin suffix -us, e.g. remaining un-declined as in numerus subtrahendus "the number to be subtracted". References 1. "What is to Subtract?". SplashLearn. Retrieved 2022-12-13. 2. Weisstein, Eric W. "Subtraction". mathworld.wolfram.com.
See also • Absolute difference • Decrement • Elementary arithmetic • Method of complements • Negative number • Plus and minus signs Notes 1. "Subtrahend" is the Latin gerundive numerus subtrahendus, "the number to be subtracted", shortened by dropping the inflectional suffix -us and left undeclined. References 1. "What is to Subtract?". SplashLearn. Retrieved 2022-12-13. 2. Weisstein, Eric W. "Subtraction". mathworld.wolfram.com. Retrieved 2020-08-26. 3. Schmid, Hermann (1974). Decimal Computation (1 ed.). Binghamton, NY: John Wiley & Sons. ISBN 978-0-471-76180-8. 4. Schmid, Hermann (1983) [1974]. Decimal Computation (reprint ed.). Malabar, FL: Robert E. Krieger Publishing Company. ISBN 978-0-89874-318-0. 5. "Subtraction". www.mathsisfun.com. Retrieved 2020-08-26. 6. "Subtraction". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) 7. Paul E. Peterson, Michael Henderson, Martin R. West (2014). Teachers Versus the Public: What Americans Think about Schools and How to Fix Them. Brookings Institution Press, p. 163. 8. Janet Kolodzy (2006). Convergence Journalism: Writing and Reporting across the News Media. Rowman & Littlefield Publishers, p. 180. 9. David Gillborn (2008). Racism and Education: Coincidence Or Conspiracy? Routledge, p. 46. 10. Klapper, Paul (1916). The Teaching of Arithmetic: A Manual for Teachers. pp. 80–. Retrieved 2016-03-11. 11. Susan Ross and Mary Pratt-Cotter. 2000. "Subtraction in the United States: An Historical Perspective," The Mathematics Educator 8(1):4–11. p. 8: "This new version of the decomposition algorithm [i.e., using Brownell's crutch] has so completely dominated the field that it is rare to see any other algorithm used to teach subtraction today [in America]." 12. Ross, Susan C.; Pratt-Cotter, Mary (1999). "Subtraction From a Historical Perspective". School Science and Mathematics. 99 (7): 389–93. doi:10.1111/j.1949-8594.1999.tb17499.x. 13. Klapper 1916, pp. 177–. 14. David Eugene Smith (1913). The Teaching of Arithmetic. Ginn. pp. 77–. Retrieved 2016-03-11. 15. The Many Ways of Arithmetic in UCSMP Everyday Mathematics. Archived 2014-02-25 at the Wayback Machine. Subtraction: Trade First. 16. Partial-Differences Subtraction. Archived 2014-06-23 at the Wayback Machine; The Many Ways of Arithmetic in UCSMP Everyday Mathematics. Archived 2014-02-25 at the Wayback Machine. Subtraction: Partial Differences. 17. The Many Ways of Arithmetic in UCSMP Everyday Mathematics. Archived 2014-02-25 at the Wayback Machine. Subtraction: Counting Up. 18. The Many Ways of Arithmetic in UCSMP Everyday Mathematics. Archived 2014-02-25 at the Wayback Machine. Subtraction: Left to Right Subtraction. 19. The Many Ways of Arithmetic in UCSMP Everyday Mathematics. Subtraction: Same Change Rule. Bibliography • Brownell, W.A. (1939). Learning as Reorganization: An Experimental Study in Third-Grade Arithmetic. Duke University Press. • Ross, Susan; Pratt-Cotter, Mary. "Subtraction in the United States: An Historical Perspective," The Mathematics Educator, Vol. 8, No. 1 (original publication) and Vol. 10, No. 1 (reprint). PDF. External links
• "Subtraction", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Printable Worksheets: Subtraction Worksheets, One Digit Subtraction, Two Digit Subtraction, Four Digit Subtraction, and More Subtraction Worksheets • Subtraction Game at cut-the-knot • Subtraction on a Japanese abacus selected from Abacus: Mystery of the Bead Elementary arithmetic + Addition (+) − Subtraction (−) × Multiplication (× or ·) ÷ Division (÷ or ∕) Hyperoperations Primary • Successor (0) • Addition (1) • Multiplication (2) • Exponentiation (3) • Tetration (4) • Pentation (5) Inverse for left argument • Predecessor (0) • Subtraction (1) • Division (2) • Root extraction (3) • Super-root (4) Inverse for right argument • Predecessor (0) • Subtraction (1) • Division (2) • Logarithm (3) • Super-logarithm (4) Related articles • Ackermann function • Conway chained arrow notation • Grzegorczyk hierarchy • Knuth's up-arrow notation • Steinhaus–Moser notation Authority control: National • France • BnF data • Germany • Israel • United States • Czech Republic
Successive over-relaxation In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process. It was devised simultaneously by David M. Young Jr. and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel. An example is the method of Lewis Fry Richardson, and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, requiring some expertise to ensure convergence to the solution, which made them inapplicable to programming on digital computers. These aspects are discussed in the thesis of David M. Young Jr.[1] Formulation Given a square system of n linear equations with unknown x: $A\mathbf {x} =\mathbf {b} $ where: $A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\end{bmatrix}},\qquad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\qquad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{n}\end{bmatrix}}.$ Then A can be decomposed into a diagonal component D, and strictly lower and upper triangular components L and U: $A=D+L+U,$ where $D={\begin{bmatrix}a_{11}&0&\cdots &0\\0&a_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &a_{nn}\end{bmatrix}},\quad L={\begin{bmatrix}0&0&\cdots &0\\a_{21}&0&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &0\end{bmatrix}},\quad U={\begin{bmatrix}0&a_{12}&\cdots &a_{1n}\\0&0&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &0\end{bmatrix}}.$ The system of linear equations may be rewritten as: $(D+\omega L)\mathbf {x} =\omega \mathbf {b} -[\omega U+(\omega -1)D]\mathbf {x} $ for a constant ω, called the relaxation factor (the name over-relaxation refers to the choice ω > 1). The method of successive over-relaxation is an iterative technique that solves the left-hand side of this expression for x, using the previous value for x on the right-hand side. Analytically, this may be written as: $\mathbf {x} ^{(k+1)}=(D+\omega L)^{-1}{\big (}\omega \mathbf {b} -[\omega U+(\omega -1)D]\mathbf {x} ^{(k)}{\big )}=L_{\omega }\mathbf {x} ^{(k)}+\mathbf {c} ,$ where $\mathbf {x} ^{(k)}$ is the kth approximation or iterate of $\mathbf {x} $ and $\mathbf {x} ^{(k+1)}$ is the next, (k + 1)th, iterate of $\mathbf {x} $. However, by taking advantage of the triangular form of (D+ωL), the elements of x(k+1) can be computed sequentially using forward substitution: $x_{i}^{(k+1)}=(1-\omega )x_{i}^{(k)}+{\frac {\omega }{a_{ii}}}\left(b_{i}-\sum _{j<i}a_{ij}x_{j}^{(k+1)}-\sum _{j>i}a_{ij}x_{j}^{(k)}\right),\quad i=1,2,\ldots ,n.$ Convergence The choice of relaxation factor ω is not necessarily easy, and depends upon the properties of the coefficient matrix. In 1947, Ostrowski proved that if $A$ is symmetric and positive-definite then $\rho (L_{\omega })<1$ for $0<\omega <2$. Thus, convergence of the iteration process follows, but we are generally interested in faster convergence rather than just convergence. Convergence Rate The convergence rate for the SOR method can be analytically derived.
One needs to assume the following:[2][3] • the relaxation parameter is appropriate: $\omega \in (0,2)$ • Jacobi's iteration matrix $C_{\text{Jac}}:=I-D^{-1}A$ has only real eigenvalues • Jacobi's method is convergent: $\mu :=\rho (C_{\text{Jac}})<1$ • the matrix decomposition $A=D+L+U$ satisfies the property that $\operatorname {det} (\lambda D+zL+{\tfrac {1}{z}}U)=\operatorname {det} (\lambda D+L+U)$ for any $z\in \mathbb {C} \setminus \{0\}$ and $\lambda \in \mathbb {C} $. Then the convergence rate can be expressed as $\rho (C_{\omega })={\begin{cases}{\frac {1}{4}}\left(\omega \mu +{\sqrt {\omega ^{2}\mu ^{2}-4(\omega -1)}}\right)^{2}\,,&0<\omega \leq \omega _{\text{opt}}\\\omega -1\,,&\omega _{\text{opt}}<\omega <2\end{cases}}$ where the optimal relaxation parameter is given by $\omega _{\text{opt}}:=1+\left({\frac {\mu }{1+{\sqrt {1-\mu ^{2}}}}}\right)^{2}=1+{\frac {\mu ^{2}}{4}}+O(\mu ^{3})\,.$ In particular, for $\omega =1$ (Gauss–Seidel) it holds that $\rho (C_{\omega })=\mu ^{2}=\rho (C_{\text{Jac}})^{2}$. For the optimal $\omega $ we get $\rho (C_{\omega })={\frac {1-{\sqrt {1-\mu ^{2}}}}{1+{\sqrt {1-\mu ^{2}}}}}={\frac {\mu ^{2}}{4}}+O(\mu ^{3})$, which shows SOR is roughly four times more efficient than Gauss–Seidel. The last assumption is satisfied for tridiagonal matrices since $Z(\lambda D+L+U)Z^{-1}=\lambda D+zL+{\tfrac {1}{z}}U$ for diagonal $Z$ with entries $Z_{ii}=z^{i-1}$ and $\operatorname {det} (\lambda D+L+U)=\operatorname {det} (Z(\lambda D+L+U)Z^{-1})$.
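As a numerical sanity check of this formula, the predicted $\rho (C_{\omega })$ and $\omega _{\text{opt}}$ can be compared against directly computed spectral radii. The following Python snippet is our own illustration, not drawn from the cited references; the test matrix (a one-dimensional discrete Laplacian) is our choice, picked because it is tridiagonal and symmetric positive-definite, so the assumptions above hold:

import numpy as np

n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal 1-D Laplacian

D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

def rho(M):
    # Spectral radius: largest eigenvalue magnitude.
    return max(abs(np.linalg.eigvals(M)))

mu = rho(np.eye(n) - np.linalg.inv(D) @ A)            # Jacobi spectral radius
omega_opt = 1 + (mu / (1 + np.sqrt(1 - mu**2)))**2

def rho_sor(omega):
    # SOR iteration matrix: C_omega = (D + omega L)^{-1} ((1 - omega) D - omega U).
    return rho(np.linalg.inv(D + omega * L) @ ((1 - omega) * D - omega * U))

def rho_predicted(omega):
    if omega <= omega_opt:
        return 0.25 * (omega * mu + np.sqrt(omega**2 * mu**2 - 4 * (omega - 1)))**2
    return omega - 1

print("mu =", mu, "omega_opt =", omega_opt)
for omega in (1.0, 1.2, 1.5, 1.8):
    print(omega, rho_sor(omega), rho_predicted(omega))  # the two columns agree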
Algorithm Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed, and vector indexing is omitted. The algorithm goes as follows:

Inputs: A, b, ω
Output: φ

Choose an initial guess φ to the solution
repeat until convergence
    for i from 1 until n do
        set σ to 0
        for j from 1 until n do
            if j ≠ i then
                set σ to σ + aij φj
            end if
        end (j-loop)
        set φi to (1 − ω)φi + ω(bi − σ) / aii
    end (i-loop)
    check if convergence is reached
end (repeat)

Note $(1-\omega )\phi _{i}+{\frac {\omega }{a_{ii}}}(b_{i}-\sigma )$ can also be written $\phi _{i}+\omega \left({\frac {b_{i}-\sigma }{a_{ii}}}-\phi _{i}\right)$, thus saving one multiplication in each iteration of the outer for-loop. Example We are presented with the linear system ${\begin{aligned}4x_{1}-x_{2}-6x_{3}+0x_{4}&=2,\\-5x_{1}-4x_{2}+10x_{3}+8x_{4}&=21,\\0x_{1}+9x_{2}+4x_{3}-2x_{4}&=-12,\\1x_{1}+0x_{2}-7x_{3}+5x_{4}&=-6.\end{aligned}}$ To solve the equations, we choose a relaxation factor $\omega =0.5$ and an initial guess vector $\phi =(0,0,0,0)$. According to the successive over-relaxation algorithm, the following table is obtained; the iterates approach the exact solution, (3, −2, 2, 1), reaching it (to the displayed precision) after 38 steps.

Iteration   x1           x2           x3           x4
    1       0.25         −2.78125     1.6289062    0.5152344
    2       1.2490234    −2.2448974   1.9687712    0.9108547
    3       2.070478     −1.6696789   1.5904881    0.76172125
   ...      ...          ...          ...          ...
   37       2.9999998    −2.0         2.0          1.0
   38       3.0          −2.0         2.0          1.0

A simple implementation of the algorithm in Common Lisp is offered below.

;; Set the default floating-point format to "long-float" in order to
;; ensure correct operation on a wider range of numbers.
(setf *read-default-float-format* 'long-float)

(defparameter +MAXIMUM-NUMBER-OF-ITERATIONS+ 100
  "The number of iterations beyond which the algorithm should cease its
   operation, regardless of its current solution. A higher number of
   iterations might provide a more accurate result, but imposes higher
   performance requirements.")

(declaim (type (integer 0 *) +MAXIMUM-NUMBER-OF-ITERATIONS+))

(defun get-errors (computed-solution exact-solution)
  "For each component of the COMPUTED-SOLUTION vector, retrieves its
   error with respect to the expected EXACT-SOLUTION vector, returning a
   vector of error values.
   ---
   While both input vectors should be equal in size, this condition is
   not checked and the shortest of the twain determines the output
   vector's number of elements.
   ---
   The established formula is the following:
     Let resultVectorSize = min(computedSolution.length, exactSolution.length)
     Let resultVector = new vector of resultVectorSize
     For i from 0 to (resultVectorSize - 1)
       resultVector[i] = exactSolution[i] - computedSolution[i]
     Return resultVector"
  (declare (type (vector number *) computed-solution))
  (declare (type (vector number *) exact-solution))
  (map '(vector number *) #'- exact-solution computed-solution))

(defun is-convergent (errors &key (error-tolerance 0.001))
  "Checks whether the convergence is reached with respect to the ERRORS
   vector which registers the discrepancy betwixt the computed and the
   exact solution vector.
   ---
   The convergence is fulfilled if and only if each absolute error
   component is less than or equal to the ERROR-TOLERANCE, that is:
   For all e in ERRORS, it holds: abs(e) <= errorTolerance."
  (declare (type (vector number *) errors))
  (declare (type number error-tolerance))
  (flet ((error-is-acceptable (error)
           (declare (type number error))
           (<= (abs error) error-tolerance)))
    (every #'error-is-acceptable errors)))

(defun make-zero-vector (size)
  "Creates and returns a vector of the SIZE with all elements set to 0."
  (declare (type (integer 0 *) size))
  (make-array size :initial-element 0.0 :element-type 'number))

(defun successive-over-relaxation (A b omega
                                   &key (phi (make-zero-vector (length b)))
                                        (convergence-check
                                         #'(lambda (iteration phi)
                                             (declare (ignore phi))
                                             (>= iteration +MAXIMUM-NUMBER-OF-ITERATIONS+))))
  "Implements the successive over-relaxation (SOR) method, applied upon
   the linear equations defined by the matrix A and the right-hand side
   vector B, employing the relaxation factor OMEGA, returning the
   calculated solution vector.
   ---
   The first algorithm step, the choice of an initial guess PHI, is
   represented by the optional keyword parameter PHI, which defaults to
   a zero-vector of the same structure as B. If supplied, this vector
   will be destructively modified. In any case, the PHI vector
   constitutes the function's result value.
   ---
   The terminating condition is implemented by the CONVERGENCE-CHECK,
   an optional predicate
     lambda(iteration phi) => generalized-boolean
   which returns T, signifying the immediate termination, upon achieving
   convergence, or NIL, signaling continuant operation, otherwise. In
   its default configuration, the CONVERGENCE-CHECK simply abides the
   iteration's ascension to the ``+MAXIMUM-NUMBER-OF-ITERATIONS+'',
   ignoring the achieved accuracy of the vector PHI."
  (declare (type (array number (* *)) A))
  (declare (type (vector number *) b))
  (declare (type number omega))
  (declare (type (vector number *) phi))
  (declare (type (function ((integer 1 *) (vector number *)) *) convergence-check))
  (let ((n (array-dimension A 0)))
    (declare (type (integer 0 *) n))
    (loop for iteration from 1 by 1 do
      (loop for i from 0 below n by 1 do
        (let ((rho 0))
          (declare (type number rho))
          (loop for j from 0 below n by 1 do
            (when (/= j i)
              (let ((a[ij] (aref A i j))
                    (phi[j] (aref phi j)))
                (incf rho (* a[ij] phi[j])))))
          (setf (aref phi i)
                (+ (* (- 1 omega) (aref phi i))
                   (* (/ omega (aref A i i)) (- (aref b i) rho))))))
      (format T "~&~d. solution = ~a" iteration phi)
      ;; Check if convergence is reached.
      (when (funcall convergence-check iteration phi)
        (return))))
  (the (vector number *) phi))

;; Summon the function with the exemplary parameters.
(let ((A (make-array (list 4 4)
                     :initial-contents
                     '((  4 -1 -6  0 )
                       ( -5 -4 10  8 )
                       (  0  9  4 -2 )
                       (  1  0 -7  5 ))))
      (b (vector 2 21 -12 -6))
      (omega 0.5)
      (exact-solution (vector 3 -2 2 1)))
  (successive-over-relaxation A b omega
   :convergence-check
   #'(lambda (iteration phi)
       (declare (type (integer 0 *) iteration))
       (declare (type (vector number *) phi))
       (let ((errors (get-errors phi exact-solution)))
         (declare (type (vector number *) errors))
         (format T "~&~d. errors = ~a" iteration errors)
         (or (is-convergent errors :error-tolerance 0.0)
             (>= iteration +MAXIMUM-NUMBER-OF-ITERATIONS+))))))

A simple Python implementation of the pseudo-code provided above.

import numpy as np
from scipy import linalg

def sor_solver(A, b, omega, initial_guess, convergence_criteria):
    """
    This is an implementation of the pseudo-code provided above.
    Arguments:
        A: nxn numpy matrix.
        b: n dimensional numpy vector.
        omega: relaxation factor.
        initial_guess: An initial solution guess for the solver to start with.
        convergence_criteria: The maximum discrepancy acceptable to regard
            the current solution as fitting.
    Returns:
        phi: solution vector of dimension n.
    """
    step = 0
    # Copy the guess: slicing a NumPy array gives a view, and the in-place
    # updates below would otherwise silently overwrite the caller's array.
    phi = np.array(initial_guess, dtype=float)
    residual = linalg.norm(A @ phi - b)  # Initial residual
    while residual > convergence_criteria:
        for i in range(A.shape[0]):
            sigma = 0
            for j in range(A.shape[1]):
                if j != i:
                    sigma += A[i, j] * phi[j]
            phi[i] = (1 - omega) * phi[i] + (omega / A[i, i]) * (b[i] - sigma)
        residual = linalg.norm(A @ phi - b)
        step += 1
        print("Step {} Residual: {:10.6g}".format(step, residual))
    return phi

# An example case that mirrors the one above
residual_convergence = 1e-8
omega = 0.5  # Relaxation factor

A = np.array([[ 4, -1, -6,  0],
              [-5, -4, 10,  8],
              [ 0,  9,  4, -2],
              [ 1,  0, -7,  5]])
b = np.array([2, 21, -12, -6])

initial_guess = np.zeros(4)

phi = sor_solver(A, b, omega, initial_guess, residual_convergence)
print(phi)

Symmetric successive over-relaxation The version for symmetric matrices A, in which $U=L^{T},\,$ is referred to as Symmetric Successive Over-Relaxation, or SSOR, in which $P=\left({\frac {D}{\omega }}+L\right){\frac {\omega }{2-\omega }}D^{-1}\left({\frac {D}{\omega }}+U\right),$ and the iterative method is $\mathbf {x} ^{k+1}=\mathbf {x} ^{k}-\gamma ^{k}P^{-1}(A\mathbf {x} ^{k}-\mathbf {b} ),\ k\geq 0.$ The SOR and SSOR methods are credited to David M. Young Jr. Other applications of the method A similar technique can be used for any iterative method. If the original iteration had the form $x_{n+1}=f(x_{n})$ then the modified version would use $x_{n+1}^{\mathrm {SOR} }=(1-\omega )x_{n}^{\mathrm {SOR} }+\omega f(x_{n}^{\mathrm {SOR} }).$
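For a scalar illustration of this modified iteration, consider the fixed-point problem x = cos x. The sketch below is our own toy example, not part of the method's standard presentation; the relaxation factor 0.7 is simply a value that happens to damp this overshooting iteration:

import math

def relaxed_fixed_point(f, x0, omega, tol=1e-12, max_iter=1000):
    # Iterate x <- (1 - omega) * x + omega * f(x) until successive
    # iterates agree to within tol; return the fixed point and step count.
    x = x0
    for k in range(1, max_iter + 1):
        x_new = (1 - omega) * x + omega * f(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("no convergence within max_iter steps")

print(relaxed_fixed_point(math.cos, 1.0, 1.0))   # omega = 1: the plain iteration
print(relaxed_fixed_point(math.cos, 1.0, 0.7))   # under-relaxed: fewer steps here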
However, the formulation presented above, used for solving systems of linear equations, is not a special case of this formulation if x is considered to be the complete vector. If this formulation is used instead, the equation for calculating the next vector will look like $\mathbf {x} ^{(k+1)}=(1-\omega )\mathbf {x} ^{(k)}+\omega L_{*}^{-1}(\mathbf {b} -U\mathbf {x} ^{(k)}),$ where $L_{*}=L+D$. Values of $\omega >1$ are used to speed up convergence of a slow-converging process, while values of $\omega <1$ are often used to help establish convergence of a diverging iterative process or speed up the convergence of an overshooting process. There are various methods that adaptively set the relaxation parameter $\omega $ based on the observed behavior of the converging process. Usually they help to reach a super-linear convergence for some problems but fail for others. See also • Jacobi method • Gaussian Belief Propagation • Matrix splitting Notes 1. Young, David M. (May 1, 1950), Iterative methods for solving partial difference equations of elliptical type (PDF), PhD thesis, Harvard University, retrieved 2009-06-15 2. Hackbusch, Wolfgang (2016). "4.6.2". Iterative Solution of Large Sparse Systems of Equations. Applied Mathematical Sciences. Vol. 95. doi:10.1007/978-3-319-28483-5. ISBN 978-3-319-28481-1. 3. Greenbaum, Anne (1997). "10.1". Iterative Methods for Solving Linear Systems. Frontiers in Applied Mathematics. Vol. 17. doi:10.1137/1.9781611970937. ISBN 978-0-89871-396-1. References • This article incorporates text from the article Successive_over-relaxation_method_-_SOR on CFD-Wiki that is under the GFDL license. • Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. ISBN 0-89871-321-8. • Black, Noel & Moore, Shirley. "Successive Overrelaxation Method". MathWorld. • A. Hadjidimos, Successive overrelaxation (SOR) and related methods, Journal of Computational and Applied Mathematics 123 (2000), 177–199. • Yousef Saad, Iterative Methods for Sparse Linear Systems, 1st edition, PWS, 1996. • Netlib's copy of "Templates for the Solution of Linear Systems", by Barrett et al. • Richard S. Varga 2002 Matrix Iterative Analysis, Second ed. (of 1962 Prentice Hall edition), Springer-Verlag. • David M. Young Jr. Iterative Solution of Large Linear Systems, Academic Press, 1971. (reprinted by Dover, 2003) External links • Module for the SOR Method • Tridiagonal linear system solver based on SOR, in C++
Minkowski's second theorem In mathematics, Minkowski's second theorem is a result in the geometry of numbers about the values taken by a norm on a lattice and the volume of its fundamental cell. Setting Let K be a closed convex centrally symmetric body of positive finite volume in n-dimensional Euclidean space Rn. The gauge[1] or distance[2][3] Minkowski functional g attached to K is defined by $g(x)=\inf \left\{\lambda \in \mathbb {R} :x\in \lambda K\right\}.$ Conversely, given a norm g on Rn, we define K to be $K=\left\{x\in \mathbb {R} ^{n}:g(x)\leq 1\right\}.$ Let Γ be a lattice in Rn. The successive minima of K or g on Γ are defined by setting the k-th successive minimum λk to be the infimum of the numbers λ such that λK contains k linearly-independent vectors of Γ. We have $0<\lambda _{1}\leq \lambda _{2}\leq \ldots \leq \lambda _{n}<\infty $. Statement The successive minima satisfy[4][5][6] ${\frac {2^{n}}{n!}}\operatorname {vol} \left(\mathbb {R} ^{n}/\Gamma \right)\leq \lambda _{1}\lambda _{2}\cdots \lambda _{n}\operatorname {vol} (K)\leq 2^{n}\operatorname {vol} \left(\mathbb {R} ^{n}/\Gamma \right).$ Proof A basis of linearly independent lattice vectors $b_{1},b_{2},\ldots ,b_{n}$ can be defined by $g(b_{j})=\lambda _{j}$. The lower bound is proved by considering the convex polytope with $2^{n}$ vertices at $\pm b_{j}/\lambda _{j}$, which has an interior enclosed by K and a volume which is $2^{n}/(n!\,\lambda _{1}\lambda _{2}\cdots \lambda _{n})$ times an integer multiple of a primitive cell of the lattice (as seen by scaling the polytope by $\lambda _{j}$ along each basis vector to obtain $2^{n}$ $n$-simplices with lattice point vectors). To prove the upper bound, consider functions $f_{j}(x)$ sending points $x$ in $K$ to the centroid of the subset of points in $K$ that can be written as $x+\sum _{i=1}^{j-1}a_{i}b_{i}$ for some real numbers $a_{i}$. Then the coordinate transform $x'=h(x)=\sum _{i=1}^{n}(\lambda _{i}-\lambda _{i-1})f_{i}(x)/2$ has a Jacobian determinant $J=\lambda _{1}\lambda _{2}\ldots \lambda _{n}/2^{n}$. If $p$ and $q$ are in the interior of $K$ and $p-q=\sum _{i=1}^{k}a_{i}b_{i}$ (with $a_{k}\neq 0$) then $(h(p)-h(q))=\sum _{i=0}^{k}c_{i}b_{i}\in \lambda _{k}K$ with $c_{k}=\lambda _{k}a_{k}/2$, where the inclusion in $\lambda _{k}K$ (specifically the interior of $\lambda _{k}K$) is due to convexity and symmetry. But lattice points in the interior of $\lambda _{k}K$ are, by definition of $\lambda _{k}$, always expressible as a linear combination of $b_{1},b_{2},\ldots ,b_{k-1}$, so any two distinct points of $K'=h(K)=\{x'\mid h(x)=x'\}$ cannot be separated by a lattice vector. Therefore, $K'$ must be enclosed in a primitive cell of the lattice (which has volume $\operatorname {vol} (\mathbb {R} ^{n}/\Gamma )$), and consequently $\operatorname {vol} (K)/J=\operatorname {vol} (K')\leq \operatorname {vol} (\mathbb {R} ^{n}/\Gamma )$.
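The statement can be illustrated numerically. The following Python brute-force check is our own toy example, not drawn from the referenced texts: it takes Γ = Z², whose fundamental cell has volume 1, and the ellipse K = {(x/3)² + y² ≤ 1}, with gauge g(v) = √((v₁/3)² + v₂²) and area 3π:

import itertools, math

def g(v):
    # Gauge of the ellipse K = {(x/3)^2 + y^2 <= 1}.
    return math.hypot(v[0] / 3.0, v[1])

points = [p for p in itertools.product(range(-10, 11), repeat=2) if p != (0, 0)]
points.sort(key=g)

# Scan lattice points in order of increasing gauge, keeping linearly
# independent ones; their gauges are the successive minima.
chosen, minima = [], []
for p in points:
    if not chosen:
        chosen.append(p)
        minima.append(g(p))
    elif p[0] * chosen[0][1] - p[1] * chosen[0][0] != 0:  # independent of the first
        minima.append(g(p))
        break

lam1, lam2 = minima                    # 1/3 (attained at (±1, 0)) and 1 (at (0, ±1))
product = lam1 * lam2 * 3 * math.pi    # lambda_1 lambda_2 vol(K) = pi
assert 2**2 / math.factorial(2) <= product <= 2**2   # covolume of Z^2 is 1

Here the product λ₁λ₂ vol(K) = π sits strictly between the bounds 2 and 4 of the theorem.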
References 1. Siegel (1989) p. 6 2. Cassels (1957) p. 154 3. Cassels (1971) p. 103 4. Cassels (1957) p. 156 5. Cassels (1971) p. 203 6. Siegel (1989) p. 57 • Cassels, J. W. S. (1957). An Introduction to Diophantine Approximation. Cambridge Tracts in Mathematics and Mathematical Physics. Vol. 45. Cambridge University Press. Zbl 0077.04801. • Cassels, J. W. S. (1997). An Introduction to the Geometry of Numbers. Classics in Mathematics (Reprint of 1971 ed.). Springer-Verlag. ISBN 978-3-540-61788-4. • Nathanson, Melvyn B. (1996). Additive Number Theory: Inverse Problems and the Geometry of Sumsets. Graduate Texts in Mathematics. Vol. 165. Springer-Verlag. pp. 180–185. ISBN 0-387-94655-1. Zbl 0859.11003. • Schmidt, Wolfgang M. (1996). Diophantine Approximations and Diophantine Equations. Lecture Notes in Mathematics. Vol. 1467 (2nd ed.). Springer-Verlag. p. 6. ISBN 3-540-54058-X. Zbl 0754.11020. • Siegel, Carl Ludwig (1989). Komaravolu S. Chandrasekharan (ed.). Lectures on the Geometry of Numbers. Springer-Verlag. ISBN 3-540-50629-2. Zbl 0691.10021.
Successive parabolic interpolation Successive parabolic interpolation is a technique for finding the extremum (minimum or maximum) of a continuous unimodal function by successively fitting parabolas (polynomials of degree two) to a function of one variable at three unique points or, in general, a function of n variables at 1 + n(n+3)/2 points, and at each iteration replacing the "oldest" point with the extremum of the fitted parabola. Advantages Only function values are used, and when this method converges to an extremum, it does so with an order of convergence of approximately 1.325. The superlinear rate of convergence is superior to that of other methods with only linear convergence (such as line search). Moreover, not requiring the computation or approximation of function derivatives makes successive parabolic interpolation a popular alternative to other methods that do require them (such as gradient descent and Newton's method). Disadvantages On the other hand, convergence (even to a local extremum) is not guaranteed when using this method in isolation. For example, if the three points are collinear, the resulting parabola is degenerate and thus does not provide a new candidate point. Furthermore, if function derivatives are available, Newton's method is applicable and exhibits quadratic convergence. Improvements Alternating the parabolic iterations with a more robust method (golden-section search is a popular choice) to choose candidates can greatly increase the probability of convergence without hampering the convergence rate.
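A minimal Python sketch of the scalar iteration may clarify the procedure; it is our own illustration, not a library routine, keeps the three most recent points, and deliberately omits the safeguards discussed above:

import math

def parabolic_minimize(f, x0, x1, x2, tol=1e-10, max_iter=100):
    # Fit a parabola through the three stored points and replace the
    # oldest point with the vertex of that parabola.
    pts = [x0, x1, x2]
    for _ in range(max_iter):
        a, b, c = pts
        fa, fb, fc = f(a), f(b), f(c)
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0:
            # Collinear points: the parabola is degenerate (see above); a
            # robust routine would fall back to, e.g., a golden-section step.
            raise ArithmeticError("degenerate parabola")
        x_new = b - 0.5 * num / den          # vertex of the fitted parabola
        if abs(x_new - pts[-1]) < tol:
            return x_new
        pts = [pts[1], pts[2], x_new]        # drop the oldest point
    return pts[-1]

print(parabolic_minimize(math.cos, 2.5, 3.0, 3.5))   # approximately pi

On this well-behaved unimodal stretch of cos the iterates converge to π; in general, as noted above, the bare iteration carries no such guarantee.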
See also • Inverse quadratic interpolation is a related method that uses parabolas to find roots rather than extrema. • Simpson's rule uses parabolas to approximate definite integrals. References Michael Heath (2002). Scientific Computing: An Introductory Survey (2nd ed.). New York: McGraw-Hill. ISBN 0-07-239910-4.
Successor cardinal In set theory, one can define a successor operation on cardinal numbers in a similar way to the successor operation on the ordinal numbers. The cardinal successor coincides with the ordinal successor for finite cardinals, but in the infinite case they diverge because every infinite ordinal and its successor have the same cardinality (a bijection can be set up between the two by simply sending the last element of the successor to 0, 0 to 1, etc., and fixing ω and all the elements above, in the style of Hilbert's hotel). Using the von Neumann cardinal assignment and the axiom of choice (AC), this successor operation is easy to define: for a cardinal number κ we have $\kappa ^{+}=\left|\inf\{\lambda \in \mathrm {ON} \ :\ \kappa <\left|\lambda \right|\}\right|,$ where ON is the class of ordinals. That is, the successor cardinal is the cardinality of the least ordinal into which a set of the given cardinality can be mapped one-to-one, but which cannot be mapped one-to-one back into that set. That the set above is nonempty follows from Hartogs' theorem, which says that for any well-orderable cardinal, a larger such cardinal is constructible. The minimum actually exists because the ordinals are well-ordered. It is therefore immediate that there is no cardinal number in between κ and κ+. A successor cardinal is a cardinal that is κ+ for some cardinal κ. In the infinite case, the successor operation skips over many ordinal numbers; in fact, every infinite cardinal is a limit ordinal. Therefore, the successor operation on cardinals gains a lot of power in the infinite case (relative to the ordinal successor operation), and consequently the cardinal numbers are a very "sparse" subclass of the ordinals. We define the sequence of alephs (via the axiom of replacement) via this operation, through all the ordinal numbers as follows: $\aleph _{0}=\omega $ $\aleph _{\alpha +1}=\aleph _{\alpha }^{+}$ and for λ an infinite limit ordinal, $\aleph _{\lambda }=\bigcup _{\beta <\lambda }\aleph _{\beta }$ If β is a successor ordinal, then $\aleph _{\beta }$ is a successor cardinal. Cardinals that are not successor cardinals are called limit cardinals; and by the above definition, if λ is a limit ordinal, then $\aleph _{\lambda }$ is a limit cardinal. The standard definition above is restricted to the case when the cardinal can be well-ordered, i.e. is finite or an aleph. Without the axiom of choice, there are cardinals that cannot be well-ordered. Some mathematicians have defined the successor of such a cardinal as the cardinality of the least ordinal that cannot be mapped one-to-one into a set of the given cardinality. That is: $\kappa ^{+}=\left|\inf\{\lambda \in \mathrm {ON} \ :\ |\lambda |\nleq \kappa \}\right|,$ which is the Hartogs number of κ. See also • Cardinal assignment
Successor function In mathematics, the successor function or successor operation sends a natural number to the next one. The successor function is denoted by S, so S(n) = n + 1. For example, S(1) = 2 and S(2) = 3. The successor function is one of the basic components used to build a primitive recursive function. Successor operations are also known as zeration in the context of a zeroth hyperoperation: H0(a, b) = 1 + b. In this context, the extension of zeration is addition, which is defined as repeated succession. Overview The successor function is part of the formal language used to state the Peano axioms, which formalise the structure of the natural numbers. In this formalisation, the successor function is a primitive operation on the natural numbers, in terms of which the standard natural numbers and addition are defined. For example, 1 is defined to be S(0), and addition on natural numbers is defined recursively by: m + 0 = m, m + S(n) = S(m + n). This can be used to compute the addition of any two natural numbers. For example, 5 + 2 = 5 + S(1) = S(5 + 1) = S(5 + S(0)) = S(S(5 + 0)) = S(S(5)) = S(6) = 7. Several constructions of the natural numbers within set theory have been proposed. For example, John von Neumann constructs the number 0 as the empty set {}, and the successor of n, S(n), as the set n ∪ {n}. The axiom of infinity then guarantees the existence of a set that contains 0 and is closed with respect to S. The smallest such set is denoted by N, and its members are called natural numbers.[1] The successor function is the level-0 foundation of the infinite Grzegorczyk hierarchy of hyperoperations, used to build addition, multiplication, exponentiation, tetration, etc. It was studied in 1986 in an investigation involving generalization of the pattern for hyperoperations.[2] It is also one of the primitive functions used in the characterization of computability by recursive functions.
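The recursive definition of addition can be transcribed directly into code. This small Python rendition is our own, purely illustrative, with the natural numbers played by ordinary non-negative ints:

def S(n):
    return n + 1

def add(m, n):
    # m + 0 = m;  m + S(n) = S(m + n)
    return m if n == 0 else S(add(m, n - 1))

assert add(5, 2) == 7   # unfolds exactly as the 5 + 2 expansion above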
See also • Successor ordinal • Successor cardinal • Increment and decrement operators • Sequence References 1. Halmos, Chapter 11 2. Rubtsov, C.A.; Romerio, G.F. (2004). "Ackermann's Function and New Arithmetical Operations" (PDF). • Paul R. Halmos (1968). Naive Set Theory. Nostrand.
Successor ordinal In set theory, the successor of an ordinal number α is the smallest ordinal number greater than α. An ordinal number that is a successor is called a successor ordinal. The ordinals 1, 2, and 3 are the first three successor ordinals and the ordinals ω+1, ω+2 and ω+3 are the first three infinite successor ordinals. Properties Every ordinal other than 0 is either a successor ordinal or a limit ordinal.[1] In von Neumann's model Using von Neumann's ordinal numbers (the standard model of the ordinals used in set theory), the successor S(α) of an ordinal number α is given by the formula[1] $S(\alpha )=\alpha \cup \{\alpha \}.$ Since the ordering on the ordinal numbers is given by α < β if and only if α ∈ β, it is immediate that there is no ordinal number between α and S(α), and it is also clear that α < S(α).
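The von Neumann successor can be tried out concretely for finite ordinals. The following Python sketch is our own illustration, representing each finite ordinal as the frozenset of all smaller ordinals:

def S(a):
    # Successor in the von Neumann model: S(a) = a ∪ {a}.
    return a | frozenset({a})

zero = frozenset()
one, two, three = S(zero), S(S(zero)), S(S(S(zero)))

# The ordering alpha < beta is membership: alpha ∈ beta.
assert zero in one and one in two and two in three
assert len(three) == 3   # the ordinal 3 has exactly the elements 0, 1, 2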
Ordinal addition The successor operation can be used to define ordinal addition rigorously via transfinite recursion as follows: $\alpha +0=\alpha $ $\alpha +S(\beta )=S(\alpha +\beta )$ and for a limit ordinal λ $\alpha +\lambda =\bigcup _{\beta <\lambda }(\alpha +\beta )$ In particular, S(α) = α + 1. Multiplication and exponentiation are defined similarly. Topology The successor points and zero are the isolated points of the class of ordinal numbers, with respect to the order topology.[2] See also • Ordinal arithmetic • Limit ordinal • Successor cardinal References 1. Cameron, Peter J. (1999), Sets, Logic and Categories, Springer Undergraduate Mathematics Series, Springer, p. 46, ISBN 9781852330569. 2. Devlin, Keith (1993), The Joy of Sets: Fundamentals of Contemporary Set Theory, Undergraduate Texts in Mathematics, Springer, Exercise 3C, p. 100, ISBN 9780387940946.
Situation calculus The situation calculus is a logic formalism designed for representing and reasoning about dynamical domains. It was first introduced by John McCarthy in 1963.[1] The main version of the situation calculus presented in this article is based on that introduced by Ray Reiter in 1991. It is followed by sections about McCarthy's 1986 version and a logic programming formulation. Overview The situation calculus represents changing scenarios as a set of first-order logic formulae. The basic elements of the calculus are: • The actions that can be performed in the world • The fluents that describe the state of the world • The situations A domain is formalized by a number of formulae, namely: • Action precondition axioms, one for each action • Successor state axioms, one for each fluent • Axioms describing the world in various situations • The foundational axioms of the situation calculus A simple robot world will be modeled as a running example. In this world there is a single robot and several inanimate objects. The world is laid out according to a grid so that locations can be specified in terms of $(x,y)$ coordinate points. It is possible for the robot to move around the world, and to pick up and drop items. Some items may be too heavy for the robot to pick up, or fragile, so that they break when they are dropped. The robot also has the ability to repair any broken items that it is holding. Elements The main elements of the situation calculus are the actions, fluents and situations. A number of objects are also typically involved in the description of the world. The situation calculus is based on a sorted domain with three sorts: actions, situations, and objects, where the objects include everything that is not an action or a situation. Variables of each sort can be used. While actions, situations, and objects are elements of the domain, the fluents are modeled as either predicates or functions. Actions The actions form a sort of the domain. Variables of sort action can be used. Actions can be quantified. In the example robot world, possible action terms would be $move(x,y)$ to model the robot moving to a new location $(x,y)$, and $pickup(o)$ to model the robot picking up an object o. A special predicate Poss is used to indicate when an action is executable. Situations In the situation calculus, a dynamic world is modeled as progressing through a series of situations as a result of various actions being performed within the world. A situation represents a history of action occurrences. In the Reiter version of the situation calculus described here, a situation does not represent a state, contrary to the literal meaning of the term and contrary to the original definition by McCarthy and Hayes. This point has been summarized by Reiter as follows: A situation is a finite sequence of actions. Period. It's not a state, it's not a snapshot, it's a history.[2] The situation before any actions have been performed is typically denoted $S_{0}$ and called the initial situation. The new situation resulting from the performance of an action is denoted using the function symbol do (some other references also use result). This function symbol has a situation and an action as arguments, and a situation as a result, the latter being the situation that results from performing the given action in the given situation.
The fact that situations are sequences of actions and not states is enforced by an axiom stating that $do(a,s)$ is equal to $do(a',s')$ if and only if $a=a'$ and $s=s'$. This condition would make no sense if situations were states, as two different actions executed in two different states can result in the same state. In the example robot world, if the robot's first action is to move to location $(2,3)$, the first action is $move(2,3)$ and the resulting situation is $do(move(2,3),S_{0})$. If its next action is to pick up the ball, the resulting situation is $do(pickup(Ball),do(move(2,3),S_{0}))$. Situation terms like $do(move(2,3),S_{0})$ and $do(pickup(Ball),do(move(2,3),S_{0}))$ denote the sequences of executed actions, not descriptions of the states that result from execution. Fluents Statements whose truth value may change are modeled by relational fluents, predicates which take a situation as their final argument. Also possible are functional fluents, functions which take a situation as their final argument and return a situation-dependent value. Fluents may be thought of as "properties of the world". In the example, the fluent ${\textit {isCarrying}}(o,s)$ can be used to indicate that the robot is carrying a particular object in a particular situation. If the robot initially carries nothing, ${\textit {isCarrying}}(Ball,S_{0})$ is false while ${\textit {isCarrying}}(Ball,do(pickup(Ball),S_{0}))$ is true. The location of the robot can be modeled using a functional fluent $location(s)$ which returns the location $(x,y)$ of the robot in a particular situation. Formulae The description of a dynamic world is encoded in second-order logic using three kinds of formulae: formulae about actions (preconditions and effects), formulae about the state of the world, and foundational axioms. Action preconditions Some actions may not be executable in a given situation. For example, it is impossible to put down an object unless one is in fact carrying it. The restrictions on the performance of actions are modeled by literals of the form ${\textit {Poss}}(a,s)$, where a is an action, s a situation, and Poss is a special binary predicate denoting executability of actions. In the example, the condition that dropping an object is only possible when one is carrying it is modeled by: ${\textit {Poss}}(drop(o),s)\leftrightarrow {\textit {isCarrying}}(o,s)$ As a more complex example, the following models that the robot can carry only one object at a time, and that some objects are too heavy for the robot to lift (indicated by the predicate heavy): ${\textit {Poss}}(pickup(o),s)\leftrightarrow (\forall z\ \neg {\textit {isCarrying}}(z,s))\wedge \neg heavy(o)$ Action effects Given that an action is possible in a situation, one must specify the effects of that action on the fluents. This is done by the effect axioms. For example, the fact that picking up an object causes the robot to be carrying it can be modeled as: $Poss(pickup(o),s)\rightarrow {\textit {isCarrying}}(o,do(pickup(o),s))$ It is also possible to specify conditional effects, which are effects that depend on the current state. The following models that some objects are fragile (indicated by the predicate fragile) and dropping them causes them to be broken (indicated by the fluent broken): $Poss(drop(o),s)\wedge fragile(o)\rightarrow broken(o,do(drop(o),s))$ While this formula correctly describes the effect of the actions, it is not sufficient to correctly describe the action in logic, because of the frame problem.
The frame problem While the above formulae seem suitable for reasoning about the effects of actions, they have a critical weakness: they cannot be used to derive the non-effects of actions. For example, it is not possible to deduce that after picking up an object, the robot's location remains unchanged. This requires a so-called frame axiom, a formula like: $Poss(pickup(o),s)\wedge location(s)=(x,y)\rightarrow location(do(pickup(o),s))=(x,y)$ The need to specify frame axioms has long been recognised as a problem in axiomatizing dynamic worlds, and is known as the frame problem. As there are generally a very large number of such axioms, it is very easy for the designer to leave out a necessary frame axiom, or to forget to modify all appropriate axioms when a change to the world description is made. The successor state axioms The successor state axioms "solve" the frame problem in the situation calculus. According to this solution, the designer must enumerate as effect axioms all the ways in which the value of a particular fluent can be changed. The effect axioms affecting the value of fluent $F({\overrightarrow {x}},s)$ can be written in generalised form as a positive and a negative effect axiom: $Poss(a,s)\wedge \gamma _{F}^{+}({\overrightarrow {x}},a,s)\rightarrow F({\overrightarrow {x}},do(a,s))$ $Poss(a,s)\wedge \gamma _{F}^{-}({\overrightarrow {x}},a,s)\rightarrow \neg F({\overrightarrow {x}},do(a,s))$ The formula $\gamma _{F}^{+}$ describes the conditions under which action a in situation s makes the fluent F become true in the successor situation $do(a,s)$. Likewise, $\gamma _{F}^{-}$ describes the conditions under which performing action a in situation s makes fluent F false in the successor situation. If this pair of axioms describes all the ways in which fluent F can change value, they can be rewritten as a single axiom: $Poss(a,s)\rightarrow \left[F({\overrightarrow {x}},do(a,s))\leftrightarrow \gamma _{F}^{+}({\overrightarrow {x}},a,s)\vee \left(F({\overrightarrow {x}},s)\wedge \neg \gamma _{F}^{-}({\overrightarrow {x}},a,s)\right)\right]$ In words, this formula states: "given that it is possible to perform action a in situation s, the fluent F would be true in the resulting situation $do(a,s)$ if and only if performing a in s would make it true, or it is true in situation s and performing a in s would not make it false." By way of example, the value of the fluent broken introduced above is given by the following successor state axiom: $Poss(a,s)\rightarrow \left[broken(o,do(a,s))\leftrightarrow a=drop(o)\wedge fragile(o)\vee broken(o,s)\wedge a\neq repair(o)\right]$ States The properties of the initial or any other situation can be specified by simply stating them as formulae. For example, a fact about the initial state is formalized by making assertions about $S_{0}$ (which is not a state, but a situation). The following statements model that initially, the robot carries nothing, is at location $(0,0)$, and there are no broken objects: $\forall z\,\neg {\textit {isCarrying}}(z,S_{0})$ $location(S_{0})=(0,0)\,$ $\forall o\,\neg broken(o,S_{0})$ Foundational axioms The foundational axioms of the situation calculus formalize the idea that situations are histories by having $do(a,s)=do(a',s')\iff a=a'\land s=s'$. They also include other properties such as second-order induction on situations.
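The successor state axiom for broken can be executed by unwinding a situation's action history back to $S_{0}$. The following Python sketch is our own informal rendition of the running robot example; representing actions and situations as tuples is an assumption made purely for illustration, not part of the formalism:

FRAGILE = {"Ball"}           # fragile(o) for the example domain
S0 = ()                      # the initial situation: the empty action history

def do(action, s):
    # A situation is just the sequence of actions performed so far.
    return s + (action,)

def broken(obj, s):
    if s == S0:
        return False         # initially there are no broken objects
    *rest, a = s
    # Successor state axiom: broken after a iff a = drop(o) and fragile(o),
    # or broken before and a != repair(o).
    if a == ("drop", obj) and obj in FRAGILE:
        return True
    return broken(obj, tuple(rest)) and a != ("repair", obj)

s = do(("repair", "Ball"), do(("drop", "Ball"), do(("pickup", "Ball"), S0)))
print(broken("Ball", s))     # False: the ball broke when dropped, then was repaired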
Regression Regression is a mechanism for proving consequences in the situation calculus. It is based on expressing a formula containing the situation $do(a,s)$ in terms of a formula containing the action a and the situation s, but not the situation $do(a,s)$. By iterating this procedure, one can end up with an equivalent formula containing only the initial situation $S_{0}$. Proving consequences is supposedly simpler from this formula than from the original one. GOLOG GOLOG is a logic programming language based on the situation calculus.[3][4] The original version of the situation calculus The main difference between the original situation calculus by McCarthy and Hayes and the one in use today is the interpretation of situations. In the modern version of the situation calculus, a situation is a sequence of actions. Originally, situations were defined as "the complete state of the universe at an instant of time". It was clear from the beginning that such situations could not be completely described; the idea was simply to give some statements about situations, and derive consequences from them. This is also different from the approach taken by the fluent calculus, where a state can be a collection of known facts, that is, a possibly incomplete description of the universe. In the original version of the situation calculus, fluents are not reified. In other words, conditions that can change are represented by predicates, not by functions. Actually, McCarthy and Hayes defined a fluent as a function that depends on the situation, but they then proceeded to use predicates throughout to represent fluents. For example, the fact that it is raining at place x in the situation s is represented by the literal $raining(x,s)$. In the 1986 version of the situation calculus by McCarthy, functional fluents are used. For example, the position of an object x in the situation s is represented by the value of $location(x,s)$, where location is a function. Statements about such functions can be given using equality: $location(x,s)=location(x,s')$ means that the location of the object x is the same in the two situations s and $s'$. The execution of actions is represented by the function result: the execution of the action a in the situation s is the situation ${\textit {result}}(a,s)$. The effects of actions are expressed by formulae relating fluents in situation s and fluents in situations ${\textit {result}}(a,s)$. For example, that the action of opening the door results in the door being open if not locked is represented by: $\neg locked(door,s)\rightarrow open(door,{\textit {result}}(opens,s))$ The predicates locked and open represent the conditions of a door being locked and open, respectively. Since these conditions may vary, they are represented by predicates with a situation argument. The formula says that if the door is not locked in a situation, then the door is open after executing the action of opening, this action being represented by the constant opens. These formulae are not sufficient to derive everything that is considered plausible. Indeed, fluents at different situations are only related if they are preconditions and effects of actions; if a fluent is not affected by an action, there is no way to deduce it did not change. For example, the formula above does not imply that $\neg locked(door,{\textit {result}}(opens,s))$ follows from $\neg locked(door,s)$, which is what one would expect (the door is not made locked by opening it). In order for inertia to hold, formulae called frame axioms are needed.
These formulae specify all non-effects of actions: $\neg locked(door,s)\rightarrow \neg locked(door,{\textit {result}}(opens,s))$ In the original formulation of the situation calculus, the initial situation, later denoted by $S_{0}$, is not explicitly identified. The initial situation is not needed if situations are taken to be descriptions of the world. For example, the scenario in which the door was closed but not locked and the action of opening it is performed is formalized by taking a constant s to mean the initial situation and making statements about it (e.g., $\neg locked(door,s)$). That the door is open after the change is reflected by the formula $open(door,{\textit {result}}(opens,s))$ being entailed. The initial situation is instead necessary if, as in the modern situation calculus, a situation is taken to be a history of actions, as the initial situation represents the empty sequence of actions. The version of the situation calculus introduced by McCarthy in 1986 differs from the original one in its use of functional fluents (e.g., $location(x,s)$ is a term representing the position of x in the situation s) and in its attempt to use circumscription to replace the frame axioms. The situation calculus as a logic program It is also possible (e.g. Kowalski 1979, Apt and Bezem 1990, Shanahan 1997) to write the situation calculus as a logic program: ${\textit {Holds}}(f,do(a,s))\leftarrow {\textit {Poss}}(a,s)\wedge {\textit {Initiates}}(a,f,s)$ ${\textit {Holds}}(f,do(a,s))\leftarrow {\textit {Poss}}(a,s)\wedge {\textit {Holds}}(f,s)\wedge \neg {\textit {Terminates}}(a,f,s)$ Here Holds is a meta-predicate and the variable f ranges over fluents. The predicates Poss, Initiates and Terminates correspond to the predicates Poss, $\gamma _{F}^{+}({\overrightarrow {x}},a,s)$, and $\gamma _{F}^{-}({\overrightarrow {x}},a,s)$ respectively. The left arrow ← is half of the equivalence ↔. The other half is implicit in the completion of the program, in which negation is interpreted as negation as failure. Induction axioms are also implicit, and are needed only to prove program properties. Backward reasoning as in SLD resolution, which is the usual mechanism used to execute logic programs, implements regression implicitly. See also • Frame problem • Event calculus References 1. McCarthy, John (1963). "Situations, actions and causal laws" (PDF). Stanford University Technical Report. Archived from the original (PDF) on March 21, 2020. 2. "ECSTER Debate Contribution". 3. Lakemeyer, Gerhard. "The Situation Calculus and Golog: A Tutorial" (PDF). www.hybrid-reasoning.org. Retrieved 16 July 2014. 4. "Publications about GOLOG". Retrieved 16 July 2014. • J. McCarthy and P. Hayes (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence, 4:463–502. Edinburgh University Press, 1969. • R. Kowalski (1979). Logic for Problem Solving. Elsevier North Holland. • K.R. Apt and M. Bezem (1990). Acyclic Programs. In: 7th International Conference on Logic Programming. MIT Press. Jerusalem, Israel. • R. Reiter (1991). The frame problem in the situation calculus: a simple solution (sometimes) and a completeness result for goal regression. In Vladimir Lifshitz, editor, Artificial intelligence and mathematical theory of computation: papers in honour of John McCarthy, pages 359–380, San Diego, CA, USA. Academic Press Professional, Inc. 1991. • M. Shanahan (1997).
Solving the Frame Problem: a Mathematical Investigation of the Common Sense Law of Inertia. MIT Press. • H. Levesque, F. Pirri, and R. Reiter (1998). Foundations for the situation calculus. Electronic Transactions on Artificial Intelligence, 2(3–4):159–178. • F. Pirri and R. Reiter (1999). Some contributions to the metatheory of the Situation Calculus. Journal of the ACM, 46(3):325–361. doi:10.1145/316542.316545 • R. Reiter (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. The MIT Press.
Set-builder notation In set theory and its applications to logic, mathematics, and computer science, set-builder notation is a mathematical notation for describing a set by enumerating its elements, or stating the properties that its members must satisfy.[1] $\{n\mid \exists k\in \mathbb {Z} ,n=2k\}$ The set of all even integers, expressed in set-builder notation. Defining sets by properties is also known as set comprehension, set abstraction or as defining a set's intension. Sets defined by enumeration Main article: Set (mathematics) § Roster notation A set can be described directly by enumerating all of its elements between curly brackets, as in the following two examples: • $\{7,3,15,31\}$ is the set containing the four numbers 3, 7, 15, and 31, and nothing else. • $\{a,c,b\}=\{a,b,c\}$ is the set containing a, b, and c, and nothing else (there is no order among the elements of a set). This is sometimes called the "roster method" for specifying a set.[2] When it is desired to denote a set that contains elements from a regular sequence, an ellipsis notation may be employed, as shown in the next examples: • $\{1,2,3,\ldots ,100\}$ is the set of integers between 1 and 100 inclusive. • $\{1,2,3,\ldots \}$ is the set of natural numbers. • $\{\ldots ,-2,-1,0,1,2,\ldots \}=\{0,1,-1,2,-2,\ldots \}$ is the set of all integers. There is no order among the elements of a set (this explains and validates the equality of the last example), but with the ellipses notation, we use an ordered sequence before (or after) the ellipsis as a convenient notational vehicle for explaining which elements are in a set. The first few elements of the sequence are shown, then the ellipses indicate that the simplest interpretation should be applied for continuing the sequence. Should no terminating value appear to the right of the ellipses, then the sequence is considered to be unbounded. In general, $\{1,\dots ,n\}$ denotes the set of all natural numbers $i$ such that $1\leq i\leq n$. Another notation for $\{1,\dots ,n\}$ is the bracket notation $[n]$. A subtle special case is $n=0$, in which $[0]=\{1,\dots ,0\}$ is equal to the empty set $\emptyset $. Similarly, $\{a_{1},\dots ,a_{n}\}$ denotes the set of all $a_{i}$ for $1\leq i\leq n$. In each preceding example, each set is described by enumerating its elements. Not all sets can be described in this way, or if they can, their enumeration may be too long or too complicated to be useful. Therefore, many sets are defined by a property that characterizes their elements. This characterization may be done informally using general prose, as in the following example. • $\{$ addresses on Pine Street $\}$ is the set of all addresses on Pine Street. However, the prose approach may lack accuracy or be ambiguous. Thus, set-builder notation is often used with a predicate characterizing the elements of the set being defined, as described in the following section. Sets defined by a predicate Set-builder notation can be used to describe a set that is defined by a predicate, that is, a logical formula that evaluates to true for an element of the set, and false otherwise.[3] In this form, set-builder notation has three parts: a variable, a colon or vertical bar separator, and a predicate. Thus there is a variable on the left of the separator, and a rule on the right of it. These three parts are contained in curly brackets: $\{x\mid \Phi (x)\}$ or $\{x:\Phi (x)\}.$ The vertical bar (or colon) is a separator that can be read as "such that", "for which", or "with the property that". 
The formula Φ(x) is said to be the rule or the predicate. All values of x for which the predicate holds (is true) belong to the set being defined. All values of x for which the predicate does not hold do not belong to the set. Thus $\{x\mid \Phi (x)\}$ is the set of all values of x that satisfy the formula Φ.[4] It may be the empty set, if no value of x satisfies the formula. Specifying the domain A domain E can appear on the left of the vertical bar:[5] $\{x\in E\mid \Phi (x)\},$ or by adjoining it to the predicate: $\{x\mid x\in E{\text{ and }}\Phi (x)\}\quad {\text{or}}\quad \{x\mid x\in E\land \Phi (x)\}.$ The ∈ symbol here denotes set membership, while the $\land $ symbol denotes the logical "and" operator, known as logical conjunction. This notation represents the set of all values of x that belong to some given set E for which the predicate is true (see "Set existence axiom" below). If $\Phi (x)$ is a conjunction $\Phi _{1}(x)\land \Phi _{2}(x)$, then $\{x\in E\mid \Phi (x)\}$ is sometimes written $\{x\in E\mid \Phi _{1}(x),\Phi _{2}(x)\}$, using a comma instead of the symbol $\land $. In general, it is not a good idea to consider sets without defining a domain of discourse, as this would represent the subset of all possible things that may exist for which the predicate is true. This can easily lead to contradictions and paradoxes. For example, Russell's paradox shows that the expression $\{x~|~x\not \in x\},$ although seemingly well formed as a set builder expression, cannot define a set without producing a contradiction.[6] In cases where the set E is clear from context, it may not be explicitly specified. It is common in the literature for an author to state the domain ahead of time, and then not specify it in the set-builder notation. For example, an author may say something such as, "Unless otherwise stated, variables are to be taken to be natural numbers," though in less formal contexts where the domain can be assumed, a written mention is often unnecessary. Examples The following examples illustrate particular sets defined by set-builder notation via predicates. In each case, the domain is specified on the left side of the vertical bar, while the rule is specified on the right side; finite analogues of several of these sets appear as code after the list. • $\{x\in \mathbb {R} \mid x>0\}$ is the set of all strictly positive real numbers, which can be written in interval notation as $(0,\infty )$. • $\{x\in \mathbb {R} \mid |x|=1\}$ is the set $\{-1,1\}$. This set can also be defined as $\{x\in \mathbb {R} \mid x^{2}=1\}$; see equivalent predicates yield equal sets below. • For each integer m, we can define $G_{m}=\{x\in \mathbb {Z} \mid x\geq m\}=\{m,m+1,m+2,\ldots \}$. As an example, $G_{3}=\{x\in \mathbb {Z} \mid x\geq 3\}=\{3,4,5,\ldots \}$ and $G_{-2}=\{-2,-1,0,\ldots \}$. • $\{(x,y)\in \mathbb {R} \times \mathbb {R} \mid 0<y<f(x)\}$ is the set of pairs of real numbers such that y is greater than 0 and less than f(x), for a given function f. Here the cartesian product $\mathbb {R} \times \mathbb {R} $ denotes the set of ordered pairs of real numbers. • $\{n\in \mathbb {N} \mid (\exists k)[k\in \mathbb {N} \land n=2k]\}$ is the set of all even natural numbers. The $\land $ sign stands for "and", which is known as logical conjunction. The ∃ sign stands for "there exists", which is known as existential quantification. So for example, $(\exists x)P(x)$ is read as "there exists an x such that P(x)". • $\{n\mid (\exists k\in \mathbb {N} )[n=2k]\}$ is a notational variant for the same set of even natural numbers. It is not necessary to specify that n is a natural number, as this is implied by the formula on the right. • $\{a\in \mathbb {R} \mid (\exists p\in \mathbb {Z} )(\exists q\in \mathbb {Z} )[q\not =0\land aq=p]\}$ is the set of rational numbers; that is, real numbers that can be written as the ratio of two integers.
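Several of these sets have direct finite analogues as Python set comprehensions; the bounded range below is an illustrative stand-in for an infinite domain of discourse.

```python
# Finite analogues of the examples above, as Python set comprehensions.
domain = range(-10, 11)  # stand-in for the domain of discourse

positives = {x for x in domain if x > 0}        # {x in R | x > 0}, truncated
unit_abs  = {x for x in domain if abs(x) == 1}  # {x in R | |x| = 1}
G_3       = {x for x in domain if x >= 3}       # G_3 = {x in Z | x >= 3}, truncated

# Even natural numbers via the existential form (exists k)[n = 2k],
# here taking 0 to be a natural number:
evens = {n for n in domain if n >= 0 and any(n == 2 * k for k in domain)}

print(unit_abs)  # {1, -1}
```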
More complex expressions on the left side of the notation An extension of set-builder notation replaces the single variable x with an expression. So instead of $\{x\mid \Phi (x)\}$, we may have $\{f(x)\mid \Phi (x)\},$ which should be read $\{f(x)\mid \Phi (x)\}=\{y\mid \exists x(y=f(x)\wedge \Phi (x))\}$. For example: • $\{2n\mid n\in \mathbb {N} \}$, where $\mathbb {N} $ is the set of all natural numbers, is the set of all even natural numbers. • $\{p/q\mid p,q\in \mathbb {Z} ,q\not =0\}$, where $\mathbb {Z} $ is the set of all integers, is $\mathbb {Q} ,$ the set of all rational numbers. • $\{2t+1\mid t\in \mathbb {Z} \}$ is the set of odd integers. • $\{(t,2t+1)\mid t\in \mathbb {Z} \}$ creates a set of pairs, where each pair puts an integer into correspondence with an odd integer. When inverse functions can be explicitly stated, the expression on the left can be eliminated through simple substitution. Consider the example set $\{2t+1\mid t\in \mathbb {Z} \}$. Make the substitution $u=2t+1$, which is to say $t=(u-1)/2$, then replace t in the set builder notation to find $\{2t+1\mid t\in \mathbb {Z} \}=\{u\mid (u-1)/2\in \mathbb {Z} \}.$
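Python's comprehensions support the same extension directly: an arbitrary expression may appear to the left of the for keyword. A small sketch over bounded ranges, which stand in for the infinite domains:

```python
# {f(x) | Phi(x)} with an expression on the left of the "for".
odds = {2 * t + 1 for t in range(-10, 11)}  # {2t+1 | t in Z}, truncated

# After the substitution u = 2t + 1, the single-variable form
# {u | (u-1)/2 in Z} describes the same set:
odds_subst = {u for u in range(-19, 22) if (u - 1) % 2 == 0}
assert odds == odds_subst

pairs = {(t, 2 * t + 1) for t in range(3)}  # {(t, 2t+1) | t in Z}, truncated
print(sorted(pairs))  # [(0, 1), (1, 3), (2, 5)]
```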
Equivalent predicates yield equal sets Two sets are equal if and only if they have the same elements. Sets defined by set builder notation are equal if and only if their set builder rules, including the domain specifiers, are equivalent. That is $\{x\in A\mid P(x)\}=\{x\in B\mid Q(x)\}$ if and only if $(\forall t)[(t\in A\land P(t))\Leftrightarrow (t\in B\land Q(t))]$. Therefore, in order to prove the equality of two sets defined by set builder notation, it suffices to prove the equivalence of their predicates, including the domain qualifiers. For example, $\{x\in \mathbb {R} \mid x^{2}=1\}=\{x\in \mathbb {Q} \mid |x|=1\}$ because the two rule predicates are logically equivalent: $(x\in \mathbb {R} \land x^{2}=1)\Leftrightarrow (x\in \mathbb {Q} \land |x|=1).$ This equivalence holds because, for any real number x, we have $x^{2}=1$ if and only if x is a rational number with $|x|=1$. In particular, both sets are equal to the set $\{-1,1\}$. Set existence axiom In many formal set theories, such as Zermelo–Fraenkel set theory, set builder notation is not part of the formal syntax of the theory. Instead, there is a set existence axiom scheme, which states that if E is a set and Φ(x) is a formula in the language of set theory, then there is a set Y whose members are exactly the elements of E that satisfy Φ: $(\forall E)(\exists Y)(\forall x)[x\in Y\Leftrightarrow x\in E\land \Phi (x)].$ The set Y obtained from this axiom is exactly the set described in set builder notation as $\{x\in E\mid \Phi (x)\}$. In programming languages A similar notation available in a number of programming languages (notably Python and Haskell) is the list comprehension, which combines map and filter operations over one or more lists. In Python, the set-builder's braces are replaced with square brackets, parentheses, or curly braces, giving list, generator, and set objects, respectively. Python uses an English-based syntax. Haskell replaces the set-builder's braces with square brackets and uses symbols, including the standard set-builder vertical bar. The same can be achieved in Scala using Sequence Comprehensions, where the "for" keyword returns a list of the yielded variables using the "yield" keyword.[7] Consider these set-builder notation examples in some programming languages; example (1) is the comprehension for $\{l\ |\ l\in L\}$ and example (2) the one for $\{(k,x)\ |\ k\in K\wedge x\in X\wedge P(x)\}$:
Python: (1) {l for l in L} (2) {(k, x) for k in K for x in X if P(x)}
Haskell: (1) [l | l <- ls] (2) [(k, x) | k <- ks, x <- xs, p x]
Scala: (1) for (l <- L) yield l (2) for (k <- K; x <- X if P(x)) yield (k,x)
C#: (1) from l in L select l (2) from k in K from x in X where P(x) select (k,x)
SQL: (1) SELECT l FROM L_set (2) SELECT k, x FROM K_set, X_set WHERE P(x)
Prolog: (1) setof(L,member(L,Ls),Result) (2) setof((K,X),(member(K,Ks),member(X,Xs),call(P,X)),Result)
Ruby: (1) L.map{|l| l} (2) K.product(X).select{|k,x| P(x) }
Erlang: (1) [l || l <- ls]
Julia: (1) [l for l ∈ L] (2) [(k, x) for k ∈ K for x ∈ X if P(x)]
The set builder notation and list comprehension notation are both instances of a more general notation known as monad comprehensions, which permits map/filter-like operations over any monad with a zero element. See also • Glossary of set theory Notes 1. Rosen, Kenneth (2007). Discrete Mathematics and its Applications (6th ed.). New York, NY: McGraw-Hill. pp. 111–112. ISBN 978-0-07-288008-3. 2. Richard Aufmann, Vernon C. Barker, and Joanne Lockwood, 2007, Intermediate Algebra with Applications, Brooks Cole, p. 6. 3. Michael J Cullinan, 2012, A Transition to Mathematics with Proofs, Jones & Bartlett, pp. 44ff. 4. Weisstein, Eric W. "Set". mathworld.wolfram.com. Retrieved 20 August 2020. 5. "Set-Builder Notation". mathsisfun.com. Retrieved 20 August 2020. 6. Irvine, Andrew David; Deutsch, Harry (9 October 2016) [1995]. "Russell's Paradox". Stanford Encyclopedia of Philosophy. Retrieved 6 August 2017. 7. "Sequence Comprehensions". Scala. Retrieved 6 August 2017.
Sucharit Sarkar Sucharit Sarkar (born 1983) is an Indian topologist and professor of mathematics at the University of California, Los Angeles who works in low-dimensional topology.
Born: 1983, Calcutta, India
Alma mater: Princeton University (Ph.D.); Indian Statistical Institute (Bachelor)
Fields: Mathematics
Institutions: University of California, Los Angeles
Thesis: Topics in Heegaard Floer homology (2009)
Doctoral advisor: Zoltán Szabó
Website: https://www.math.ucla.edu/~sucharit/
Education and career Sarkar attended secondary school at South Point High School in his hometown, Calcutta, India. In the International Mathematical Olympiads in 2001 and 2002, he received gold and silver medals respectively.[1] He completed his Bachelor of Mathematics degree from the Indian Statistical Institute, Bangalore in 2005. Sarkar received his Ph.D. from Princeton University in 2009 under the guidance of Zoltán Szabó.[2] He went on to postdoctoral fellowships at the Mathematical Sciences Research Institute and Columbia University, before becoming an assistant professor at Princeton University in 2012. In 2016 he moved to the University of California, Los Angeles.[3] Sarkar's research area is low-dimensional topology, with particular interests in knot theory, Heegaard Floer homology, and Khovanov homology. Awards and honors • Sarkar was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro in 2018.[4] • Sarkar was a Clay Research Fellow from 2009 until 2013.[5] References 1. Official IMO Website 2. Sucharit Sarkar at the Mathematics Genealogy Project 3. "Sucharit Sarkar's curriculum vita" (PDF). Retrieved September 24, 2019. 4. "ICM 2018 List of Speakers". Retrieved September 24, 2019. 5. "Past Clay Research Fellows". Retrieved September 24, 2019. External links • Official website
Logic Masters India Logic Masters India (commonly abbreviated as 'LMI') is the Indian representative of the World Puzzle Federation (WPF) and has been responsible for conducting national sudoku championships since 2008 to select the Indian team for the world championships. It also aims to organize various sudoku and puzzle activities in India.[1] There are three main contest types: Sudoku Mahabharat, Puzzle Ramayan and Daily Puzzle Test. Sudoku Mahabharat & Puzzle Ramayan Each year, the Sudoku Mahabharat (SM) championship consists of 4 online rounds while the Puzzle Ramayan (PR) consists of 6 (approximately one every 2–3 weeks), based on different categories and themes of sudoku variants and puzzle types. The online rounds are open to all participants, including international solvers. There are 3–4 spots in the Indian A team for the World Sudoku Championship (WSC) and World Puzzle Championship (WPC), which will be decided during the offline finals of the tournaments. Each Sudoku Mahabharat round consists of six standard sudokus (two 6×6, four 9×9) and six sudoku variants (each variant appears in both sizes, i.e. 6×6 and 9×9).[2] Each Puzzle Ramayan round consists of three puzzles each in 6 puzzle types within the theme (in double-theme rounds, three puzzles each in 3 puzzle types within each theme) and two exploratory variant puzzles in two types within the original theme (in double-theme rounds, 2 variant puzzles in one type within each theme).[3] Participants get the points allotted for each sudoku correctly submitted. The test uses instant grading, where the solver can confirm whether the solution is correct immediately after submitting. Each incorrect submission reduces the puzzle's potential score: the first, second, third, and fourth incorrect submissions reduce the potential score to 90%, 70%, 40%, and 0% respectively.[4]
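The instant-grading rule can be stated compactly in code. This is a small illustrative sketch: the multiplier sequence follows the rule just described, while the function name and the point values in the example are hypothetical.

```python
# Potential score left for a puzzle after a number of wrong submissions.
MULTIPLIERS = [1.0, 0.9, 0.7, 0.4, 0.0]  # indexed by incorrect attempts so far

def potential_score(points, wrong_attempts):
    """Score awarded if the next submission of this puzzle is correct."""
    i = min(wrong_attempts, len(MULTIPLIERS) - 1)
    return points * MULTIPLIERS[i]

print(potential_score(50, 0))  # 50.0: full points on the first attempt
print(potential_score(50, 2))  # 35.0: 70% after two incorrect submissions
print(potential_score(50, 4))  # 0.0: no points after four incorrect submissions
```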
Sudoku Mahabharat 2022[6]
Round | Author | Dates[5] | India 1 | India 2 | India 3
Standard & Neighbours | R. Kumaresan | 25 Feb – 02 Mar 2022 | Rohan Rao | James Peter | Nityant Agarwal
Odd Even & Hybrids | Madhav S. & Arun I. | 01 – 06 Apr 2022 | Kishore Kumar | Manjiri | James Peter
Converse & Outside | Harmeet S. & Dhruvarajsinh P. | 22 – 27 Apr 2022 | Kishore Kumar | James Peter | Jaipal Reddy
Math & Irregular | Nityant A. & James P. | 27 May – 01 Jun 2022 | Rohan Rao | Kishore Kumar | Ashish Kumar
Sudoku Mahabharat 2023[2]
Round | Author | Dates[7] | India 1 | India 2 | India 3
Standard & Odd Even | R. Kumaresan, Hemant Malani, Arun Iyer | 20 – 26 Jan 2023 | James Peter | Rohan Rao | Ashish Kumar
Outside & Hybrids | Nityant Agarwal, Akash Doulani, Puwar Dhruvarajsinh | 10 – 16 Feb 2023 | Rohan Rao | Kishore Kumar | Ashish Kumar
Math & Irregular | James Peter, Madhav Sankaranarayanan, Priyam Bhushan | 03 – 09 Mar 2023 | Rohan Rao | Kishore Kumar | Nityant Agarwal
Neighbours & Converse | Ashish Kumar, Kishore Sridharan, Pranav Kamesh S | 24 – 30 Mar 2023 | James Peter | Manjiri | Harsh Poddar
Puzzle Ramayan 2022[8]
Round | Author | Dates[5] | India 1 | India 2 | India 3
Classics | Prasanna Seshadri | 11 – 16 Feb 2022 | Swaroop Guggilam | Ashish Kumar | Rajesh Kumar
Casual & Word | Amit Sowani | 18 – 23 Mar 2022 | Rakesh Rai | Rohan Rao | Swaroop Guggilam
Regions & Evergreens | Ashish Kumar | 8 – 13 Apr 2022 | Swaroop Guggilam | Kishore Kumar | Rakesh Rai
Number Placement & Puzz.link | Nityant Agarwal | 06 – 11 May 2022 | Ashish Kumar | Rohan Rao | Swaroop Guggilam
Loops & Shading | Madhav Sankaranarayanan | 03 – 08 Jun 2022 | Rohan Rao | Ashish Kumar | Nityant Agarwal
MII & Object Placement | Priyam Bhushan & Chandrachud Nanduri | 17 – 22 Jun 2022 | Rohan Rao | Ashish Kumar | Nityant Agarwal
Puzzle Ramayan 2023[3] (the India 1–3 columns had not yet been filled in)
Round | Author | Dates[7]
Classics | Prasanna Seshadri | 13 – 19 Jan 2023
Evergreens & Rule Pool | Chandrachud Nanduri | 03 – 09 Feb 2023
Shading & Made In India | Ashish Kumar & Pranav Kamesh S | 24 Feb – 02 Mar 2023
Object Placement & Number Placement | Priyam Bhushan | 17 – 23 Mar 2023
Casual & Word | Madhav Sankaranarayanan | 07 – 13 Apr 2023
Regions & Loops | Nityant Agarwal | 21 – 27 Apr 2023
Daily Puzzle Test This contest type is online only and consists of 16 puzzles of the same type. Every puzzle will be live for 30 hours. Cumulative points from each day's solving will determine the final scores.[9] See also • Sudoku • Puzzle • World Sudoku Championship • World Puzzle Championship External links • Official website References 1. "About Logic Masters India". Logic Masters India. Retrieved 28 January 2022. 2. "Sudoku Mahabharat 2023 at Logic Masters India". logicmastersindia.com. Retrieved 2023-01-30. 3. "Puzzle Ramayan 2023 at Logic Masters India". logicmastersindia.com. Retrieved 2023-01-30. 4. "SM 2023 R1 - Standard & Odd Even - Instruction Booklet". Logic Masters India. Retrieved 31 January 2023. 5. All rounds start on a Friday and are open till Wednesday; all times in Indian Standard Time (GMT+05:30). 6. "Sudoku Mahabharat 2022 at Logic Masters India". logicmastersindia.com. Retrieved 2023-01-30. 7. All rounds start on a Friday and are open till Thursday; all times in Indian Standard Time (GMT+05:30). 8. "Puzzle Ramayan 2022 at Logic Masters India". logicmastersindia.com. Retrieved 2023-01-30. 9. "Nurimisaki Nomad (17th Sep to 7th Oct)". logicmastersindia.com. Retrieved 2023-01-31.
Sudoku code Sudoku codes are non-linear forward error correcting codes following the rules of sudoku puzzles, designed for an erasure channel. Based on this model, the transmitter sends a sequence of all symbols of a solved sudoku. The receiver either receives a symbol correctly or an erasure symbol to indicate that the symbol was not received. The decoder gets a matrix with missing entries and uses the constraints of sudoku puzzles to reconstruct a limited amount of erased symbols. Sudoku codes are not suitable for practical usage but are a subject of research. Questions like the rate and error performance are still unknown for general dimensions.[1] In a sudoku one can find missing information by using different techniques to reproduce the full puzzle. This method can be seen as decoding a sudoku-coded message that is sent over an erasure channel where some symbols got erased. By using the sudoku rules the decoder can recover the missing information. Sudokus can be modeled as a probabilistic graphical model and thus methods from decoding low-density parity-check codes, like belief propagation, can be used. Erasure channel model In the erasure channel model a symbol is either transmitted correctly with probability $1-p_{e}$ or erased with probability $p_{e}$. The channel introduces no errors, i.e. no channel input is changed to another symbol. As an example, consider the transmission of a $3\times 3$ sudoku code in which 5 of the 9 symbols are erased by the channel; the decoder is still able to reconstruct the message, i.e. the whole puzzle. Note that the symbols sent over the channel are not binary. For a binary channel the symbols (e.g. integers $\{1,\ldots ,9\}$) have to be mapped onto base 2. The binary erasure channel model however is not applicable because it erases only individual bits with some probability and not sudoku symbols. If the symbols of the sudoku are sent in packets, the channel can be described by a packet erasure channel model.
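The channel behaviour is straightforward to simulate. The following is a minimal Python sketch; the 3×3 codeword, the seed and the use of None as the erasure symbol are illustrative choices.

```python
import random

def erasure_channel(symbols, p_e, seed=1):
    """Deliver each symbol intact with probability 1 - p_e, else erase it."""
    rng = random.Random(seed)
    return [None if rng.random() < p_e else s for s in symbols]

codeword = [1, 2, 3, 2, 3, 1, 3, 1, 2]  # a solved 3x3 grid in row-scan order
received = erasure_channel(codeword, p_e=0.5)
print(received)  # e.g. [1, None, 3, None, None, 1, 3, 1, None]
```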
Puzzle description A sudoku is an $N\times N$ number-placement puzzle. It is filled in such a way that in each column, row and sub-grid, N distinct symbols occur exactly once. A typical alphabet is the set of the integers $\{1,\ldots ,N\}$. The size of the sub-grids limits the size of sudokus to $N=n^{2}$ with $n\in \mathbb {N} $. Every solved sudoku and every sub-grid of it is a Latin square, meaning every symbol occurs exactly once in each row and column. At the starting point (in this case after the erasure channel) the puzzle is only partially complete but has a unique solution. For channel codes other varieties of sudokus are also conceivable. Diagonal regions instead of square sub-grids can be used for performance investigations.[2] The diagonal sudoku has the advantage that its size can be chosen more freely: due to the sub-grid structure, normal sudokus can only be of size n², while diagonal sudokus have valid solutions for all odd $N$.[2] Sudoku codes are non-linear. In a linear code any linear combination of codewords gives a new valid codeword; this does not hold for sudoku codes. The symbols of a sudoku are from a finite alphabet (e.g. integers $\{1,\ldots ,9\}$). The constraints of sudoku codes are non-linear: all symbols within a constraint (row, column, sub-grid) must be different from any other symbol within this constraint. Hence there is no all-zero codeword in sudoku codes. Sudoku codes can be represented by a probabilistic graphical model in which they take the form of a low-density parity-check code.[3] Decoding with belief propagation There are several possible decoding methods for sudoku codes. Some algorithms are very specific developments for sudoku codes. Several methods are described in sudoku solving algorithms. Another efficient method is with dancing links. Decoding methods like belief propagation that are also used for low-density parity-check codes are of special interest. Performance analysis of these methods on sudoku codes can help to understand decoding problems for low-density parity-check codes better.[3] By modeling sudoku codes as a probabilistic graphical model, belief propagation can be used for sudoku codes. Belief propagation on the Tanner graph or factor graph to decode sudoku codes is discussed by Sayir[1] and Moon.[4] This method was originally designed for low-density parity-check codes. Due to its generality, belief propagation works not only with the classical $9\times 9$ sudoku but with a variety of them. LDPC decoding is a common use case for belief propagation; with slight modifications this approach can be used for solving sudoku codes.[4] The constraint satisfaction can be described on a Tanner graph as follows. $S_{n}$ denotes the entries of the sudoku in row-scan order. $C_{m}$ denotes the constraint functions: $m=1,...,9$ associated with rows, $m=10,...,18$ associated with columns and $m=19,...,27$ associated with the $3\times 3$ sub-grids of the sudoku. $C_{m}$ is defined as $C_{m}(s_{1},s_{2},\ldots ,s_{9})={\begin{cases}1,&{\text{if }}s_{1},s_{2},...,s_{9}{\text{ are distinct}}\\0,&{\text{otherwise.}}\end{cases}}$[4] Every cell $S_{n}$ is connected to 3 constraints: the row, column and sub-grid constraints. A specification of the general approach for belief propagation is suggested by Sayir:[1] The initial probability of a received symbol is either 1 for the observed symbol and 0 for all others, or uniformly distributed over the whole alphabet if the symbol is erased. For the belief propagation algorithm it is sufficient to transmit only a subset of possibilities instead of distributions, since the distribution is always uniform over the subset. The candidates for the erased symbols narrow down to a subset of the alphabet as symbols get excluded due to constraints. All values that are used by another cell in the constraint, and pairs that are shared among two other cells, and so on, are eliminated. Sudoku players use this method of logical exclusion to solve most sudoku puzzles.
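The narrowing of candidate subsets can be sketched in a few lines of Python. This shows only the simplest instance of the scheme, eliminating symbols that are already fixed in a cell's row, column or sub-grid (a full decoder would also exploit shared pairs and larger subsets); it runs on a 4×4 sudoku with 2×2 sub-grids, and the received grid, with None marking erasures, is an illustrative assumption.

```python
from itertools import product

N, n = 4, 2  # grid size and sub-grid size

def peers(r, c):
    # Cells sharing a row, column or sub-grid constraint with (r, c).
    box = lambda i: i // n * n
    for rr, cc in product(range(N), repeat=2):
        if (rr, cc) != (r, c) and (rr == r or cc == c or
                (box(rr) == box(r) and box(cc) == box(c))):
            yield rr, cc

def decode(grid):
    # Each cell keeps a subset of the alphabet; erased cells start full.
    cand = {(r, c): {grid[r][c]} if grid[r][c] is not None
            else set(range(1, N + 1))
            for r, c in product(range(N), repeat=2)}
    changed = True
    while changed:  # iterate elimination to a fixed point
        changed = False
        for cell, s in cand.items():
            if len(s) == 1:
                continue
            for p in peers(*cell):
                if len(cand[p]) == 1 and s & cand[p]:
                    s -= cand[p]  # a peer already uses that symbol
                    changed = True
    return cand

received = [[1, None, None, 4],
            [3, 4, None, 2],
            [None, 1, 4, 3],
            [4, 3, 2, None]]
cand = decode(received)
print(sorted(cand[(0, 1)]))  # [2]: every erased cell is recovered here
```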
Encoding The aim of error-correcting codes is to encode data in a way that makes it more resilient to errors in the transmission process. The encoder has to map data $U$ to a valid sudoku grid, from which the codeword $X$ can be taken, e.g. in row-scan order: $U=00101\ldots {\stackrel {\text{Encoder}}{\longrightarrow }}{\begin{array}{|c|c|c|}\hline 1&2&3\\\hline 2&3&1\\\hline 3&1&2\\\hline \end{array}}\Rightarrow X=1,2,3,2,3,1,3,1,2$ A standard $9\times 9$ sudoku has about 72.5 bits of information, as calculated in the next section. Information in Shannon's sense is the degree of randomness in a set of data. An ideal coin toss, for example, has an information content of $I=\log _{2}2=1$ bit. To represent the outcome of 72 coin tosses, 72 bits are necessary. One sudoku therefore contains about the same information as 72 coin tosses or a sequence of 72 bits. A sequence of 81 random symbols $\{1,\ldots ,9\}$ has $I=81\log _{2}9\approx 256.8$ bits of information. One sudoku codeword can thus be seen as 72.5 bits of information and 184.3 bits of redundancy. Theoretically a string of 72 bits can be mapped to one sudoku that is sent over the channel as a string of 81 symbols. However, there is no linear function that maps a string to a sudoku code. A suggested encoding approach by Sayir[5] is as follows (a sketch in code follows the list): • Start with an empty grid • Do the following for all entries sequentially • Use belief propagation to determine all valid symbols for the entry • If the cardinality of valid symbols is k>1 then convert the source randomness into a k-ary symbol and use it in the cell For a $4\times 4$ sudoku the first entry can be filled from a source of cardinality 4. In this example this is a 1. For the rest of this row, column and $2\times 2$ sub-grid this number is excluded from the possibilities in the belief propagation decoder. For the second cell only the numbers 2, 3, 4 are valid. The source has to be transformed into a uniform distribution between three possibilities and mapped to the valid numbers, and so on, until the grid is filled.
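A rough Python rendering of this sequential procedure is sketched below. Plain row, column and sub-grid elimination stands in for the belief propagation step that determines the valid symbols, a pseudo-random source models the data to be encoded, and the retry on a dead end is a crude stand-in for proper backtracking; all names are illustrative.

```python
import random
from itertools import product

N, n = 4, 2
rng = random.Random(0)  # stand-in for the data source

def valid_symbols(grid, r, c):
    used = {grid[rr][cc] for rr, cc in product(range(N), repeat=2)
            if grid[rr][cc] is not None and
               (rr == r or cc == c or (rr // n == r // n and cc // n == c // n))}
    return sorted(set(range(1, N + 1)) - used)

def encode():
    grid = [[None] * N for _ in range(N)]
    for r, c in product(range(N), repeat=2):
        choices = valid_symbols(grid, r, c)
        if not choices:      # greedy filling hit a dead end;
            return encode()  # retry (a real encoder would backtrack)
        # Convert source randomness into one of k = len(choices) symbols.
        grid[r][c] = choices[rng.randrange(len(choices))]
    return grid

for row in encode():
    print(row)  # the rows of a valid 4x4 sudoku grid
```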
Performance of sudoku codes The calculation of the rate of sudoku codes is not trivial. An example rate calculation for a $4\times 4$ sudoku is as follows. Filling line by line from the top left corner, only the first entry has the maximum information of $\log _{2}4=2$ bits. Every next entry cannot be any of the numbers used before, so the information reduces to $\log _{2}3$, $\log _{2}2$ and $0$ for the following entries, as they have to be among the remaining numbers. In the second line the information is additionally reduced by the area rule: cell $5$ in row-scan order can only be a $3$ or $4$, as the numbers $1$ and $2$ are already used in the sub-grid. The last row contains no information at all. Adding all information up one gets $\log _{2}(4\cdot 3\cdot 2^{5})\approx 8.58$ bits. The rate in this example is $R={\frac {\log _{2}(4\cdot 3\cdot 2^{5})}{16\log _{2}4}}\approx 0.27$. The exact number of possible sudoku grids, according to the mathematics of Sudoku, is $6{,}670{,}903{,}752{,}021{,}072{,}936{,}960$. With the total information of ${\begin{aligned}I_{\log _{9}}&=\log _{9}(6.67\times 10^{21})\approx 22.87\\I_{\log _{2}}&=\log _{2}(6.67\times 10^{21})\approx 72.50\,{\text{bits}}\end{aligned}}$ the average rate of a standard sudoku is $R=I_{\log _{9}}/9^{2}\approx 0.28$. The average number of possible entries for a cell is ${\sqrt[{81}]{6.67\times 10^{21}}}\approx 1.86$, or $\log _{2}{1.86}\approx 0.90\,{\text{bits}}$ of information per sudoku cell. Note that the rate may vary between codewords.[5] The minimum number of given entries that renders a unique solution was proven to be 17.[6] In the worst case as few as four missing entries can lead to ambiguous solutions. For an erasure channel it is very unlikely that 17 successful transmissions are enough to reproduce the puzzle; there are only about 50,000 known solutions with 17 given entries.[7] Density evolution Density evolution is a capacity analysis algorithm originally developed for low-density parity-check codes under belief propagation decoding.[8] Density evolution can also be applied to sudoku-type constraints.[1] One important simplification used in density evolution on LDPC codes is that it is sufficient to analyze only the all-one codeword. With the sudoku constraints, however, this is not a valid codeword, and unlike for linear codes the weight-distance equivalence property does not hold for non-linear codes. It is therefore necessary to compute density evolution recursions for every possible sudoku puzzle to get a precise performance analysis. A proposed simplification is to analyze the probability distribution of the cardinalities of messages instead of the probability distribution of the messages.[1] Density evolution is calculated on the entry nodes and the constraint nodes (compare the Tanner graph above). On the entry nodes one analyzes the cardinalities of the constraints. If for example the constraints have the cardinalities $(1,1)$ then the entry can only be one symbol. If the constraints have cardinalities $(2,2)$ then both constraints allow two different symbols. For both constraints the correct symbol is certainly contained; let us assume the correct symbol is $1$. The other symbol can be equal or different for the two constraints. If the symbols are different then the correct symbol is determined. If the second symbol is equal, say $2$, the output symbols are of cardinality $2$, i.e. the symbols $\left\{1,2\right\}$. Depending on the alphabet size ($q$), the probability of a unique output given the input cardinalities $(2,2)$ is $p_{1}^{(2,2)}=1-{\frac {1}{q-1}}$ and the probability of an output of cardinality 2 is $p_{2}^{(2,2)}={\frac {1}{q-1}}.$ For a standard $9\times 9$ sudoku this results in a probability of $7/8$ for a unique solution. An analogous calculation is done for all cardinality combinations. In the end the distributions of output cardinalities are summed up from the results. Note that the order of the input cardinalities is interchangeable, so calculating only non-decreasing constraint combinations is sufficient. For constraint nodes the procedure is somewhat similar and is described in the following example based on a $4\times 4$ standard sudoku. Inputs to the constraint nodes are the possible symbols of the connected entry nodes. Cardinality 1 means that the symbol of the source node is already determined. Again a non-decreasing analysis is sufficient. Let us assume the true output value is 4 and the inputs have cardinalities $(1,1,2)$ with the true symbols 1-2-3. The messages with cardinality 1 are $\left\{1\right\}$ and $\left\{2\right\}$. The message of cardinality 2 might be $\left\{1,3\right\}$, $\left\{2,3\right\}$ or $\left\{3,4\right\}$, as the true symbol 3 must be contained. In two of the three cases the output is the correct symbol 4 with cardinality 1: $\left\{1\right\}$, $\left\{2\right\}$, $\left\{1,3\right\}$ and $\left\{1\right\}$, $\left\{2\right\}$, $\left\{2,3\right\}$. In one of the three cases the output cardinality is 2: $\left\{1\right\}$, $\left\{2\right\}$, $\left\{3,4\right\}$; the output symbols in this case are $\left\{3,4\right\}$. The final output cardinality distribution can be expressed by summing over all possible input combinations. For a $4\times 4$ standard sudoku these are 64 combinations that can be grouped into 20 non-decreasing ones.[1] If the cardinality converges to 1 the decoding is error-free. To find the threshold, the erasure probability must be increased until the decoding error remains positive for any number of iterations. With the method of Sayir,[1] density evolution recursions can be used to calculate thresholds for sudoku codes up to an alphabet size of $q=8$. See also • Sudoku — main Sudoku article • Sudoku solving algorithms References 1. Sayir, Jossy; Atkins, Caroline (16 Jul 2014). "Density Evolution for SUDOKU codes on the Erasure Channel".
Turbo Codes and Iterative Information Processing (ISTC), 2014 8th International Symposium on. arXiv:1407.4328. Bibcode:2014arXiv1407.4328A. 2. Sayir, Jossy (21 October 2014). "SUDOKU Codes, a class of non-linear iteratively decodable codes" (PDF). Retrieved 20 Dec 2015. 3. Khan, Sheehan; Jabbari, Shahab; Jabbari, Shahin; Ghanbarinejad, Majid. "Solving sudoku using probabilistic graphical models" (PDF). Retrieved 20 December 2015. 4. Moon, T.K.; Gunther, J.H. (2006-07-01). "Multiple Constraint Satisfaction by Belief Propagation: An Example Using Sudoku". 2006 IEEE Mountain Workshop on Adaptive and Learning Systems. pp. 122–126. doi:10.1109/SMCALS.2006.250702. ISBN 978-1-4244-0166-6. S2CID 6131578. 5. Sayir, J.; Sarwar, J. (2015-06-01). "An investigation of SUDOKU-inspired non-linear codes with local constraints". 2015 IEEE International Symposium on Information Theory (ISIT). pp. 1921–1925. arXiv:1504.03946. doi:10.1109/ISIT.2015.7282790. ISBN 978-1-4673-7704-1. S2CID 5893535. 6. McGuire, Gary; Tugemann, Bastian; Civario, Gilles (2012-01-01). "There is no 16-Clue Sudoku: Solving the Sudoku Minimum Number of Clues Problem". arXiv:1201.0749 [cs.DS]. 7. "Minimum Sudoku". staffhome.ecm.uwa.edu.au. Retrieved 2015-12-20. 8. Chung, Sae-Young; Richardson, T.J.; Urbanke, R.L. (2001-02-01). "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation". IEEE Transactions on Information Theory. 47 (2): 657–670. CiteSeerX 10.1.1.106.7729. doi:10.1109/18.910580. ISSN 0018-9448.
Sue Chandler F. Sue Chandler, known as Sue (or Suzanne) Chandler, is a British schoolteacher and textbook writer, who, together with co-author Linda Bostock, wrote the "Bostock and Chandler" series of textbooks for advanced level mathematics in the UK. At the time she began the series, she was a full-time mathematics teacher at Southgate Technical College in Southgate, London. She eventually stopped teaching courses and focused on textbook writing. Her books have sold more than 6 million copies.[1][2] Selected publications Textbooks • L. Bostock & S. Chandler (1975), Applied Mathematics, Vol. 1, Stanley Thornes • L. Bostock & S. Chandler (1976), Applied Mathematics, Vol. 2, Stanley Thornes • L. Bostock & S. Chandler (1978), Pure Mathematics, Vol. 1, Stanley Thornes • L. Bostock & S. Chandler (1979), Pure Mathematics, Vol. 2, Stanley Thornes • L. Bostock & S. Chandler (1981), Mathematics – The Core Course for A Level (2nd ed.), Stanley Thornes • L. Bostock, S. Chandler, and C. P. Rourke (1982), Further Pure Mathematics, Stanley Thornes • L. Bostock & S. Chandler (1984), Mathematics – Mechanics and Probability, Stanley Thornes • L. Bostock & S. Chandler (1985), Further Mechanics and Probability, Stanley Thornes • L. Bostock & S. Chandler (1994), Core Maths for 'A' Level (2nd ed.), Nelson Thornes • L. Bostock & S. Chandler (2000), Core Maths for 'A' Level (3rd ed.), Nelson Thornes Other • Chandler, Sue (1997), "A-level mathematics examinations as a fair assessment of the needs of students post-GCSE intermediate and higher tiers", Teaching Mathematics and Its Applications, 16 (4): 157–159, doi:10.1093/teamat/16.4.157. References 1. Chandler, Sue (1992), "Mathematical People: Sue Chandler", The Mathematical Gazette, 76 (476): 327–329, doi:10.2307/3619188, JSTOR 3619188, S2CID 189661275. 2. Oxford University Press: Sue Chandler - Author of STP Mathematics (Accessed Jan 21 2016)
Sue Whitesides Sue Hays Whitesides is a Canadian mathematician and computer scientist, a professor emeritus of computer science and the chair of the computer science department at the University of Victoria in British Columbia, Canada.[1][2] Her research specializations include computational geometry and graph drawing.
(Photo caption: Sue Whitesides at the Workshop on Theory and Practice of Graph Drawing in 2012)
Alma mater: University of Wisconsin–Madison (PhD)
Thesis: Collineations of Projective Planes of Order 10 (1975)
Doctoral advisor: Richard Bruck
Discipline: Mathematics, computer science
Sub-discipline: Computational geometry, graph drawing
Institutions: University of Victoria; McGill University; Dartmouth College
Doctoral students: Vida Dujmović
Education and career Whitesides received her Ph.D. in mathematics in 1975 from the University of Wisconsin–Madison, under the supervision of Richard Bruck.[3] Before joining the University of Victoria faculty, she taught at Dartmouth College and McGill University;[3] at McGill, she was director of the School of Computer Science from 2005 to 2008.[4][5] Service Whitesides was the program chair for the 1998 International Symposium on Graph Drawing[6] and program co-chair for the 2012 Symposium on Computational Geometry.[7] References 1. Faculty profile Archived 2013-01-01 at archive.today, Univ. of Victoria, retrieved 2012-09-30. 2. "Affiliated faculty - University of Victoria". www.uvic.ca. Retrieved 2018-09-22. 3. Sue Hays Whitesides at the Mathematics Genealogy Project 4. Computer science summer camp: High school students spend week programming, McGill Reporter, September 8, 2005. 5. Morgan Stanley boosts info-tech sector Archived 2008-05-03 at the Wayback Machine, Montreal Gazette, May 2, 2008. 6. Graph Drawing 1998 web site Archived 2013-04-29 at the Wayback Machine, retrieved 2012-09-30. 7. SoCG 2012 web site, retrieved 2012-09-30. External links • Sue Whitesides at DBLP Bibliography Server
Eventually (mathematics) In the mathematical areas of number theory and analysis, an infinite sequence or a function is said to eventually have a certain property if it does not necessarily have that property for all of its ordered instances, but does have it for all instances beyond some point. The use of the term "eventually" can often be rephrased as "for sufficiently large numbers",[1] and can also be extended to the class of properties that apply to elements of any ordered set (such as sequences and subsets of $\mathbb {R} $). Notation The general form in which the phrase eventually (or sufficiently large) is found appears as follows: $P$ is eventually true for $x$ ($P$ is true for sufficiently large $x$), which is actually a shorthand for: $\exists a\in \mathbb {R} $ such that $P$ is true $\forall x\geq a$ or somewhat more formally: $\exists a\in \mathbb {R} :\forall x\in \mathbb {R} :x\geq a\Rightarrow P(x)$ Here $\forall $ and $\exists $ are the universal and existential quantifiers. This does not necessarily mean that any particular value for $a$ is known, but only that such an $a$ exists. The phrase "sufficiently large" should not be confused with the phrases "arbitrarily large" or "infinitely large". For more, see Arbitrarily large#Arbitrarily large vs. sufficiently large vs. infinitely large. Motivation and definition For an infinite sequence, one is often more interested in the long-term behavior of the sequence than the behavior it exhibits early on. In that case, one way to formally capture this concept is to say that the sequence possesses a certain property eventually, or equivalently, that the property is satisfied by one of its subsequences $(a_{n})_{n\geq N}$, for some $N\in \mathbb {N} $.[2] For example, the definition of a sequence of real numbers $(a_{n})$ converging to some limit $a$ is: For each positive number $\varepsilon $, there exists a natural number $N$ such that for all $n>N$, $\left\vert a_{n}-a\right\vert <\varepsilon $. When the term "eventually" is used as a shorthand for "there exists a natural number $N$ such that for all $n>N$", the convergence definition can be restated more simply as: For each positive number $\varepsilon >0$, eventually $\left\vert a_{n}-a\right\vert <\varepsilon $. Here, notice that the set of natural numbers that do not satisfy this property is a finite set; that is, the set is empty or has a maximum element. As a result, the use of "eventually" in this case is synonymous with the expression "for all but a finite number of terms" – a special case of the expression "for almost all terms" (although "almost all" can also be used to allow for infinitely many exceptions as well). At the basic level, a sequence can be thought of as a function with natural numbers as its domain, and the notion of "eventually" applies to functions on more general sets as well, in particular to those that have an ordering with no greatest element. More specifically, if $S$ is such a set and there is an element $s$ in $S$ such that the function $f$ is defined for all elements greater than $s$, then $f$ is said to have some property eventually if there is an element $x_{0}$ such that whenever $x>x_{0}$, $f(x)$ has the said property. This notion is used, for example, in the study of Hardy fields, which are fields made up of real functions, each of which has certain properties eventually.
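For a concrete sequence the shorthand can be unfolded by exhibiting a witness index $N$. A small Python sketch, with the sequence $a_{n}=1/n$ (converging to 0) and the tolerance as arbitrary illustrative choices:

```python
def witness_index(eps, horizon=10**6):
    """Smallest N found with |1/n - 0| < eps for all n > N.

    Since 1/n is decreasing, it suffices to check n = N + 1."""
    for N in range(1, horizon):
        if 1 / (N + 1) < eps:
            return N
    return None  # no witness found below the search horizon

print(witness_index(0.01))  # 100: eventually, |1/n| < 0.01
```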
Examples • "All primes greater than 2 are odd" can be written as "Eventually, all primes are odd." • Eventually, all primes are congruent to ±1 modulo 6. • The square of a prime is eventually congruent to 1 mod 24 (specifically, this is true for all primes greater than 3). • The factorial of a natural number eventually ends in the digit 0 (specifically, this is true for all natural numbers greater than 4). Implications When a sequence or a function has a property eventually, this can have useful implications when proving statements about that sequence. For example, in studying the asymptotic behavior of certain functions, it can be useful to know whether a function eventually behaves differently from what can be observed over any computationally accessible range, since otherwise such behavior could not be noticed. The term "eventually" can also be incorporated into many mathematical definitions to make them more concise. These include the definitions of some types of limits (as seen above), and the Big O notation for describing asymptotic behavior. Other uses in mathematics • A 3-manifold is called sufficiently large if it contains a properly embedded 2-sided incompressible surface. This property is the main requirement for a 3-manifold to be called a Haken manifold. • Temporal logic introduces an operator that can be used to express statements interpretable as: a certain property will eventually hold at a future moment in time. See also • Almost all • Big O notation • Mathematical jargon • Number theory References 1. Weisstein, Eric W. "Sufficiently Large". mathworld.wolfram.com. Retrieved 2019-11-20. 2. Weisstein, Eric W. "Eventually". mathworld.wolfram.com. Retrieved 2019-11-20.
Sug Woo Shin Sug Woo Shin is a professor of mathematics at the University of California, Berkeley working in number theory, automorphic forms, and the Langlands program.
Alma mater: Harvard University
Awards: Sloan Fellowship (2013)
Fields: Mathematics
Institutions: University of California, Berkeley; Massachusetts Institute of Technology; University of Chicago; Institute for Advanced Study
Thesis: Counting Points on Igusa Varieties (2007)
Doctoral advisor: Richard Taylor
Education From 1994 to 1996, while he was a student at Seoul Science High School, Shin won two gold medals (including a perfect score in 1995) and one bronze medal while representing South Korea at the International Mathematical Olympiad.[1][2] He graduated from Seoul National University with a Bachelor of Science degree in mathematics in 2000.[1] He received his PhD in mathematics from Harvard University in 2007 under the supervision of Richard Taylor.[3] Career Shin was a member of the Institute for Advanced Study from 2007 to 2008, a Dickson Instructor at the University of Chicago from 2008 to 2010, and again a member at the Institute for Advanced Study from 2010 to 2011.[1] He was an assistant professor of mathematics at the Massachusetts Institute of Technology from 2011 to 2014.[1] In 2014, Shin moved to the Department of Mathematics at the University of California, Berkeley as an associate professor.[1] In 2020, Shin became a full professor of mathematics at the University of California, Berkeley.[4] Shin is a visiting KIAS scholar at the Korea Institute for Advanced Study and a visiting associate member of the Pohang Mathematics Institute.[1] Research In 2011, Michael Harris[5] and Shin[6] resolved the dependencies on improved forms of the Arthur–Selberg trace formula in the conditional proofs of generalizations of the Sato–Tate conjecture by Harris (for products of non-isogenous elliptic curves)[7] and Barnet-Lamb–Geraghty–Harris–Taylor (for arbitrary non-CM holomorphic modular forms of weight greater than or equal to two).[8] Awards Shin received a Sloan Fellowship in 2013.[1] Selected publications • Scholze, Peter; Shin, Sug Woo (2012). "On the cohomology of compact unitary group Shimura varieties at ramified split places". Journal of the American Mathematical Society. 26 (1): 261–294. arXiv:1110.0232. doi:10.1090/S0894-0347-2012-00752-8. ISSN 0894-0347. S2CID 2084602. • Shin, Sug Woo (2011). "Galois representations arising from some compact Shimura varieties". Annals of Mathematics. Second Series. 173 (3): 1645–1741. doi:10.4007/annals.2011.173.3.9. ISSN 0003-486X. • Shin, Sug Woo (2009). "Counting points on Igusa varieties". Duke Mathematical Journal. 146 (3): 509–568. doi:10.1215/00127094-2009-004. ISSN 0012-7094. • Shin, Sug Woo; Templier, Nicolas (2016). "Sato–Tate theorem for families and low-lying zeros of automorphic L-functions". Inventiones Mathematicae. 203 (1): 1–177. Bibcode:2016InMat.203....1S. doi:10.1007/s00222-015-0583-y. ISSN 0020-9910. References 1. "Curriculum Vitae (Sug Woo Shin)" (PDF). January 2021. Retrieved March 10, 2021. 2. "Sug Woo Shin". International Mathematical Olympiad. Retrieved March 10, 2021. 3. Sug Woo Shin at the Mathematics Genealogy Project 4. "Sug Woo Shin". University of California, Berkeley. Retrieved December 30, 2020. 5. Harris, M. (2011). "An introduction to the stable trace formula". In Clozel, L.; Harris, M.; Labesse, J.-P.; Ngô, B. C. (eds.). The stable trace formula, Shimura varieties, and arithmetic applications. Vol. I: Stabilization of the trace formula.
Boston: International Press. pp. 3–47. ISBN 978-1-57146-227-5. 6. Shin, Sug Woo (2011). "Galois representations arising from some compact Shimura varieties". Annals of Mathematics. Second Series. 173 (3): 1645–1741. doi:10.4007/annals.2011.173.3.9. ISSN 0003-486X. 7. Carayol's Bourbaki seminar of 17 June 2007 8. Barnet-Lamb, Thomas; Geraghty, David; Harris, Michael; Taylor, Richard (2011). "A family of Calabi–Yau varieties and potential automorphy. II". Publ. Res. Inst. Math. Sci. 47 (1): 29–98. doi:10.2977/PRIMS/31. MR 2827723. External links • Sug Woo Shin at the Mathematics Genealogy Project
Sugeno integral In mathematics, the Sugeno integral, named after M. Sugeno,[1] is a type of integral with respect to a fuzzy measure. Let $(X,\Omega )$ be a measurable space and let $h:X\to [0,1]$ be an $\Omega $-measurable function. The Sugeno integral over the crisp set $A\subseteq X$ of the function $h$ with respect to the fuzzy measure $g$ is defined by: $\int _{A}h(x)\circ g={\sup _{E\subseteq X}}\left[\min \left(\min _{x\in E}h(x),g(A\cap E)\right)\right]={\sup _{\alpha \in [0,1]}}\left[\min \left(\alpha ,g(A\cap F_{\alpha })\right)\right]$ where $F_{\alpha }=\left\{x|h(x)\geq \alpha \right\}$. The Sugeno integral over the fuzzy set ${\tilde {A}}$ of the function $h$ with respect to the fuzzy measure $g$ is defined by: $\int _{A}h(x)\circ g=\int _{X}\left[h_{A}(x)\wedge h(x)\right]\circ g$ where $h_{A}(x)$ is the membership function of the fuzzy set ${\tilde {A}}$. Usage and relationships The Sugeno integral is related to the h-index.[2]
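On a finite space the supremum can be computed directly, since only the values taken by $h$ matter as thresholds $\alpha $. A minimal Python sketch, with an illustrative three-element space and the normalized counting measure as the fuzzy measure; with raw citation counts as $h$ and the plain counting measure as $g$, the same max-min formula reproduces the h-index relationship cited above.

```python
def sugeno_integral(X, h, g):
    """Sugeno integral of h w.r.t. the fuzzy measure g on a finite set X."""
    result = 0.0
    for alpha in sorted({h(x) for x in X}):
        F_alpha = frozenset(x for x in X if h(x) >= alpha)  # level set
        result = max(result, min(alpha, g(F_alpha)))
    return result

X = {"a", "b", "c"}
h = {"a": 0.2, "b": 0.9, "c": 0.6}.get  # an illustrative measurable function
g = lambda S: len(S) / len(X)           # normalized counting measure

print(sugeno_integral(X, h, g))  # 0.6: attained at alpha = 0.6, g({b,c}) = 2/3
```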
References 1. Sugeno, M. (1974). Theory of fuzzy integrals and its applications. Doctoral thesis, Tokyo Institute of Technology. 2. Mesiar, Radko; Gagolewski, Marek (December 2016). "H-Index and Other Sugeno Integrals: Some Defects and Their Compensation". IEEE Transactions on Fuzzy Systems. 24 (6): 1668–1672. doi:10.1109/TFUZZ.2016.2516579. ISSN 1941-0034. • Gunther Schmidt (2006) Relational measures and integration, Lecture Notes in Computer Science #4136, pages 343–357, Springer books • M. Sugeno & T. Murofushi (1987) "Pseudo-additive measures and integrals", Journal of Mathematical Analysis and Applications 122: 197–222 MR0874969
Layered graph drawing Layered graph drawing or hierarchical graph drawing is a type of graph drawing in which the vertices of a directed graph are drawn in horizontal rows or layers with the edges generally directed downwards.[1][2][3] It is also known as Sugiyama-style graph drawing after Kozo Sugiyama, who first developed this drawing style.[4] The ideal form for a layered drawing would be an upward planar drawing, in which all edges are oriented in a consistent direction and no pairs of edges cross. However, graphs often contain cycles, minimizing the number of inconsistently-oriented edges is NP-hard, and minimizing the number of crossings is also NP-hard, so layered graph drawing systems typically apply a sequence of heuristics that reduce these types of flaws in the drawing without guaranteeing to find a drawing with the minimum number of flaws. Layout algorithm The construction of a layered graph drawing proceeds in a sequence of steps (a code sketch of two of the core steps appears after this list): • If the input graph is not already a directed acyclic graph, a set of edges is identified the reversal of which will make it acyclic. Finding the smallest possible set of edges is the NP-complete feedback arc set problem, so often greedy heuristics are used here in place of exact optimization algorithms.[1][2][3][5][6][7] The exact solution to this problem can be formulated using integer programming.[3] Alternatively, if the number of reversed edges is very small, these edges can be found by a fixed-parameter-tractable algorithm.[8] • The vertices of the directed acyclic graph resulting from the first step are assigned to layers, such that each edge goes from a higher layer to a lower layer. The goals of this stage are to simultaneously produce a small number of layers, few edges that span large numbers of layers, and a balanced assignment of vertices to layers.[1][2][3] For instance, by Mirsky's theorem, assigning vertices by layers according to the length of the longest path starting from each vertex produces an assignment with the minimum possible number of layers.[1][3] The Coffman–Graham algorithm may be used to find a layering with a predetermined limit on the number of vertices per layer and approximately minimizing the number of layers subject to that constraint.[1][2][3] Minimizing the width of the widest layer is NP-hard but may be solved by branch-and-cut or approximated heuristically.[3] Alternatively, the problem of minimizing the total number of layers spanned by the edges (without any limits on the number of vertices per level) may be solved using linear programming.[9] Integer programming procedures, although more time-consuming, may be used to combine edge length minimization with limits on the number of vertices per level.[10] • Edges that span multiple layers are replaced by paths of dummy vertices so that, after this step, each edge in the expanded graph connects two vertices on adjacent layers of the drawing.[1][2] • As an optional step, a layer of edge concentrator vertices (or confluent junctions) may be imposed between two existing vertex layers, reducing the edge density by replacing complete bipartite subgraphs by stars through these edge concentrators.[3][11][12] • The vertices within each layer are permuted in an attempt to reduce the number of crossings among the edges connecting it to the previous layer.[1][2][3] Finding the minimum number of crossings or finding a maximum crossing-free set of edges is NP-complete, even when ordering a single layer at a time in this way,[13][14] so again it is typical to resort to heuristics, such as placing each vertex at a position determined by finding the average or median of the positions of its neighbors on the previous level and then swapping adjacent pairs as long as that improves the number of crossings.[1][2][9][14][15] Alternatively, the ordering of the vertices in one layer at a time may be chosen using an algorithm that is fixed-parameter tractable in the number of crossings between it and the previous layer.[3][16] • Each vertex is assigned a coordinate within its layer, consistent with the permutation calculated in the previous step.[1][2] Considerations in this step include placing dummy nodes on a line between their two neighbors to prevent unnecessary bends, and placing each vertex in a centered position with respect to its neighbors.[3] Sugiyama's original work proposed a quadratic programming formulation of this step; a later method of Brandes and Köpf takes linear time and guarantees at most two bends per edge.[3][17] • The edges reversed in the first step of the algorithm are returned to their original orientations, the dummy vertices are removed from the graph and the vertices and edges are drawn. To avoid intersections between vertices and edges, edges that span multiple layers of the drawing may be drawn as polygonal chains or spline curves passing through each of the positions assigned to the dummy vertices along the edge.[1][2][9]
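A compact Python sketch of two of these steps, under simplifying assumptions: the input is already acyclic, dummy vertices are omitted, and only a single pass of the median heuristic is made; the example graph is illustrative.

```python
from functools import lru_cache

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}

def layers(graph):
    # Layer assignment by longest path: layer(v) is the length of the
    # longest path ending at v, which also yields the minimum possible
    # number of layers (the dual form of the longest-path rule above).
    @lru_cache(maxsize=None)
    def depth(v):
        preds = [u for u in graph if v in graph[u]]
        return 1 + max((depth(u) for u in preds), default=-1)
    result = {}
    for v in graph:
        result.setdefault(depth(v), []).append(v)
    return [result[i] for i in sorted(result)]

def median_order(upper, lower, graph):
    # Order `lower` by the median position of each vertex's neighbors
    # in the already-ordered `upper` layer (one-sided crossing reduction).
    pos = {v: i for i, v in enumerate(upper)}
    def median(v):
        ps = sorted(pos[u] for u in upper if v in graph[u])
        return ps[len(ps) // 2] if ps else 0
    return sorted(lower, key=median)

L = layers(graph)
print(L)                                # [['a'], ['b', 'c'], ['d', 'e']]
print(median_order(L[1], L[2], graph))  # ['d', 'e'] for this small DAG
```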
Implementations In its simplest form, layered graph drawing algorithms may require O(mn) time in graphs with n vertices and m edges, because of the large number of dummy vertices that may be created. However, for some variants of the algorithm, it is possible to simulate the effect of the dummy vertices without actually constructing them explicitly, leading to a near-linear time implementation.[18] The "dot" tool in Graphviz produces layered drawings.[9] A layered graph drawing algorithm is also included in Microsoft Automatic Graph Layout[19] and in Tulip.[20] Variations Although layered drawings are typically constructed with vertices in rows and edges proceeding from top to bottom, they may instead be drawn with vertices in columns and edges proceeding from left to right.[21] The same algorithmic framework has also been applied to radial layouts in which the graphs are arranged in concentric circles around some starting node[3][22] and to three-dimensional layered drawings of graphs.[3][23] In layered graph drawings with many long edges, edge clutter may be reduced by grouping sets of edges into bundles and routing them together through the same set of dummy vertices.[24] Similarly, for drawings with many edges crossing between pairs of consecutive layers, the edges in maximal bipartite subgraphs may be grouped into confluent bundles.[25] Drawings in which the vertices are arranged in layers may be constructed by algorithms that do not follow Sugiyama's framework. For instance, it is possible to tell whether an undirected graph has a drawing with at most k crossings, using h layers, in an amount of time that is polynomial for any fixed choice of k and h, using the fact that the graphs that have drawings of this type have bounded pathwidth.[26] For layered drawings of concept lattices, a hybrid approach combining Sugiyama's framework with additive methods (in which each vertex represents a set and the position of the vertex is a sum of vectors representing elements in the set) may be used.
In this hybrid approach, the vertex permutation and coordinate assignment phases of the algorithm are replaced by a single phase in which the horizontal position of each vertex is chosen as a sum of scalars representing the elements for that vertex.[27] Layered graph drawing methods have also been used to provide an initial placement for force-directed graph drawing algorithms.[28] A small computational sketch of the layer-assignment and crossing-reduction steps appears after the reference list below.
References
1. Di Battista, Giuseppe; Eades, Peter; Tamassia, Roberto; Tollis, Ioannis G. (1998), "Layered Drawings of Digraphs", Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall, pp. 265–302, ISBN 978-0-13-301615-4. 2. Bastert, Oliver; Matuszewski, Christian (2001), "Layered drawings of digraphs", in Kaufmann, Michael; Wagner, Dorothea (eds.), Drawing Graphs: Methods and Models, Lecture Notes in Computer Science, vol. 2025, Springer-Verlag, pp. 87–120, doi:10.1007/3-540-44969-8_5, ISBN 978-3-540-42062-0. 3. Healy, Patrick; Nikolov, Nikola S. (2014), "Hierarchical Graph Drawing", in Tamassia, Roberto (ed.), Handbook of Graph Drawing and Visualization, CRC Press, pp. 409–453. 4. Sugiyama, Kozo; Tagawa, Shôjirô; Toda, Mitsuhiko (1981), "Methods for visual understanding of hierarchical system structures", IEEE Transactions on Systems, Man, and Cybernetics, SMC-11 (2): 109–125, doi:10.1109/TSMC.1981.4308636, MR 0611436, S2CID 8367756. 5. Berger, B.; Shor, P. (1990), "Approximation algorithms for the maximum acyclic subgraph problem", Proceedings of the 1st ACM-SIAM Symposium on Discrete Algorithms (SODA'90), pp. 236–243, ISBN 9780898712513. 6. Eades, P.; Lin, X.; Smyth, W. F. (1993), "A fast and effective heuristic for the feedback arc set problem", Information Processing Letters, 47 (6): 319–323, doi:10.1016/0020-0190(93)90079-O. 7. Eades, P.; Lin, X. (1995), "A new heuristic for the feedback arc set problem", Australian Journal of Combinatorics, 12: 15–26. 8. Chen, Jianer; Liu, Yang; Lu, Songjian; O'Sullivan, Barry; Razgon, Igor (2008), "A fixed-parameter algorithm for the directed feedback vertex set problem", Journal of the ACM, 55 (5): 1, doi:10.1145/1411509.1411511, S2CID 1547510. 9. Gansner, E. R.; Koutsofios, E.; North, S. C.; Vo, K.-P. (1993), "A technique for drawing directed graphs", IEEE Transactions on Software Engineering, 19 (3): 214–230, doi:10.1109/32.221135. 10. Healy, Patrick; Nikolov, Nikola S. (2002), "How to layer a directed acyclic graph", Graph Drawing: 9th International Symposium, GD 2001, Vienna, Austria, September 23–26, 2001, Revised Papers, Lecture Notes in Computer Science, vol. 2265, Springer-Verlag, pp. 16–30, doi:10.1007/3-540-45848-4_2, ISBN 978-3-540-43309-5, MR 1962416. 11. Newbery, F. J. (1989), "Edge concentration: a method for clustering directed graphs", Proceedings of the 2nd International Workshop on Software Configuration Management (SCM '89), Princeton, New Jersey, USA, Association for Computing Machinery, pp. 76–85, doi:10.1145/72910.73350, ISBN 0-89791-334-5, S2CID 195722969. 12. Eppstein, David; Goodrich, Michael T.; Meng, Jeremy Yu (2004), "Confluent layered drawings", in Pach, János (ed.), Proc. 12th Int. Symp. Graph Drawing (GD 2004), Lecture Notes in Computer Science, vol. 3383, Springer-Verlag, pp. 184–194, arXiv:cs.CG/0507051, doi:10.1007/s00453-006-0159-8, S2CID 1169. 13. Eades, Peter; Whitesides, Sue (1994), "Drawing graphs in two layers", Theoretical Computer Science, 131 (2): 361–374, doi:10.1016/0304-3975(94)90179-1. 14. Eades, Peter; Wormald, Nicholas C.
(1994), "Edge crossings in drawings of bipartite graphs", Algorithmica, 11 (4): 379–403, doi:10.1007/BF01187020, S2CID 22476033. 15. Mäkinen, E. (1990), "Experiments on drawing 2-level hierarchical graphs", International Journal of Computer Mathematics, 36 (3–4): 175–181, doi:10.1080/00207169008803921. 16. Dujmović, Vida; Fernau, Henning; Kaufmann, Michael (2008), "Fixed parameter algorithms for one-sided crossing minimization revisited", Journal of Discrete Algorithms, 6 (2): 313–323, doi:10.1016/j.jda.2006.12.008, MR 2418986. 17. Brandes, Ulrik; Köpf, Boris (2002), "Fast and simple horizontal coordinate assignment", Graph drawing (Vienna, 2001), Lecture Notes in Computer Science, vol. 2265, Berlin: Springer, pp. 31–44, doi:10.1007/3-540-45848-4_3, ISBN 978-3-540-43309-5, MR 1962417. 18. Eiglsperger, Markus; Siebenhaller, Martin; Kaufmann, Michael (2005), "An efficient implementation of Sugiyama's algorithm for layered graph drawing", Graph Drawing, 12th International Symposium, GD 2004, New York, NY, USA, September 29-October 2, 2004, Revised Selected Papers, Lecture Notes in Computer Science, vol. 3383, Springer-Verlag, pp. 155–166, doi:10.1007/978-3-540-31843-9_17, ISBN 978-3-540-24528-5. 19. Nachmanson, Lev; Robertson, George; Lee, Bongshin (2008), "Drawing Graphs with GLEE" (PDF), in Hong, Seok-Hee; Nishizeki, Takao; Quan, Wu (eds.), Graph Drawing, 15th International Symposium, GD 2007, Sydney, Australia, September 24–26, 2007, Revised Papers, Lecture Notes in Computer Science, vol. 4875, Springer-Verlag, pp. 389–394, doi:10.1007/978-3-540-77537-9_38, ISBN 978-3-540-77536-2. 20. Auber, David (2004), "Tulip – A Huge Graph Visualization Framework", in Jünger, Michael; Mutzel, Petra (eds.), Graph Drawing Software, Springer-Verlag, ISBN 978-3-540-00881-1. 21. Baburin, Danil E. (2002), "Some modifications of Sugiyama approach", Graph Drawing, 10th International Symposium, GD 2002, Irvine, CA, USA, August 26–28, 2002, Revised Papers, Lecture Notes in Computer Science, vol. 2528, Springer-Verlag, pp. 366–367, doi:10.1007/3-540-36151-0_36, ISBN 978-3-540-00158-4. 22. Bachmaier, Christian (2007), "A radial adaptation of the Sugiyama framework for visualizing hierarchical information", IEEE Transactions on Visualization and Computer Graphics, 13 (3): 583–594, doi:10.1109/TVCG.2007.1000, PMID 17356223, S2CID 9852297. 23. Hong, Seok-Hee; Nikolov, Nikola S. (2005), "Layered drawings of directed graphs in three dimensions", Proceedings of the 2005 Asia-Pacific Symposium on Information Visualisation (APVis '05), Conferences in Research and Practice in Information Technology, vol. 45, pp. 69–74, ISBN 9781920682279. 24. Pupyrev, Sergey; Nachmanson, Lev; Kaufmann, Michael (2011), "Improving layered graph layouts with edge bundling", Graph Drawing, 18th International Symposium, GD 2010, Konstanz, Germany, September 21-24, 2010, Revised Selected Papers, Lecture Notes in Computer Science, vol. 6502, Springer-Verlag, pp. 329–340, doi:10.1007/978-3-642-18469-7_30, ISBN 978-3-642-18468-0. 25. Eppstein, David; Goodrich, Michael T.; Meng, Jeremy Yu (2007), "Confluent layered drawings", Algorithmica, 47 (4): 439–452, arXiv:cs/0507051, doi:10.1007/s00453-006-0159-8, S2CID 1169. 26. Dujmović, V.; Fellows, M.R.; Kitching, M.; Liotta, G.; McCartin, C.; Nishimura, N.; Ragde, P.; Rosamond, F.; Whitesides, S. (2008), "On the parameterized complexity of layered graph drawing", Algorithmica, 52 (2): 267–292, doi:10.1007/s00453-007-9151-1, S2CID 2298634. 27. Cole, Richard (2001). 
"Automated layout of concept lattices using layered diagrams and additive diagrams". Proceedings 24th Australian Computer Science Conference. ACSC 2001. Proceedings of the 24th Australasian Conference on Computer Science (ACSC '01). pp. 47–53. doi:10.1109/ACSC.2001.906622. ISBN 0-7695-0963-0. S2CID 7143873. {{cite book}}: |journal= ignored (help). 28. Benno Schwikowski; Peter Uetz & Stanley Fields (2000). "A network of protein−protein interactions in yeast". Nature Biotechnology. 18 (12): 1257–1261. doi:10.1038/82360. PMID 11101803. S2CID 3009359.
Richard Swineshead
Richard Swineshead (also Suisset, Suiseth, etc.; fl. c. 1340 – 1354) was an English mathematician, logician, and natural philosopher. He was perhaps the greatest of the Oxford Calculators of Merton College, where he was a fellow certainly by 1344 and possibly by 1340. His magnum opus was a series of treatises known as the Liber calculationum ("Book of Calculations"), written c. 1350, which earned him the nickname of The Calculator.[1] Robert Burton (d. 1640) wrote in The Anatomy of Melancholy that "Scaliger and Cardan admire Suisset the calculator, qui pene modum excessit humani ingenii [whose talents were almost superhuman]".[2] Gottfried Leibniz wrote in a letter of 1714: "Il y a eu autrefois un Suisse, qui avoit mathématisé dans la Scholastique: ses Ouvrages sont peu connus; mais ce que j'en ai vu m'a paru profond et considérable." ("There was once a Suisse, who did mathematics belonging to scholasticism; his works are little known, but what I have seen of them seemed to me profound and relevant.")[3][4] Leibniz even had a copy of one of Swineshead's treatises made from an edition in the Bibliothèque du Roi in Paris.[5] Notes 1. Boyer 1959, p. 69. 2. Jackson, Holbrook (ed.) (1932), The Anatomy of Melancholy, i.77 (in "Democritus Junior to the Reader"). 3. Letter to M. M. Remond de Montmorency, quoted in Lardet, Pierre (2003) "Les ambitions de Jules–César Scaliger latiniste et philosophe (1484–1558) et sa réception posthume dans l'aire germanique de Gesner et Schegk à Leibniz et à Kant", in Kessler & Kuhn (edd.), Germania latina – Latinitas teutonica, pp. 157–194. 4. Boyer 1959, p. 88, says the following: As late as the end of the seventeenth century the reputation of Calculator was such that Leibniz on several occasions referred to him as almost the first to apply mathematics to physics and as one who introduced mathematics into philosophy. 5. Duchesneau, François (1998) "Leibniz's Theoretical Shift in the Phoranomus and Dynamica de Potentia", Perspectives on Science 6, p. 105. References • Boyer, Carl Benjamin (1959). A History of the Calculus and Its Conceptual Development. Dover. ISBN 0-486-60509-4. • Molland, George (2004) "Swineshead, Richard", Oxford Dictionary of National Biography
Suita conjecture
In mathematics, the Suita conjecture is a conjecture related to the theory of Riemann surfaces, the boundary behavior of conformal maps, the theory of the Bergman kernel, and the theory of $L^{2}$ extension. The conjecture states the following:

Suita (1972): Let R be an open Riemann surface which admits a nontrivial Green function $G_{R}$. Let $w$ be a local coordinate on a neighborhood $V_{z_{0}}$ of $z_{0}\in R$ satisfying $w(z_{0})=0$. Let $\kappa _{R}$ be the Bergman kernel for holomorphic (1, 0)-forms on R. Define $B_{R}(z)|dw|^{2}:=\kappa _{R}(z)|_{V_{z_{0}}}$ and $B_{R}(z,{\overline {t}})\,dw\otimes d{\overline {t}}:=\kappa _{R}(z,{\overline {t}})$. Let $c_{\beta }(z)$ be the logarithmic capacity, defined locally on R by $c_{\beta }(z_{0}):=\exp \lim _{z\to z_{0}}(G_{R}(z,z_{0})-\log |w(z)|)$. Then the inequality $(c_{\beta }(z_{0}))^{2}\leq \pi B_{R}(z_{0})$ holds on every open Riemann surface R; moreover, if equality holds, then either $B_{R}\equiv 0$ or R is conformally equivalent to the unit disc less a (possibly empty) closed set of inner capacity zero.[1]

It was first proved by Błocki (2013) for bounded plane domains, and then completely, in a more general version, by Guan & Zhou (2015). Another proof of the Suita conjecture, together with some examples of its generalization to several complex variables (the higher-dimensional Suita conjecture), was given in Błocki (2014a) and Błocki & Zwonek (2020). The higher-dimensional Suita conjecture fails in non-pseudoconvex domains.[2] The conjecture was proved through the optimal estimate in the Ohsawa–Takegoshi $L^{2}$ extension theorem.

Notes
1. Guan & Zhou (2015) 2. Nikolov (2015), Nikolov & Thomas (2021)
References
• Błocki, Zbigniew (2013). "Suita conjecture and the Ohsawa-Takegoshi extension theorem". Inventiones Mathematicae. 193 (1): 149–158. Bibcode:2013InMat.193..149B. doi:10.1007/s00222-012-0423-2. S2CID 9209213. • Błocki, Zbigniew (2014a). "A Lower Bound for the Bergman Kernel and the Bourgain-Milman Inequality". Geometric Aspects of Functional Analysis: Israel Seminar (GAFA) 2011-2013. Lecture Notes in Mathematics. Vol. 2116. pp. 53–63. doi:10.1007/978-3-319-09477-9_4. ISBN 978-3-319-09476-2. • Błocki, Zbigniew (2014b). "Cauchy–Riemann meet Monge–Ampère". Bulletin of Mathematical Sciences. 4 (3): 433–480. doi:10.1007/s13373-014-0058-2. S2CID 53582451. • Błocki, Zbigniew (2017). "Suita Conjecture from the One-dimensional Viewpoint" (PDF). Analysis Meets Geometry. Trends in Mathematics. pp. 127–133. doi:10.1007/978-3-319-52471-9_9. ISBN 978-3-319-52469-6. S2CID 125704662. • Błocki, Zbigniew; Zwonek, Włodzimierz (2020). "Generalizations of the Higher Dimensional Suita Conjecture and Its Relation with a Problem of Wiegerinck". The Journal of Geometric Analysis. 30 (2): 1259–1270. arXiv:1811.02977. doi:10.1007/s12220-019-00343-8. S2CID 119622596. • Guan, Qi'an; Zhou, Xiangyu (2015). "A solution of an $L^{2}$ extension problem with optimal estimate and applications". Annals of Mathematics. 181 (3): 1139–1208. arXiv:1310.7169. doi:10.4007/annals.2015.181.3.6. JSTOR 24523356. S2CID 56205818. • Nikolov, Nikolai (2015). "Two remarks on the Suita conjecture". Annales Polonici Mathematici. 113: 61–63. arXiv:1411.6601. doi:10.4064/ap113-1-3. S2CID 119147234. • Nikolov, Nikolai; Thomas, Pascal J. (2021). "Growth of Sibony metric and Bergman kernel for domains with low regularity". Journal of Mathematical Analysis and Applications. 499: 125018. doi:10.1016/j.jmaa.2021.125018. S2CID 218581510.
• Bousfield Classes and Ohkawa's Theorem. Springer Proceedings in Mathematics & Statistics. Vol. 309. 2020. doi:10.1007/978-981-15-1588-0. ISBN 978-981-15-1587-3. S2CID 242194764. • Ohsawa, Takeo (2017). "On the extension of $L^{2}$ holomorphic functions VIII — a remark on a theorem of Guan and Zhou". International Journal of Mathematics. 28 (9). doi:10.1142/S0129167X17400055. • Suita, Nobuyuki (1972). "Capacities and kernels on Riemann surfaces". Archive for Rational Mechanics and Analysis. 46 (3): 212–217. Bibcode:1972ArRMA..46..212S. doi:10.1007/BF00252460. S2CID 123118650.
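As a consistency check of the statement above, one can verify by hand that the unit disc $D$ realizes equality (a sketch using the standard Green function and Bergman kernel of the disc, in the normalizations of the definitions above): the Green function with pole at the origin is $G_{D}(z,0)=\log |z|$, and the Bergman kernel gives $B_{D}(0)=1/\pi $. Taking the local coordinate $w=z$, the logarithmic capacity is $c_{\beta }(0)=\exp \lim _{z\to 0}(G_{D}(z,0)-\log |z|)=e^{0}=1$, so $(c_{\beta }(0))^{2}=1=\pi B_{D}(0)$, and equality holds, consistent with the equality case in which R is conformally equivalent to the unit disc.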
Idoneal number
In mathematics, Euler's idoneal numbers (also called suitable numbers or convenient numbers) are the positive integers D such that any integer expressible in only one way as x² ± Dy² (where x² is relatively prime to Dy²) is a prime power or twice a prime power. In particular, a number that has two distinct representations as a sum of two squares is composite. Every idoneal number generates a set containing infinitely many primes and missing infinitely many other primes.
Definition
A positive integer n is idoneal if and only if it cannot be written as ab + bc + ac for distinct positive integers a, b, and c.[1] It is sufficient to consider the set { n + k² | 3k² ≤ n ∧ gcd(n, k) = 1 }; if all these numbers are of the form p, p², 2p or 2^s for some integer s, where p is a prime, then n is idoneal.[2]
Conjecturally complete listing
Unsolved problem in mathematics: Are there 65, 66 or 67 idoneal numbers?
The 65 idoneal numbers found by Leonhard Euler and Carl Friedrich Gauss and conjectured to be the only such numbers are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16, 18, 21, 22, 24, 25, 28, 30, 33, 37, 40, 42, 45, 48, 57, 58, 60, 70, 72, 78, 85, 88, 93, 102, 105, 112, 120, 130, 133, 165, 168, 177, 190, 210, 232, 240, 253, 273, 280, 312, 330, 345, 357, 385, 408, 462, 520, 760, 840, 1320, 1365, and 1848 (sequence A000926 in the OEIS). Results of Peter J. Weinberger from 1973[3] imply that at most two other idoneal numbers exist, and that the list above is complete if the generalized Riemann hypothesis holds (some sources incorrectly claim that Weinberger's results imply that there is at most one other idoneal number).[4]
See also
• List of unsolved problems in mathematics
Notes
1. Eric Rains, OEIS: A000926 Comments on A000926, December 2007. 2. Roberts, Joe: The Lure of the Integers. The Mathematical Association of America, 1992. 3. Acta Arith., 22 (1973), pp. 117–124. 4. Kani, Ernst (2011). "Idoneal numbers and some generalizations" (PDF). Annales des Sciences Mathématiques du Québec. 35 (2). Corollary 23, Remark 24.
References
• Z. I. Borevich and I. R. Shafarevich, Number Theory. Academic Press, NY, 1966, pp. 425–430. • D. A. Cox (1989). Primes of the Form x² + ny². Wiley-Interscience. p. 61. ISBN 0-471-50654-0. • L. Euler, "An illustration of a paradox about the idoneal, or suitable, numbers", 1806. • G. Frei, Euler's convenient numbers, Math. Intell. Vol. 7 No. 3 (1985), 55–58 and 64. • O-H. Keller, Ueber die "Numeri idonei" von Euler, Beitraege Algebra Geom., 16 (1983), 79–91. [Math. Rev. 85m:11019] • G. B. Mathews, Theory of Numbers, Chelsea, no date, p. 263. • P. Ribenboim, "Galimatias Arithmeticae", in Mathematics Magazine 71(5) 339 1998 MAA, or 'My Numbers, My Friends', Chap. 11, Springer-Verlag 2000 NY. • J. Steinig, On Euler's idoneal numbers, Elemente Math., 21 (1966), 73–88. • A. Weil, Number theory: an approach through history; from Hammurapi to Legendre, Birkhaeuser, Boston, 1984; see p. 188. • P. Weinberger, Exponents of the class groups of complex quadratic fields, Acta Arith., 22 (1973), 117–124. • Ernst Kani, Idoneal Numbers And Some Generalizations, Ann. Sci. Math. Québec 35, No 2, (2011), 197–227.
External links
• K. S. Brown, Mathpages, Numeri Idonei • M. Waldschmidt, Open Diophantine problems • Weisstein, Eric W. "Idoneal Number". MathWorld.
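The ab + bc + ac characterization in the definition above lends itself to a direct computational check. The following minimal Python sketch (the search bounds and function name are illustrative assumptions, not from the sources cited) tests idoneality via that criterion and reproduces Euler's list:

```python
def is_idoneal(n):
    """Rains's criterion: n is idoneal iff n != a*b + b*c + c*a
    for any integers 0 < a < b < c."""
    a = 1
    # Smallest value reachable with this a uses b = a + 1, c = a + 2.
    while a * (a + 1) + (a + 1) * (a + 2) + a * (a + 2) <= n:
        b = a + 1
        while a * b + (b + 1) * (a + b) <= n:  # smallest candidate c is b + 1
            c, r = divmod(n - a * b, a + b)    # solve a*b + c*(a + b) = n for c
            if r == 0 and c > b:
                return False
            b += 1
        a += 1
    return True

euler_list = [n for n in range(1, 1849) if is_idoneal(n)]
print(len(euler_list))   # 65
print(euler_list[-5:])   # [760, 840, 1320, 1365, 1848]
```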
Suken
Suken (数検 or 実用数学技能検定, Jitsuyō Sūgaku Ginō Kentei, lit. Global Mathematics Certification) is a world mathematics certification program and examination established in Japan in 1988.
Outline of Suken
Each Suken level (Kyu) has two sections: Section 1 is calculation and Section 2 is application.
Passing Rate
To pass the Suken, an examinee must correctly answer approximately 70% of Section 1 and approximately 60% of Section 2.
Levels
Level 5 (7th grade math): The examination time is 180 minutes for Section 1, 60 minutes for Section 2.
Level 4 (8th grade): The examination time is 60 minutes for Section 1, 60 minutes for Section 2.
Level 3 (9th grade): The examination time is 60 minutes for Section 1, 60 minutes for Section 2.
Levels 5–3 include the following subjects: • Calculation with negative numbers • Inequalities • Simultaneous equations • Congruency and similarities • Square roots • Factorization • Quadratic equations and functions • The Pythagorean theorem • Probabilities
Level pre-2 (10th grade): The examination time is 60 minutes for Section 1, 90 minutes for Section 2.
Level 2 (11th grade): The examination time is 60 minutes for Section 1, 90 minutes for Section 2.
Level pre-1 (12th grade): The examination time is 60 minutes for Section 1, 120 minutes for Section 2.
Levels pre-2 to pre-1 include the following subjects: • Quadratic functions • Trigonometry • Sequences • Vectors • Complex numbers • Basic calculus • Matrices • Simple curved lines • Probability
Level 1 (undergraduate and graduate): The examination time is 60 minutes for Section 1, 120 minutes for Section 2. Level 1 includes the following subjects: • Linear algebra • Vectors • Matrices • Differential equations • Statistics • Probability
External links
• Suken (in Japanese) • Suken USA
Dennis Sullivan
Dennis Parnell Sullivan (born February 12, 1941) is an American mathematician known for his work in algebraic topology, geometric topology, and dynamical systems. He holds the Albert Einstein Chair at the City University of New York Graduate Center and is a distinguished professor at Stony Brook University.
(Pictured: Sullivan in 2007.)
Born: February 12, 1941, Port Huron, Michigan, U.S.
Education: Rice University (BA), Princeton University (MA, PhD)
Known for: Connes–Donaldson–Sullivan–Teleman index theorem • Parry–Sullivan invariant • Sullivan conjecture • Density conjecture • Localization of a topological space • No-wandering-domain theorem • Rational homotopy theory • String topology
Awards: Oswald Veblen Prize in Geometry (1971) • National Medal of Science (2004) • Leroy P. Steele Prize (2006) • Wolf Prize (2010) • Balzan Prize (2014) • Abel Prize (2022)
Fields: Mathematics
Institutions: Stony Brook University, City University of New York
Thesis: Triangulating Homotopy Equivalences (1966)
Doctoral advisor: William Browder
Doctoral students: Harold Abelson, Curtis T. McMullen
Sullivan was awarded the Wolf Prize in Mathematics in 2010 and the Abel Prize in 2022.
Early life and education
Sullivan was born in Port Huron, Michigan, on February 12, 1941.[1][2] His family moved to Houston soon afterwards.[1][2] He entered Rice University to study chemical engineering but switched his major to mathematics in his second year after encountering a particularly motivating mathematical theorem.[2][3] The change was prompted by a special case of the uniformization theorem, according to which, in his own words: [A]ny surface topologically like a balloon, and no matter what shape—a banana or the statue of David by Michelangelo—could be placed on to a perfectly round sphere so that the stretching or squeezing required at each and every point is the same in all directions at each such point.[4] He received his Bachelor of Arts degree from Rice in 1963.[2] He obtained his Doctor of Philosophy from Princeton University in 1966 with his thesis, Triangulating homotopy equivalences, under the supervision of William Browder.[2][5]
Career
Sullivan worked at the University of Warwick on a NATO Fellowship from 1966 to 1967.[6] He was a Miller Research Fellow at the University of California, Berkeley from 1967 to 1969 and then a Sloan Fellow at Massachusetts Institute of Technology from 1969 to 1973.[6] He was a visiting scholar at the Institute for Advanced Study in 1967–1968, 1968–1970, and again in 1975.[7] Sullivan was an associate professor at Paris-Sud University from 1973 to 1974, and then became a permanent professor at the Institut des Hautes Études Scientifiques (IHÉS) in 1974.[6][8] In 1981, he became the Albert Einstein Chair in Science (Mathematics) at the Graduate Center, City University of New York[9] and reduced his duties at the IHÉS to a half-time appointment.[1] He joined the mathematics faculty at Stony Brook University in 1996[6] and left the IHÉS the following year.[6][8] Sullivan was involved in the founding of the Simons Center for Geometry and Physics and is a member of its board of trustees.[10]
Research
Geometric topology
Along with Browder and his other students, Sullivan was an early adopter of surgery theory, particularly for classifying high-dimensional manifolds.[2][3][1] His thesis work was focused on the Hauptvermutung.[1] In an influential set of notes in 1970, Sullivan put forward the radical concept that, within
homotopy theory, spaces could directly "be broken into boxes"[11] (or localized), a procedure hitherto applied to the algebraic constructs made from them.[3][12] The Sullivan conjecture, proved in its original form by Haynes Miller, states that the classifying space BG of a finite group G is sufficiently different from any finite CW complex X that it maps to such an X only 'with difficulty'; in a more formal statement, the space of all mappings BG to X, as pointed spaces and given the compact-open topology, is weakly contractible.[13] Sullivan's conjecture was also first presented in his 1970 notes.[3][12][13] Sullivan and Daniel Quillen (independently) created rational homotopy theory in the late 1960s and 1970s.[14][15][3][16] It examines "rationalizations" of simply connected topological spaces with homotopy groups and singular homology groups tensored with the rational numbers, ignoring torsion elements and simplifying certain calculations.[16]
Kleinian groups
Sullivan and William Thurston generalized Lipman Bers' density conjecture from singly degenerate Kleinian surface groups to all finitely generated Kleinian groups in the late 1970s and early 1980s.[17][18] The conjecture states that every finitely generated Kleinian group is an algebraic limit of geometrically finite Kleinian groups, and was independently proven by Ohshika and Namazi–Souto in 2011 and 2012 respectively.[17][18]
Conformal and quasiconformal mappings
The Connes–Donaldson–Sullivan–Teleman index theorem is an extension of the Atiyah–Singer index theorem to quasiconformal manifolds, due to a joint paper by Simon Donaldson and Sullivan in 1989 and a joint paper by Alain Connes, Sullivan, and Nicolae Teleman in 1994.[19][20] In 1987, Sullivan and Burton Rodin proved Thurston's conjecture about the approximation of the Riemann map by circle packings.[21]
String topology
Sullivan and Moira Chas started the field of string topology, which examines algebraic structures on the homology of free loop spaces.[22][23] They developed the Chas–Sullivan product to give a partial singular homology analogue of the cup product from singular cohomology.[22][23] String topology has been used in multiple proposals to construct topological quantum field theories in mathematical physics.[24]
Dynamical systems
In 1975, Sullivan and Bill Parry introduced the topological Parry–Sullivan invariant for flows in one-dimensional dynamical systems.[25][26] In 1985, Sullivan proved the no-wandering-domain theorem.[3] This result was described by mathematician Anthony Phillips as leading to a "revival of holomorphic dynamics after 60 years of stagnation."[1]
Awards and honors
• 1971 Oswald Veblen Prize in Geometry[27] • 1981 Prix Élie Cartan, French Academy of Sciences[2][8] • 1983 Member, National Academy of Sciences[28] • 1991 Member, American Academy of Arts and Sciences[29] • 1994 King Faisal International Prize for Science[6] • 2004 National Medal of Science[6] • 2006 Steele Prize for lifetime achievement[6] • 2010 Wolf Prize in Mathematics, for "his contributions to algebraic topology and conformal dynamics"[30] • 2012 Fellow of the American Mathematical Society[31] • 2014 Balzan Prize in Mathematics (pure or applied)[2][32] • 2022 Abel Prize[2][33]
Personal life
Sullivan is married to fellow mathematician Moira Chas.[3][4]
See also
• Assembly map • Double bubble conjecture • Flexible polyhedron • Formal manifold • Loch Ness monster surface • Normal invariant • Ring lemma • Rummler–Sullivan theorem • Ruziewicz problem
References 1.
Phillips, Anthony (2005), "Dennis Sullivan – A Short History", in Lyubich, Mikhail; Takhtadzhi͡an, Leon Armenovich (eds.), Graphs and patterns in mathematics and theoretical physics, Proceedings of Symposia in Pure Mathematics, vol. 73, Providence: American Mathematical Society, p. xiii, ISBN 0-8218-3666-8, archived from the original on July 28, 2014, retrieved March 31, 2016. 2. Chang, Kenneth (March 23, 2022). "Abel Prize for 2022 Goes to New York Mathematician". The New York Times. Archived from the original on March 23, 2022. Retrieved March 23, 2022. 3. Cepelewicz, Jordana (March 23, 2022). "Dennis Sullivan, Uniter of Topology and Chaos, Wins the Abel Prize". Quanta Magazine. Archived from the original on March 23, 2022. Retrieved March 23, 2022. 4. Desikan, Shubashree (March 23, 2022). "Abel prize for 2022 goes to American mathematician Dennis P. Sullivan". The Hindu. Retrieved March 25, 2022. 5. Dennis Sullivan at the Mathematics Genealogy Project 6. "Dennis Parnell Sullivan Awarded the 2022 Abel Prize for Mathematics". Stony Brook University. March 23, 2022. Archived from the original on March 24, 2022. Retrieved March 23, 2022. 7. "Dennis P. Sullivan". Institute for Advanced Study. December 9, 2019. Archived from the original on March 23, 2022. Retrieved March 23, 2022. 8. "Dennis Sullivan, Mathematician". Institut des Hautes Études Scientifiques. Archived from the original on November 22, 2021. Retrieved March 23, 2022. 9. "Science Faculty Spotlight: Dennis Sullivan". Graduate Center, CUNY. April 29, 2017. Archived from the original on March 24, 2022. Retrieved March 23, 2022. 10. "Dennis Sullivan Awarded the 2022 Abel Prize in Mathematics". Simons Center for Geometry and Physics. March 23, 2022. Retrieved March 25, 2022. 11. Cepelewicz, Jordana (March 23, 2022). "Dennis Sullivan, Uniter of Topology and Chaos, Wins the Abel Prize". Quanta Magazine. Retrieved March 24, 2022. 12. Sullivan, Dennis P. (2005). Ranicki, Andrew (ed.). Geometric Topology: Localization, Periodicity and Galois Symmetry: The 1970 MIT Notes (PDF). K-Monographs in Mathematics. Dordrecht: Springer. ISBN 1-4020-3511-X. Archived (PDF) from the original on April 18, 2007. Retrieved October 8, 2006. 13. Miller, Haynes (1984). "The Sullivan Conjecture on Maps from Classifying Spaces". Annals of Mathematics. 120 (1): 39–87. doi:10.2307/2007071. JSTOR 2007071. 14. Quillen, Daniel (1969), "Rational homotopy theory", Annals of Mathematics, 90 (2): 205–295, doi:10.2307/1970725, JSTOR 1970725, MR 0258031 15. Sullivan, Dennis (1977). "Infinitesimal computations in topology". Publications Mathématiques de l'IHÉS. 47: 269–331. doi:10.1007/BF02684341. MR 0646078. S2CID 42019745. Archived from the original on May 3, 2007. Retrieved November 1, 2007. 16. Hess, Kathryn (1999). "A history of rational homotopy theory". In James, Ioan M. (ed.). History of Topology. Amsterdam: North-Holland. pp. 757–796. doi:10.1016/B978-044482375-5/50028-6. ISBN 0-444-82375-1. MR 1721122. 17. Namazi, Hossein; Souto, Juan (2012). "Non-realizability and ending laminations: Proof of the density conjecture". Acta Mathematica. 209 (2): 323–395. doi:10.1007/s11511-012-0088-0. ISSN 0001-5962. S2CID 10138438. 18. Ohshika, Ken'ichi (2011). "Realising end invariants by limits of minimally parabolic, geometrically finite groups". Geometry and Topology. 15 (2): 827–890. arXiv:math/0504546. doi:10.2140/gt.2011.15.827. ISSN 1364-0380. S2CID 14463721. Archived from the original on May 25, 2014. Retrieved March 24, 2022. 19. 
Donaldson, Simon K.; Sullivan, Dennis (1989). "Quasiconformal 4-manifolds". Acta Mathematica. 163: 181–252. doi:10.1007/BF02392736. Zbl 0704.57008. 20. Connes, Alain; Sullivan, Dennis; Teleman, Nicolae (1994). "Quasiconformal mappings, operators on Hilbert space and local formulae for characteristic classes". Topology. 33 (4): 663–681. doi:10.1016/0040-9383(94)90003-5. Zbl 0840.57013. 21. Rodin, Burton; Sullivan, Dennis (1987), "The convergence of circle packings to the Riemann mapping", Journal of Differential Geometry, 26 (2): 349–360, doi:10.4310/jdg/1214441375, archived from the original on October 27, 2020, retrieved March 23, 2022. 22. Chas, Moira; Sullivan, Dennis (1999). "String Topology". arXiv:math/9911159v1. 23. Cohen, Ralph Louis; Jones, John D. S.; Yan, Jun (2004). "The loop homology algebra of spheres and projective spaces". In Arone, Gregory; Hubbuck, John; Levi, Ran; Weiss, Michael (eds.). Categorical decomposition techniques in algebraic topology: International Conference in Algebraic Topology, Isle of Skye, Scotland, June 2001. Birkhäuser. pp. 77–92. 24. Tamanoi, Hirotaka (2010). "Loop coproducts in string topology and triviality of higher genus TQFT operations". Journal of Pure and Applied Algebra. 214 (5): 605–615. arXiv:0706.1276. doi:10.1016/j.jpaa.2009.07.011. MR 2577666. S2CID 2147096. 25. Parry, Bill; Sullivan, Dennis (1975). "A topological invariant of flows on 1-dimensional spaces". Topology. 14 (4): 297–299. doi:10.1016/0040-9383(75)90012-9. 26. Sullivan, Michael C. (1997). "An invariant of basic sets of Smale flows". Ergodic Theory and Dynamical Systems. 17 (6): 1437–1448. doi:10.1017/S0143385797097617. S2CID 96462227. 27. "Oswald Veblen Prize in Geometry". Archived from the original on January 5, 2020. Retrieved August 17, 2020. 28. "National Academy of Sciences". Archived from the original on May 15, 2021. Retrieved August 17, 2020. 29. "American Academy of Arts and Sciences". Archived from the original on March 24, 2022. Retrieved August 17, 2020. 30. "Wolf Prize Winners Announced". Israel National News. Archived from the original on March 24, 2022. Retrieved March 23, 2022. 31. List of Fellows of the American Mathematical Society Archived December 5, 2012, at archive.today, retrieved August 5, 2013. 32. Kehoe, Elaine (January 2015). "Sullivan Awarded Balzan Prize". Notices of the American Mathematical Society. 62 (1): 54–55. doi:10.1090/noti1198. 33. "2022: Dennis Parnell Sullivan | The Abel Prize". abelprize.no. Archived from the original on March 23, 2022. Retrieved March 23, 2022.
External links
Wikimedia Commons has media related to Dennis Sullivan. • O'Connor, John J.; Robertson, Edmund F., "Dennis Sullivan", MacTutor History of Mathematics Archive, University of St Andrews • Dennis Sullivan at the Mathematics Genealogy Project • Sullivan's homepage at CUNY • Sullivan's homepage at Stony Brook University • Dennis Sullivan Archived May 28, 2018, at the Wayback Machine, International Balzan Prize Foundation
Sullivan conjecture
In mathematics, the Sullivan conjecture, or Sullivan's conjecture on maps from classifying spaces, can refer to any of several results and conjectures prompted by the homotopy theory work of Dennis Sullivan. A basic theme and motivation concerns the fixed point set in group actions of a finite group $G$. The most elementary formulation, however, is in terms of the classifying space $BG$ of such a group. Roughly speaking, it is difficult to map such a space $BG$ continuously into a finite CW complex $X$ in a non-trivial manner. Such a version of the Sullivan conjecture was first proved by Haynes Miller.[1] Specifically, in 1984, Miller proved that the function space, carrying the compact-open topology, of base-point-preserving mappings from $BG$ to $X$ is weakly contractible. This is equivalent to the statement that the map $X\to F(BG,X)$ from X to the function space of maps $BG\to X$, not necessarily preserving the base point, given by sending a point $x$ of $X$ to the constant map whose image is $x$, is a weak equivalence. The mapping space $F(BG,X)$ is an example of a homotopy fixed point set. Specifically, $F(BG,X)$ is the homotopy fixed point set of the group $G$ acting by the trivial action on $X$. In general, for a group $G$ acting on a space $X$, the homotopy fixed points are the fixed points $F(EG,X)^{G}$ of the mapping space $F(EG,X)$ of maps from the universal cover $EG$ of $BG$ to $X$, under the $G$-action on $F(EG,X)$ in which $g$ in $G$ acts on a map $f$ in $F(EG,X)$ by sending it to $gfg^{-1}$. The $G$-equivariant map from $EG$ to a single point $*$ induces a natural map $\eta :X^{G}=F(*,X)^{G}\to F(EG,X)^{G}$ from the fixed points to the homotopy fixed points of $G$ acting on $X$. Miller's theorem is that $\eta $ is a weak equivalence for trivial $G$-actions on finite-dimensional CW complexes. An important ingredient and motivation for his proof is a result of Gunnar Carlsson on the homology of $BZ/2$ as an unstable module over the Steenrod algebra.[2]

Miller's theorem generalizes to a version of Sullivan's conjecture in which the action on $X$ is allowed to be non-trivial. In his 1971 notes,[3] Sullivan conjectured that $\eta $ is a weak equivalence after a certain p-completion procedure due to A. Bousfield and D. Kan for the group $G=Z/2$. This conjecture was incorrect as stated, but a correct version was given by Miller, and proven independently by Dwyer–Miller–Neisendorfer,[4] Carlsson,[5] and Jean Lannes,[6] showing that the natural map $(X^{G})_{p}\to F(EG,(X)_{p})^{G}$ is a weak equivalence when the order of $G$ is a power of a prime p, and where $(X)_{p}$ denotes the Bousfield–Kan p-completion of $X$. Miller's proof involves an unstable Adams spectral sequence, Carlsson's proof uses his affirmative solution of the Segal conjecture and also provides information about the homotopy fixed points $F(EG,X)^{G}$ before completion, and Lannes's proof involves his T-functor.[7]
References
1. Miller, Haynes (1984). "The Sullivan Conjecture on Maps from Classifying Spaces". Annals of Mathematics. 120 (1): 39–87. doi:10.2307/2007071. JSTOR 2007071. 2. Carlsson, Gunnar (1983). "G.B. Segal's Burnside Ring Conjecture for (Z/2)^k". Topology. 22 (1): 83–103. doi:10.1016/0040-9383(83)90046-0. 3. Sullivan, Denis (1971). Geometric topology. Part I. Cambridge, MA: Massachusetts Institute of Technology Press. p. 432. 4. Dwyer, William; Haynes Miller; Joseph Neisendorfer (1989). "Fibrewise Completion and Unstable Adams Spectral Sequences". Israel Journal of Mathematics. 66 (1–3): 160–178.
doi:10.1007/bf02765891. 5. Carlsson, Gunnar (1991). "Equivariant stable homotopy and Sullivan's conjecture". Inventiones Mathematicae. 103: 497–525. doi:10.1007/bf01239524. 6. Lannes, Jean (1992). "Sur les espaces fonctionnels dont la source est le classifiant d'un p-groupe abélien élémentaire". Publications Mathématiques de l'IHÉS. 75: 135–244. doi:10.1007/bf02699494. 7. Schwartz, Lionel (1994). Unstable Modules over the Steenrod Algebra and Sullivan's Fixed Point Set Conjecture. Chicago and London: The University of Chicago Press. ISBN 978-0-226-74203-8. External links • Gottlieb, Daniel H. (2001) [1994], "Sullivan conjecture", Encyclopedia of Mathematics, EMS Press • Book extract • J. Lurie's course notes
Secretary problem
The secretary problem demonstrates a scenario involving optimal stopping theory[1][2] that is studied extensively in the fields of applied probability, statistics, and decision theory. It is also known as the marriage problem, the sultan's dowry problem, the fussy suitor problem, the googol game, and the best choice problem. The basic form of the problem is the following: imagine an administrator who wants to hire the best secretary out of $n$ rankable applicants for a position. The applicants are interviewed one by one in random order. A decision about each particular applicant is to be made immediately after the interview. Once rejected, an applicant cannot be recalled. During the interview, the administrator gains information sufficient to rank the applicant among all applicants interviewed so far, but is unaware of the quality of yet unseen applicants. The question is about the optimal strategy (stopping rule) to maximize the probability of selecting the best applicant. If the decision could be deferred to the end, the problem would be solved by the simple maximum selection algorithm of tracking the running maximum (and who achieved it) and selecting the overall maximum at the end. The difficulty is that the decision must be made immediately.

The shortest rigorous proof known so far is provided by the odds algorithm. It implies that the optimal win probability is always at least $1/e$ (where e is the base of the natural logarithm), and that the latter holds even in a much greater generality. The optimal stopping rule prescribes always rejecting the first $\sim n/e$ applicants that are interviewed and then stopping at the first applicant who is better than every applicant interviewed so far (or continuing to the last applicant if this never occurs). Sometimes this strategy is called the $1/e$ stopping rule, because the probability of stopping at the best applicant with this strategy is already about $1/e$ for moderate values of $n$. One reason why the secretary problem has received so much attention is that the optimal policy for the problem (the stopping rule) is simple and selects the single best candidate about 37% of the time, irrespective of whether there are 100 or 100 million applicants.
Formulation
Although there are many variations, the basic problem can be stated as follows:
• There is a single position to fill.
• There are n applicants for the position, and the value of n is known.
• The applicants, if all seen together, can be ranked from best to worst unambiguously.
• The applicants are interviewed sequentially in random order, with each order being equally likely.
• Immediately after an interview, the interviewed applicant is either accepted or rejected, and the decision is irrevocable.
• The decision to accept or reject an applicant can be based only on the relative ranks of the applicants interviewed so far.
• The objective of the general solution is to have the highest probability of selecting the best applicant of the whole group. This is the same as maximizing the expected payoff, with payoff defined to be one for the best applicant and zero otherwise.
A candidate is defined as an applicant who, when interviewed, is better than all the applicants interviewed previously. Skip is used to mean "reject immediately after the interview". Since the objective in the problem is to select the single best applicant, only candidates will be considered for acceptance. The "candidate" in this context corresponds to the concept of a record in a permutation.
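The behavior of the stopping rule described above is easy to check by simulation. Below is a minimal Python sketch (the parameters n = 100 and 100,000 trials are illustrative choices) that rejects the first ⌊n/e⌋ applicants and then accepts the first applicant better than all seen so far; the estimated success probability should come out near 1/e ≈ 0.368:

```python
import math
import random

def trial(n, cutoff):
    """Run one interview sequence and report whether the cutoff rule
    (reject the first `cutoff`, then take the first record) picks the best."""
    ranks = random.sample(range(n), n)      # ranks[i] = quality of the i-th interviewee
    best_seen = max(ranks[:cutoff], default=-1)
    for i in range(cutoff, n):
        if ranks[i] > best_seen:            # first applicant beating all predecessors
            return ranks[i] == n - 1        # success iff it is the overall best
    return False                            # the best was among the rejected ones

n, trials = 100, 100_000
cutoff = int(n / math.e)                    # reject roughly n/e applicants
wins = sum(trial(n, cutoff) for _ in range(trials))
print(wins / trials)                        # typically prints about 0.37
```

Note the final `return False` branch: if no later applicant beats the initial sample, the rule is forced to take the last applicant, who in that case cannot be the best overall.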
Deriving the optimal policy The optimal policy for the problem is a stopping rule. Under it, the interviewer rejects the first r − 1 applicants (let applicant M be the best applicant among these r − 1 applicants), and then selects the first subsequent applicant that is better than applicant M. It can be shown that the optimal strategy lies in this class of strategies. (Note that we should never choose an applicant who is not the best we have seen so far, since they cannot be the best overall applicant.) For an arbitrary cutoff r, the probability that the best applicant is selected is ${\begin{aligned}P(r)&=\sum _{i=1}^{n}P\left({\text{applicant }}i{\text{ is selected}}\cap {\text{applicant }}i{\text{ is the best}}\right)\\&=\sum _{i=1}^{n}P\left({\text{applicant }}i{\text{ is selected}}|{\text{applicant }}i{\text{ is the best}}\right)\cdot P\left({\text{applicant }}i{\text{ is the best}}\right)\\&=\left[\sum _{i=1}^{r-1}0+\sum _{i=r}^{n}P\left(\left.{\begin{array}{l}{\text{the best of the first }}i-1{\text{ applicants}}\\{\text{is in the first }}r-1{\text{ applicants}}\end{array}}\right|{\text{applicant }}i{\text{ is the best}}\right)\right]\cdot {\frac {1}{n}}\\&=\left[\sum _{i=r}^{n}{\frac {r-1}{i-1}}\right]\cdot {\frac {1}{n}}\\&={\frac {r-1}{n}}\sum _{i=r}^{n}{\frac {1}{i-1}}.\end{aligned}}$ The sum is not defined for r = 1, but in this case the only feasible policy is to select the first applicant, and hence P(1) = 1/n. This sum is obtained by noting that if applicant i is the best applicant, then it is selected if and only if the best applicant among the first i − 1 applicants is among the first r − 1 applicants that were rejected. Letting n tend to infinity, writing $x$ as the limit of (r−1)/n, using t for (i−1)/n and dt for 1/n, the sum can be approximated by the integral $P(x)=x\int _{x}^{1}{\frac {1}{t}}\,dt=-x\ln(x)\;.$ Taking the derivative of P(x) with respect to $x$, setting it to 0, and solving for x, we find that the optimal x is equal to 1/e. Thus, the optimal cutoff tends to n/e as n increases, and the best applicant is selected with probability 1/e. For small values of n, the optimal r can also be obtained by standard dynamic programming methods (a sketch follows below). The optimal thresholds r and probability of selecting the best alternative P for several values of n are shown in the following table.[note 1]
$n$ 1 2 3 4 5 6 7 8 9 10
$r$ 1 1 2 2 3 3 3 4 4 4
$P$ 1.000 0.500 0.500 0.458 0.433 0.428 0.414 0.410 0.406 0.399
The probability of selecting the best applicant in the classical secretary problem converges toward $1/e\approx 0.368$.
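The dynamic-programming route mentioned above is short enough to sketch directly. The following backward induction (a minimal sketch; the function name and variable layout are illustrations for this article, not anyone's published code) computes, for each number j of already-rejected applicants, the optimal probability W[j] of still winning:

def optimal_cutoff(n):
    # W[j]: optimal win probability after the first j applicants
    # have been interviewed and rejected.
    W = [0.0] * (n + 1)          # W[n] = 0: nobody is left to accept
    for j in range(n, 0, -1):
        # With probability 1/j, applicant j is a candidate (best of the
        # first j); accepting a candidate at position j wins with
        # probability j/n. Otherwise we must continue.
        W[j - 1] = (1 / j) * max(j / n, W[j]) + ((j - 1) / j) * W[j]
    # Stop at the first position where accepting a candidate beats continuing.
    r = next(j for j in range(1, n + 1) if j / n >= W[j])
    return r, W[0]

print(optimal_cutoff(3))    # (2, 0.5), matching the table
print(optimal_cutoff(10))   # r = 4, P = 0.399 (approximately)

Because j/n increases in j while W[j] decreases, the accept condition is first met at a single cutoff position, which recovers the threshold structure assumed in the derivation above.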
Alternative solution This problem and several modifications can be solved (including the proof of optimality) in a straightforward manner by the odds algorithm, which also has other applications. Modifications for the secretary problem that can be solved by this algorithm include random availabilities of applicants, more general hypotheses for applicants to be of interest to the decision maker, group interviews for applicants, as well as certain models for a random number of applicants. Limitations The solution of the secretary problem is only meaningful if it is justified to assume that the applicants have no knowledge of the decision strategy employed, because early applicants have no chance at all and may not show up otherwise. One important drawback for applications of the solution of the classical secretary problem is that the number of applicants $n$ must be known in advance, which is rarely the case. One way to overcome this problem is to suppose that the number of applicants is a random variable $N$ with a known distribution $P(N=k)_{k=1,2,\cdots }$ (Presman and Sonin, 1972). For this model, however, the optimal solution is in general much harder to derive. Moreover, the optimal success probability is now no longer around 1/e but typically lower. This can be understood in the context of having a "price" to pay for not knowing the number of applicants. However, in this model the price is high. Depending on the choice of the distribution of $N$, the optimal win probability can approach zero. Looking for ways to cope with this new problem led to a new model yielding the so-called 1/e-law of best choice. 1/e-law of best choice The essence of the model is based on the idea that life is sequential and that real-world problems pose themselves in real time. Also, it is easier to estimate times in which specific events (arrivals of applicants) should occur more frequently (if they do) than to estimate the distribution of the number of specific events which will occur. This idea led to the following approach, the so-called unified approach (1984): The model is defined as follows: An applicant must be selected on some time interval $[0,T]$ from an unknown number $N$ of rankable applicants. The goal is to maximize the probability of selecting only the best under the hypothesis that all arrival orders of different ranks are equally likely. Suppose that all applicants have the same, but independent of each other, arrival time density $f$ on $[0,T]$ and let $F$ denote the corresponding arrival time distribution function, that is $F(t)=\int _{0}^{t}f(s)ds$, $\,0\leq t\leq T$. Let $\tau $ be such that $F(\tau )=1/e.$ Consider the strategy to wait and observe all applicants up to time $\tau $ and then to select, if possible, the first candidate after time $\tau $ which is better than all preceding ones. Then this strategy, called the 1/e-strategy, has the following properties: The 1/e-strategy (i) yields for all $N$ a success probability of at least 1/e, (ii) is a minimax-optimal strategy for the selector who does not know $N$, and (iii) selects, if there is at least one applicant, no applicant at all with probability exactly 1/e. The 1/e-law, proved in 1984 by F. Thomas Bruss, came as a surprise. The reason was that a value of about 1/e had been considered before as being out of reach in a model for unknown $N$, whereas this value 1/e was now achieved as a lower bound for the success probability, and this in a model with arguably much weaker hypotheses (see e.g. Math. Reviews 85:m). However, there are many other strategies that achieve (i) and (ii) and, moreover, perform strictly better than the 1/e-strategy simultaneously for all $N>2$. A simple example is the strategy which selects (if possible) the first relatively best candidate after time $\tau $ provided that at least one applicant arrived before this time, and otherwise selects (if possible) the second relatively best candidate after time $\tau $.[3] The 1/e-law is sometimes confused with the solution for the classical secretary problem described above because of the similar role of the number 1/e. However, in the 1/e-law, this role is more general. The result is also stronger, since it holds for an unknown number of applicants and since the model based on an arrival time distribution F is more tractable for applications.
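Properties (i) and (iii) are easy to watch in simulation. The sketch below assumes uniform arrival times on [0, 1], so that τ = 1/e, and, purely for illustration, draws the unknown N uniformly from 1 to 20; both modelling choices are assumptions of this sketch, not part of the 1/e-law itself:

import math, random

TAU = 1 / math.e     # F(tau) = 1/e for uniform arrivals on [0, 1]

def one_run(n):
    # n applicants with i.i.d. uniform arrival times and distinct random qualities.
    arrivals = sorted((random.random(), random.random()) for _ in range(n))
    best_overall = max(q for _, q in arrivals)
    best_seen = -1.0
    for t, q in arrivals:
        if t > TAU and q > best_seen:
            return q == best_overall       # select the first post-tau candidate
        best_seen = max(best_seen, q)
    return None                            # no selection was made

trials = 100_000
results = [one_run(random.randint(1, 20)) for _ in range(trials)]
print(sum(r is True for r in results) / trials)   # success rate: at least 1/e
print(sum(r is None for r in results) / trials)   # no-selection rate: about 1/e

The no-selection frequency matches 1/e because the strategy fails to select exactly when the best applicant arrives before time τ, an event of probability F(τ) = 1/e.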
The game of googol In the article "Who solved the Secretary problem?" (Ferguson, 1989),[1] it is claimed that the secretary problem first appeared in print in Martin Gardner's February 1960 Mathematical Games column in Scientific American: Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred zeroes) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.[4] Ferguson pointed out that the secretary game remained unsolved when posed as a zero-sum game with two antagonistic players.[1] In this game: • Alice, the informed player, secretly writes distinct numbers on $n$ cards. • Bob, the stopping player, observes the actual values and can stop turning cards whenever he wants, winning if the last card turned has the overall maximal number. • Bob wants to guess the maximal number with the highest possible probability, while Alice's goal is to keep this probability as low as possible. There are two differences from the basic secretary problem: • Alice does not have to write numbers uniformly at random. She may write them according to any joint probability distribution to trick Bob. • Bob observes the actual values written on the cards, which he can use in his decision procedures. Strategic analysis Alice first writes down n numbers, which are then shuffled. So, their ordering does not matter, meaning that Alice's numbers must be an exchangeable random variable sequence $X_{1},X_{2},...,X_{n}$. Alice's strategy is then just picking the trickiest exchangeable random variable sequence. Bob's strategy can be formalized as a stopping rule $\tau $ for the sequence $X_{1},X_{2},...,X_{n}$. We say that a stopping rule $\tau $ for Bob is a relative rank stopping strategy if it depends only on the relative ranks of $X_{1},X_{2},...,X_{n}$, and not on their numerical values. In other words, it is as if someone secretly intervened after Alice picked her numbers, and changed each number in $X_{1},X_{2},...,X_{n}$ into its relative rank (breaking ties randomly). For example, $0.2,0.3,0.3,0.1$ is changed to $2,3,4,1$ or $2,4,3,1$ with equal probability. This makes it as if Alice played an exchangeable random permutation on $\{1,2,...,n\}$. Now, since the only exchangeable random permutation on $\{1,2,...,n\}$ is the uniform distribution over all permutations on $\{1,2,...,n\}$, the optimal relative rank stopping strategy is the optimal stopping rule for the secretary problem, given above, with winning probability $Pr(X_{\tau }=\max _{i\in 1:n}X_{i})=\max _{r\in 1:n}{\frac {r-1}{n}}\sum _{i=r}^{n}{\frac {1}{i-1}}$ Alice's goal then is to make sure Bob cannot do better than the relative-rank stopping strategy. By the rules of the game, Alice's sequence must be exchangeable, but to do well in the game, Alice should not pick it to be independent. If Alice sampled the numbers independently from some fixed distribution, Bob could do better. To see this intuitively, imagine $n=2$, with Alice picking both numbers from the normal distribution $N(0,1)$, independently.
Then if Bob turns over one number and sees $-3$, he can quite confidently move on to the second number, and if he sees $+3$, he can quite confidently keep the first number. Alice can do better by picking $X_{1},X_{2}$ that are positively correlated. So the fully formal statement is as below: Does there exist an exchangeable sequence of random variables $X_{1},...,X_{n}$, such that for any stopping rule $\tau $, $Pr(X_{\tau }=\max _{i\in 1:n}X_{i})\leq \max _{r\in 1:n}{\frac {r-1}{n}}\sum _{i=r}^{n}{\frac {1}{i-1}}$ ? Solution For $n=2$, if Bob plays the optimal relative-rank stopping strategy, then Bob has winning probability 1/2. Surprisingly, Alice has no minimax strategy, which is closely related to a paradox of T. Cover[5] and the two envelopes paradox. Concretely, Bob can play this strategy: sample a random number $Y$. If $X_{1}>Y$, then pick $X_{1}$, else pick $X_{2}$. Now, Bob can win with probability strictly greater than 1/2. Suppose Alice's numbers are distinct; then, conditional on $Y\not \in [\min(X_{1},X_{2}),\max(X_{1},X_{2})]$, Bob wins with probability 1/2, but conditional on $Y\in [\min(X_{1},X_{2}),\max(X_{1},X_{2})]$, Bob wins with probability 1. The random number $Y$ can be sampled from any distribution, as long as the event $Y\in [\min(X_{1},X_{2}),\max(X_{1},X_{2})]$ has nonzero probability. However, for any $\epsilon >0$, Alice can construct an exchangeable sequence $X_{1},X_{2}$ such that Bob's winning probability is at most $1/2+\epsilon $.[1] But for $n>2$, the answer is yes: Alice can choose random numbers (which are dependent random variables) in such a way that Bob cannot play better than using the classical stopping strategy based on the relative ranks.[6]
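The Y-pivot trick is easy to check numerically. In the sketch below (the distribution choices are purely illustrative) Alice plays i.i.d. N(0,1) and Bob draws his pivot Y from N(0,1) as well:

import random

def bob_win_rate(trials=100_000):
    wins = 0
    for _ in range(trials):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)   # Alice, i.i.d.
        y = random.gauss(0, 1)                            # Bob's random pivot
        pick = x1 if x1 > y else x2
        wins += (pick == max(x1, x2))
    return wins / trials

print(bob_win_rate())   # about 2/3, strictly above 1/2

For three i.i.d. continuous draws, Y falls between X1 and X2 with probability 1/3, so Bob wins with probability 1/2 + 1/6 = 2/3; this is precisely why Alice must introduce dependence between her numbers.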
Heuristic performance The remainder of the article deals again with the secretary problem for a known number of applicants. Stein, Seale & Rapoport 2003 derived the expected success probabilities for several psychologically plausible heuristics that might be employed in the secretary problem. The heuristics they examined were: • The cutoff rule (CR): Do not accept any of the first y applicants; thereafter, select the first encountered candidate (i.e., an applicant with relative rank 1). This rule has as a special case the optimal policy for the classical secretary problem for which y = r. • Candidate count rule (CCR): Select the y-th encountered candidate. Note that this rule does not necessarily skip any applicants; it only considers how many candidates have been observed, not how deep the decision maker is in the applicant sequence. • Successive non-candidate rule (SNCR): Select the first encountered candidate after observing y non-candidates (i.e., applicants with relative rank > 1). Each heuristic has a single parameter y. Stein, Seale & Rapoport plot the expected success probability of each heuristic as a function of y for problems with n = 80. Cardinal payoff variant Finding the single best applicant might seem like a rather strict objective. One can imagine that the interviewer would rather hire a higher-valued applicant than a lower-valued one, and not only be concerned with getting the best. That is, the interviewer will derive some value from selecting an applicant that is not necessarily the best, and the derived value increases with the value of the one selected. To model this problem, suppose that the $n$ applicants have "true" values that are random variables X drawn i.i.d. from a uniform distribution on [0, 1]. Similar to the classical problem described above, the interviewer only observes whether each applicant is the best so far (a candidate), must accept or reject each on the spot, and must accept the last applicant if they are reached. (To be clear, the interviewer does not learn the actual relative rank of each applicant. He/she learns only whether the applicant has relative rank 1.) However, in this version the payoff is given by the true value of the selected applicant. For example, if he/she selects an applicant whose true value is 0.8, then he/she will earn 0.8. The interviewer's objective is to maximize the expected value of the selected applicant. Since the applicant's values are i.i.d. draws from a uniform distribution on [0, 1], the expected value of the tth applicant given that $x_{t}=\max \left\{x_{1},x_{2},\ldots ,x_{t}\right\}$ is given by $E_{t}=E\left(X_{t}|I_{t}=1\right)={\frac {t}{t+1}}.$ As in the classical problem, the optimal policy is given by a threshold, which for this problem we will denote by $c$, at which the interviewer should begin accepting candidates. Bearden showed that c is either $\lfloor {\sqrt {n}}\rfloor $ or $\lceil {\sqrt {n}}\rceil $.[7] (In fact, whichever is closest to ${\sqrt {n}}$.) This follows from the fact that given a problem with $n$ applicants, the expected payoff for some arbitrary threshold $1\leq c\leq n$ is $V_{n}(c)=\sum _{t=c}^{n-1}\left[\prod _{s=c}^{t-1}\left({\frac {s-1}{s}}\right)\right]\left({\frac {1}{t+1}}\right)+\left[\prod _{s=c}^{n-1}\left({\frac {s-1}{s}}\right)\right]{\frac {1}{2}}={\frac {2cn-{c}^{2}+c-n}{2cn}}.$ Differentiating $V_{n}(c)$ with respect to c, one gets ${\frac {\partial V}{\partial c}}={\frac {-{c}^{\,2}+n}{2{c}^{\,2}n}}.$ Since $\partial ^{\,2}V/\partial c^{\,2}<0$ for all permissible values of $c$, we find that $V$ is maximized at $c={\sqrt {n}}$. Since V is concave in $c$, the optimal integer-valued threshold must be either $\lfloor {\sqrt {n}}\rfloor $ or $\lceil {\sqrt {n}}\rceil $. Thus, for most values of $n$ the interviewer will begin accepting applicants sooner in the cardinal payoff version than in the classical version where the objective is to select the single best applicant. Note that this is not an asymptotic result: it holds for all $n$. However, this is not the optimal policy to maximize expected value from a known distribution. In the case of a known distribution, optimal play can be calculated via dynamic programming. A more general form of this problem introduced by Palley and Kremer (2014)[8] assumes that as each new applicant arrives, the interviewer observes their rank relative to all of the applicants that have been observed previously. This model is consistent with the notion of an interviewer learning as they continue the search process by accumulating a set of past data points that they can use to evaluate new candidates as they arrive. A benefit of this so-called partial-information model is that decisions and outcomes achieved given the relative rank information can be directly compared to the corresponding optimal decisions and outcomes if the interviewer had been given full information about the value of each applicant. This full-information problem, in which applicants are drawn independently from a known distribution and the interviewer seeks to maximize the expected value of the applicant selected, was originally solved by Moser (1956),[9] Sakaguchi (1961),[10] and Karlin (1962).
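Because $V_{n}(c)$ has the closed form above, the optimal integer threshold can be found by direct enumeration; a minimal sketch (names and parameters are illustrative):

import math

def V(n, c):
    # Expected payoff of the rule that starts accepting candidates at
    # position c (the closed form derived above).
    return (2*c*n - c*c + c - n) / (2*c*n)

n = 100
best_c = max(range(1, n + 1), key=lambda c: V(n, c))
print(best_c, math.isqrt(n))   # 10 10: the optimal threshold equals sqrt(n) here
print(V(n, best_c))            # 0.905

For n = 100 the interviewer starts accepting at position 10 = √n, far earlier than the classical cutoff n/e ≈ 37.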
Other modifications There are several variants of the secretary problem that also have simple and elegant solutions. Pick the second-best, using one try One variant replaces the desire to pick the best with the desire to pick the second-best.[11][12][13] Robert J. Vanderbei calls this the "postdoc" problem, arguing that the "best" will go to Harvard. For this problem, the probability of success for an even number of applicants is exactly ${\frac {0.25n^{2}}{n(n-1)}}$. This probability tends to 1/4 as n tends to infinity, illustrating the fact that it is easier to pick the best than the second-best. Pick the top-k ones, using k tries Consider the problem of picking the k best secretaries out of n candidates, using k tries. In general, the optimal decision method starts by observing $r=\left\lfloor {\frac {n}{ke^{1/k}}}\right\rfloor $ applicants without picking any one of them, and then picks every candidate that is better than those first $r$ applicants until we run out of applicants or picks. If $k$ is held constant while $n\to \infty $, then the probability of success converges to ${\frac {1}{ek}}$.[14] By Vanderbei 1980, if $k=n/2$, then the probability of success is ${\frac {1}{n/2+1}}$. Pick the best, using multiple tries In this variant, a player is allowed $r$ choices and wins if any choice is the best. An optimal strategy for this problem belongs to the class of strategies defined by a set of threshold numbers $(a_{1},a_{2},...,a_{r})$, where $a_{1}>a_{2}>\cdots >a_{r}$. Specifically, imagine that you have $r$ letters of acceptance labelled from $1$ to $r$. You would have $r$ application officers, each holding one letter. You keep interviewing the candidates and rank them on a chart that every application officer can see. Now officer $i$ would send their letter of acceptance to the first candidate that is better than all candidates $1$ to $a_{i}$. (Unsent letters of acceptance are by default given to the last applicants, the same as in the standard secretary problem.)[15] In the $n\rightarrow \infty $ limit, each $a_{i}\sim ne^{-k_{i}}$, for some rational number $k_{i}$.[16] Probability of winning When $r=2$, the probability of winning converges to $e^{-1}+e^{-{\frac {3}{2}}}$ as $n\rightarrow \infty $. More generally, for positive integers $r$, the probability of winning converges to $p_{1}+p_{2}+\cdots +p_{r}$, where $p_{i}=\lim _{n\rightarrow \infty }{\frac {a_{i}}{n}}$.[16] Gilbert & Mosteller[15] computed the values up to $r=4$: $e^{-1}+e^{-{\frac {3}{2}}}+e^{-{\frac {47}{24}}}+e^{-{\frac {2761}{1152}}}$. Matsui & Ano 2016 gave a general algorithm. For example, $p_{5}=e^{-{\frac {4162637}{1474560}}}$.
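For r = 2 the limit $e^{-1}+e^{-3/2}\approx 0.591$ can be checked by simulation. The sketch below uses thresholds a1 ≈ ne^{-1} and a2 ≈ ne^{-3/2} and reads the rule as sending each remaining letter only to records (applicants better than everyone seen so far), since an applicant who is not a record cannot be the best; this reading, and the parameter choices, are assumptions of the sketch:

import math, random

def win_two_choices(n=500, trials=20_000):
    a1, a2 = round(n * math.exp(-1)), round(n * math.exp(-1.5))
    wins = 0
    for _ in range(trials):
        ranks = list(range(1, n + 1))
        random.shuffle(ranks)          # ranks[j-1] = quality of the j-th applicant
        best_seen, letters, got_best = 0, 0, False
        for j, q in enumerate(ranks, start=1):
            if q > best_seen:
                # a record arrives: the letter with the lower threshold fires first
                if letters == 0 and j > a2:
                    letters, got_best = 1, got_best or (q == n)
                elif letters == 1 and j > a1:
                    letters, got_best = 2, got_best or (q == n)
            best_seen = max(best_seen, q)
            if letters == 2:
                break
        wins += got_best
    return wins / trials

print(win_two_choices())   # should land near 0.59

With these thresholds the empirical win frequency should come out near 0.59, consistent with the stated limit.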
Experimental studies Experimental psychologists and economists have studied the decision behavior of actual people in secretary problem situations.[17] In large part, this work has shown that people tend to stop searching too soon. This may be explained, at least in part, by the cost of evaluating candidates. In real world settings, this might suggest that people do not search enough whenever they are faced with problems where the decision alternatives are encountered sequentially. For example, when trying to decide at which gas station along a highway to stop for gas, people might not search enough before stopping. If true, then they would tend to pay more for gas than if they had searched longer. The same may be true when people search online for airline tickets. Experimental research on problems such as the secretary problem is sometimes referred to as behavioral operations research. Neural correlates While there is a substantial body of neuroscience research on information integration, or the representation of belief, in perceptual decision-making tasks using both animal[18][19] and human subjects,[20] there is relatively little known about how the decision to stop gathering information is arrived at. Researchers have studied the neural bases of solving the secretary problem in healthy volunteers using functional MRI.[21] A Markov decision process (MDP) was used to quantify the value of continuing to search versus committing to the current option. Decisions to take versus decline an option engaged parietal and dorsolateral prefrontal cortices, as well as the ventral striatum, anterior insula, and anterior cingulate. Therefore, brain regions previously implicated in evidence integration and reward representation encode threshold crossings that trigger decisions to commit to a choice. History The secretary problem was apparently introduced in 1949 by Merrill M. Flood, who called it the fiancée problem in a lecture he gave that year. He referred to it several times during the 1950s, for example, in a conference talk at Purdue on 9 May 1958, and it eventually became widely known in the folklore although nothing was published at the time. In 1958 he sent a letter to Leonard Gillman, with copies to a dozen friends including Samuel Karlin and J. Robbins, outlining a proof of the optimum strategy, with an appendix by R. Palermo who proved that all strategies are dominated by a strategy of the form "reject the first p unconditionally, then accept the next candidate who is better".[22] The first publication was apparently by Martin Gardner in Scientific American, February 1960. He had heard about it from John H. Fox Jr., and L. Gerald Marnie, who had independently come up with an equivalent problem in 1958; they called it the "game of googol". Fox and Marnie did not know the optimum solution; Gardner asked for advice from Leo Moser, who (together with J. R. Pounder) provided a correct analysis for publication in the magazine. Soon afterwards, several mathematicians wrote to Gardner to tell him about the equivalent problem they had heard via the grapevine, all of which can most likely be traced to Flood's original work.[23] The 1/e-law of best choice is due to F. Thomas Bruss.[24] Ferguson has an extensive bibliography and points out that a similar (but different) problem had been considered by Arthur Cayley in 1875 and even by Johannes Kepler long before that, who spent two years investigating 11 candidates for marriage during 1611–1613 after the death of his first wife.[25] Combinatorial generalization The secretary problem can be generalized to the case where there are multiple different jobs. Again, there are $n$ applicants coming in random order. When an applicant arrives, she reveals a set of nonnegative numbers. Each value specifies her qualification for one of the jobs. The administrator not only has to decide whether or not to take the applicant but, if so, also has to assign her permanently to one of the jobs. The objective is to find an assignment where the sum of qualifications is as big as possible. This problem is identical to finding a maximum-weight matching in an edge-weighted bipartite graph where the $n$ nodes of one side arrive online in random order. Thus, it is a special case of the online bipartite matching problem. By a generalization of the classic algorithm for the secretary problem, it is possible to obtain an assignment where the expected sum of qualifications is only a factor of $e$ less than an optimal (offline) assignment.[26]
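A sketch of this sample-then-commit generalization, in the spirit of the Kesselheim et al. algorithm (the helper names, the parameters, and the use of SciPy's assignment solver are implementation choices of this sketch, not taken from the paper):

import numpy as np
from scipy.optimize import linear_sum_assignment

def online_assignment(weights, rng):
    # weights[i, j] = qualification of applicant i for job j.
    n, m = weights.shape
    order = rng.permutation(n)        # applicants arrive in random order
    sample = int(n / np.e)            # observation phase: no offers are made
    free = set(range(m))
    value = 0.0
    for t in range(sample, n):
        seen = order[: t + 1]
        # optimal offline matching of all applicants seen so far to all jobs
        rows, cols = linear_sum_assignment(weights[seen], maximize=True)
        match = dict(zip(rows, cols))
        # the newcomer is row t of `seen`: hire permanently only if the
        # offline matching assigns it a job that is still free
        if t in match and match[t] in free:
            free.remove(match[t])
            value += weights[order[t], match[t]]
    return value

rng = np.random.default_rng(0)
W = rng.random((30, 10))
rows, cols = linear_sum_assignment(W, maximize=True)
print(online_assignment(W, rng), W[rows, cols].sum())   # online value vs offline optimum

At each arrival after the sampling phase, the newcomer is hired for a job only if an optimal offline matching of everyone seen so far would give them that job and it is still free; in expectation over arrival orders, this collects at least a 1/e fraction of the offline optimum, which is the factor-e guarantee stated above.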
See also • Assignment problem • Odds algorithm • Optimal stopping • Robbins' problem • Search theory • Stable marriage problem Notes 1. Ferguson, Thomas S. (August 1989). "Who Solved the Secretary Problem?". Statistical Science. 4 (3): 282–289. doi:10.1214/ss/1177012493. 2. Hill, Theodore P. (2009). "Knowing When to Stop". American Scientist. 97 (2): 126–133. doi:10.1511/2009.77.126. ISSN 1545-2786. S2CID 124798270. For French translation, see cover story in the July issue of Pour la Science (2009). 3. Gnedin 2021. 4. Gardner 1966. 5. Cover, Thomas M. (1987), Cover, Thomas M.; Gopinath, B. (eds.), "Pick the Largest Number", Open Problems in Communication and Computation, New York, NY: Springer, p. 152, doi:10.1007/978-1-4612-4808-8_43, ISBN 978-1-4612-4808-8, retrieved 25 June 2023. 6. Gnedin 1994. 7. Bearden 2006. 8. Palley, Asa B.; Kremer, Mirko (8 July 2014). "Sequential Search and Learning from Rank Feedback: Theory and Experimental Evidence". Management Science. 60 (10): 2525–2542. doi:10.1287/mnsc.2014.1902. ISSN 0025-1909. 9. Moser, Leo (1956). "On a problem of Cayley". Scripta Math. 22: 289–292. 10. Sakaguchi, Minoru (1 June 1961). "Dynamic programming of some sequential sampling design". Journal of Mathematical Analysis and Applications. 2 (3): 446–466. doi:10.1016/0022-247X(61)90023-3. ISSN 0022-247X. 11. Rose, John S. (1982). "Selection of nonextremal candidates from a random sequence". J. Optim. Theory Appl. 38 (2): 207–219. doi:10.1007/BF00934083. ISSN 0022-3239. S2CID 121339045. 12. Szajowski, Krzysztof (1982). "Optimal choice of an object with ath rank". Matematyka Stosowana. Annales Societatis Mathematicae Polonae, Series III. 10 (19): 51–65. doi:10.14708/ma.v10i19.1533. ISSN 0137-2890. 13. Vanderbei, Robert J. (21 June 2021). "The postdoc variant of the secretary problem". Mathematica Applicanda. Annales Societatis Mathematicae Polonae, Series III. 49 (1): 3–13. doi:10.14708/ma.v49i1.7076. ISSN 2299-4009. 14. Girdhar & Dudek 2009. 15. Gilbert & Mosteller 1966. 16. Matsui & Ano 2016. 17. Bearden, Murphy, and Rapoport, 2006; Bearden, Rapoport, and Murphy, 2006; Seale and Rapoport, 1997; Palley and Kremer, 2014. 18. Shadlen, M. N.; Newsome, W. T. (23 January 1996). "Motion perception: seeing and deciding". Proceedings of the National Academy of Sciences. 93 (2): 628–633. Bibcode:1996PNAS...93..628S. doi:10.1073/pnas.93.2.628. PMC 40102. PMID 8570606. 19. Roitman, Jamie D.; Shadlen, Michael N. (1 November 2002). "Response of Neurons in the Lateral Intraparietal Area during a Combined Visual Discrimination Reaction Time Task". The Journal of Neuroscience. 22 (21): 9475–9489. doi:10.1523/JNEUROSCI.22-21-09475.2002. PMC 6758024. PMID 12417672. 20. Heekeren, Hauke R.; Marrett, Sean; Ungerleider, Leslie G. (9 May 2008). "The neural systems that mediate human perceptual decision making". Nature Reviews Neuroscience. 9 (6): 467–479. doi:10.1038/nrn2374. PMID 18464792. S2CID 7416645. 21. Costa, V. D.; Averbeck, B. B. (18 October 2013). "Frontal-Parietal and Limbic-Striatal Activity Underlies Information Sampling in the Best Choice Problem". Cerebral Cortex. 25 (4): 972–982. doi:10.1093/cercor/bht286. PMC 4366612. PMID 24142842. 22. Flood 1958. 23. Gardner 1966, Problem 3. 24. Bruss 1984. 25. Ferguson 1989. 26. Kesselheim, Thomas; Radke, Klaus; Tönnis, Andreas; Vöcking, Berthold (2013). "An Optimal Online Algorithm for Weighted Bipartite Matching and Extensions to Combinatorial Auctions". Algorithms – ESA 2013. Lecture Notes in Computer Science. Vol. 8125. pp. 589–600. doi:10.1007/978-3-642-40450-4_50. ISBN 978-3-642-40449-8.
References • Bearden, J.N. (2006). "A new secretary problem with rank-based selection and cardinal payoffs". Journal of Mathematical Psychology. 50: 58–59. doi:10.1016/j.jmp.2005.11.003. • Bearden, J.N.; Murphy, R.O.; Rapoport, A. (2005). "A multi-attribute extension of the secretary problem: Theory and experiments". Journal of Mathematical Psychology. 49 (5): 410–425. CiteSeerX 10.1.1.497.6468. doi:10.1016/j.jmp.2005.08.002. S2CID 9186039. • Bearden, J. Neil; Rapoport, Amnon; Murphy, Ryan O. (September 2006). "Sequential Observation and Selection with Rank-Dependent Payoffs: An Experimental Study". Management Science. 52 (9): 1437–1449. doi:10.1287/mnsc.1060.0535. • Bruss, F. Thomas (June 2000). "Sum the odds to one and stop". The Annals of Probability. 28 (3): 1384–1391. doi:10.1214/aop/1019160340. • Bruss, F. Thomas (October 2003). "A note on bounds for the odds theorem of optimal stopping". The Annals of Probability. 31 (4): 1859–1861. doi:10.1214/aop/1068646368. • Bruss, F. Thomas (August 1984). "A Unified Approach to a Class of Best Choice Problems with an Unknown Number of Options". The Annals of Probability. 12 (3): 882–889. doi:10.1214/aop/1176993237. • Flood, Merrill R. (1958). "Proof of the optimum strategy". Letter to Martin Gardner. Martin Gardner papers series 1, box 5, folder 19: Stanford University Archives. • Freeman, P.R. (1983). "The secretary problem and its extensions: A review". International Statistical Review / Revue Internationale de Statistique. 51 (2): 189–206. doi:10.2307/1402748. JSTOR 1402748. • Gardner, Martin (1966). "3". New Mathematical Diversions from Scientific American. Simon and Schuster. [reprints his original column published in February 1960 with additional comments] • Girdhar, Yogesh; Dudek, Gregory (2009). "Optimal Online Data Sampling or How to Hire the Best Secretaries". 2009 Canadian Conference on Computer and Robot Vision. pp. 292–298. CiteSeerX 10.1.1.161.41. doi:10.1109/CRV.2009.30. ISBN 978-1-4244-4211-9. S2CID 2742443. • Gilbert, J; Mosteller, F (1966). "Recognizing the Maximum of a Sequence". Journal of the American Statistical Association. 61 (313): 35–73. doi:10.2307/2283044. JSTOR 2283044. • Gnedin, A. (1994). "A solution to the game of Googol". Annals of Probability. 22 (3): 1588–1595. doi:10.1214/aop/1176988613. • Gnedin, A. (2021). "The best choice problem with random arrivals: How to beat the 1/e-strategy". Stochastic Processes and Their Applications. 145: 226–240. doi:10.1016/j.spa.2021.12.008. S2CID 245449000. • Hill, T.P. (2009). "Knowing When to Stop". American Scientist. 97: 126–133. (For French translation, see cover story in the July issue of Pour la Science (2009).) • Ketelaar, Timothy; Todd, Peter M. (2001). "Framing Our Thoughts: Ecological Rationality as Evolutionary Psychology's Answer to the Frame Problem". Conceptual Challenges in Evolutionary Psychology. Studies in Cognitive Systems. Vol. 27. pp. 179–211. doi:10.1007/978-94-010-0618-7_7. ISBN 978-94-010-3890-4. • Matsui, T.; Ano, K. (2016). "Lower bounds for Bruss' odds problem with multiple stoppings". Mathematics of Operations Research. 41 (2): 700–714. arXiv:1204.5537. doi:10.1287/moor.2015.0748. S2CID 31778896.
• Miller, Geoffrey F. (2001). The mating mind: how sexual choice shaped the evolution of human nature. Anchor Books. ISBN 978-0-385-49517-2. • Sardelis, Dimitris A.; Valahas, Theodoros M. (March 1999). "Decision Making: A Golden Rule". The American Mathematical Monthly. 106 (3): 215. doi:10.2307/2589677. JSTOR 2589677. • Seale, D.A.; Rapoport, A. (1997). "Sequential decision making with relative ranks: An experimental investigation of the 'secretary problem'". Organizational Behavior and Human Decision Processes. 69 (3): 221–236. doi:10.1006/obhd.1997.2683. • Stein, W.E.; Seale, D.A.; Rapoport, A. (2003). "Analysis of heuristic solutions to the best choice problem". European Journal of Operational Research. 151: 140–152. doi:10.1016/S0377-2217(02)00601-X. • Vanderbei, R. J. (November 1980). "The Optimal Choice of a Subset of a Population". Mathematics of Operations Research. 5 (4): 481–486. doi:10.1287/moor.5.4.481. • Vanderbei, Robert J. (2012). "The postdoc variant of the secretary problem" (PDF). CiteSeerX 10.1.1.366.1718. External links • OEIS sequence A054404 (Number of daughters to wait before picking in sultan's dowry problem with n daughters) • Weisstein, Eric W. "Sultan's Dowry Problem". MathWorld. • Neil Bearden. "Optimal Search (Secretary Problems)". Archived from the original on 4 January 2017. • Optimal Stopping and Applications book by Thomas S. Ferguson Notes 1. The table of optimal thresholds and win probabilities above was computed with the following script:

import numpy as np
import pandas as pd

# Win probability P(r) of the cutoff rule that rejects the first r - 1 applicants.
def func(r, n):
    if r == 1:
        return 1 / n   # selecting the first applicant wins with probability 1/n
    return (r - 1) / n * np.sum([1 / (i - 1) for i in range(r, n + 1)])

# Solve the problem for a specific n.
def solve(n):
    values = [func(r, n) for r in range(1, n + 1)]
    r_max = np.argmax(values) + 1
    return r_max, values[r_max - 1]

# Print the results for n = 1, ..., n_max as a Markdown table.
def print_table(n_max):
    data = [solve(n) for n in range(1, n_max + 1)]
    df = pd.DataFrame(data, columns=['r', 'Max Value'], index=range(1, n_max + 1))
    df.index.name = 'n'
    print(df.transpose().to_markdown())

# Print the table for n from 1 to 10.
print_table(10)
Empty sum Not to be confused with Zero sum. In mathematics, an empty sum, or nullary sum,[1] is a summation where the number of terms is zero. The natural way to extend non-empty sums[2] is to let the empty sum be the additive identity. Let $a_{1}$, $a_{2}$, $a_{3}$, ... be a sequence of numbers, and let $s_{m}=\sum _{i=1}^{m}a_{i}=a_{1}+\cdots +a_{m}$ be the sum of the first m terms of the sequence. This satisfies the recurrence $s_{m}=s_{m-1}+a_{m}$ provided that we use the following natural convention: $s_{0}=0$. In other words, a "sum" $s_{1}$ with only one term evaluates to that one term, while a "sum" $s_{0}$ with no terms evaluates to 0. Allowing a "sum" with only 1 or 0 terms reduces the number of cases to be considered in many mathematical formulas. Such "sums" are natural starting points in induction proofs, as well as in algorithms. For these reasons, the "empty sum is zero" extension is standard practice in mathematics and computer programming (assuming the domain has a zero element). For the same reason, the empty product is taken to be the multiplicative identity. For sums of other objects (such as vectors, matrices, polynomials), the value of an empty summation is taken to be its additive identity. Examples Empty linear combinations In linear algebra, a basis of a vector space V is a linearly independent subset B such that every element of V is a linear combination of B. The empty sum convention allows the zero-dimensional vector space V={0} to have a basis, namely the empty set. See also • Empty product • Iterated binary operation • Empty function References 1. Harper, Robert (2016). Practical Foundations for Programming Languages. Cambridge University Press. p. 86. ISBN 9781107029576. 2. David M. Bloom (1979). Linear Algebra and Geometry. p. 45. ISBN 0521293243.