Sum Sum most commonly means the total of two or more numbers added together; see addition. Sum can also refer to: Mathematics • Sum (category theory), the generic concept of summation in mathematics • Sum, the result of summation, the addition of a sequence of numbers • 3SUM, a term from computational complexity theory • Band sum, a way of connecting mathematical knots • Connected sum, a way of gluing manifolds • Digit sum, in number theory • Direct sum, a combination of algebraic objects • Direct sum of groups • Direct sum of modules • Direct sum of permutations • Direct sum of topological groups • Einstein summation, a way of contracting tensor indices • Empty sum, a sum with no terms • Indefinite sum, the inverse of a finite difference • Kronecker sum, an operation considered a kind of addition for matrices • Matrix addition, in linear algebra • Minkowski addition, a sum of two subsets of a vector space • Power sum symmetric polynomial, in commutative algebra • Prefix sum, in computing • Pushout (category theory) (also called an amalgamated sum, a cocartesian square, a fibered coproduct, or a fibered sum), the colimit of a diagram consisting of two morphisms f : Z → X and g : Z → Y with a common domain • QCD sum rules, in quantum field theory • Riemann sum, in calculus • Rule of sum, in combinatorics • Subset sum problem, in cryptography • Sum rule in differentiation, in calculus • Sum rule in integration, in calculus • Sum rule in quantum mechanics • Wedge sum, a one-point union of topological spaces • Whitney sum, of fiber bundles • Zero-sum problem, in combinatorics Computing and technology • Sum (Unix), a program for generating checksums • StartUp-Manager, a program to configure GRUB, GRUB2, Usplash and Splashy • Sum type, a computer science term Art and entertainment • Sum, the first beat (pronounced like "some") in any rhythmic cycle of Hindustani classical music • "Sum", a song by Pink Floyd from The Endless River • Sum: Forty Tales from the Afterlives, a 2009 collection of short stories by David Eagleman • Sum 41, a Canadian punk band • SUM, the computer in "Goat Song", a novelette by Poul Anderson published in The Magazine of Fantasy and Science Fiction (1972) Organizations • Senter for utvikling og miljø (Centre for Development and the Environment), a research institute which is part of the University of Oslo • Soccer United Marketing, the for-profit marketing arm of Major League Soccer and the exclusive marketing partner of the United States Soccer Federation • Society for the Establishment of Useful Manufactures, a now-defunct private state-sponsored corporation founded in 1791 to promote industrial development along the Passaic River in New Jersey in the United States • The State University of Management, a Russian university • Save Uganda Movement, a Ugandan militant opposition group Places • Sum (administrative division), an administrative division in Mongolia, China, and some areas of Russia • Sum (Mongolia), an administrative division of Mongolia • SUM, the IATA airport code for the Sumter Airport in Sumter County, South Carolina, U.S. Other uses • Sum, an old name for the Finns in East Slavic languages, derived from the word Suomi, "Finland" • Soum (currency) (also spelled "sum"), a unit of currency used in some Turkic-speaking countries of Central Asia • SUM (interbank network), an interbank network in 42 U.S. 
states • SUM, the ISO 639-3 code for the Sumo language • Cen (surname), sometimes romanized as Sum • Cogito, ergo sum, Latin for "I think, therefore I am" • Sum certain, a legal term See also • Addition • Additive category • Preadditive category
Singular value decomposition In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any $\ m\times n\ $ matrix. It is related to the polar decomposition. Specifically, the singular value decomposition of an $\ m\times n\ $ complex matrix M is a factorization of the form $\ \mathbf {M} =\mathbf {U\Sigma V^{*}} \ ,$ where U is an $\ m\times m\ $ complex unitary matrix, $\ \mathbf {\Sigma } \ $ is an $\ m\times n\ $ rectangular diagonal matrix with non-negative real numbers on the diagonal, V is an $n\times n$ complex unitary matrix, and $\ \mathbf {V^{*}} \ $ is the conjugate transpose of V. Such decomposition always exists for any complex matrix. If M is real, then U and V can be guaranteed to be real orthogonal matrices; in such contexts, the SVD is often denoted $\ \mathbf {U\Sigma V} ^{\mathsf {T}}\ .$ The diagonal entries $\ \sigma _{i}=\Sigma _{ii}\ $ of $\ \mathbf {\Sigma } \ $ are uniquely determined by M and are known as the singular values of M. The number of non-zero singular values is equal to the rank of M. The columns of U and the columns of V are called left-singular vectors and right-singular vectors of M, respectively. They form two sets of orthonormal bases u1, ..., um and v1, ..., vn , and if they are sorted so that the singular values $\ \sigma _{i}\ $ with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as $\ \mathbf {M} =\sum _{i=1}^{r}\sigma _{i}\mathbf {u} _{i}\mathbf {v} _{i}^{*}\ ,$ where $\ r\leq \min\{m,n\}\ $ is the rank of M. The SVD is not unique. It is always possible to choose the decomposition so that the singular values $\Sigma _{ii}$ are in descending order. In this case, $\mathbf {\Sigma } $ (but not U and V) is uniquely determined by M. The term sometimes refers to the compact SVD, a similar decomposition $\ \mathbf {M} =\mathbf {U\Sigma V^{*}} \ $ in which $\ \mathbf {\Sigma } \ $ is square diagonal of size $r\times r$, where $\ r\leq \min\{m,n\}\ $ is the rank of M, and has only the non-zero singular values. In this variant, U is an $m\times r$ semi-unitary matrix and $\ \mathbf {V} \ $ is an $n\times r$ semi-unitary matrix, such that $\ \mathbf {U^{*}U} =\mathbf {V^{*}V} =\mathbf {I} _{r}\ .$ Mathematical applications of the SVD include computing the pseudoinverse, matrix approximation, and determining the rank, range, and null space of a matrix. The SVD is also extremely useful in all areas of science, engineering, and statistics, such as signal processing, least squares fitting of data, and process control. Intuitive interpretations Rotation, coordinate scaling, and reflection In the special case when M is an m × m real square matrix, the matrices U and V⁎ can be chosen to be real m × m matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here as A, as a linear transformation x ↦ Ax of the space Rm, the matrices U and V⁎ represent rotations or reflection of the space, while $\mathbf {\Sigma } $ represents the scaling of each coordinate xi by the factor σi. Thus the SVD decomposition breaks down any linear transformation of Rm into a composition of three geometrical transformations: a rotation or reflection (V⁎), followed by a coordinate-by-coordinate scaling ($\mathbf {\Sigma } $), followed by another rotation or reflection (U). 
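The decomposition just described can be inspected numerically. The following is a minimal sketch using NumPy (the 2 × 2 real matrix A is an arbitrary illustrative choice, not part of the definition above); it checks that the computed U and V are orthogonal, that the middle factor is diagonal with non-negative entries, and that the three factors compose back to A:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # an arbitrary real 2 x 2 example
U, s, Vt = np.linalg.svd(A)       # s holds the singular values, in descending order
Sigma = np.diag(s)                # rebuild the (here square) diagonal factor

assert np.allclose(U.T @ U, np.eye(2))     # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(2))   # V* is orthogonal
assert np.all(s >= 0)                      # singular values are non-negative
assert np.allclose(U @ Sigma @ Vt, A)      # rotation/reflection, scaling, rotation/reflection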
In particular, if M has a positive determinant, then U and V⁎ can be chosen to be either both rotations with reflections or both rotations without reflections. If the determinant is negative, exactly one of them will have a reflection. If the determinant is zero, each can be independently chosen to be of either type. If the matrix M is real but not square, namely m×n with m ≠ n, it can be interpreted as a linear transformation from Rn to Rm. Then U and V⁎ can be chosen to be rotations/reflections of Rm and Rn, respectively; and $\mathbf {\Sigma } $, besides scaling the first $\min\{m,n\}$ coordinates, also either extends the vector with zeros (if m > n) or removes trailing coordinates (if m < n), so as to turn Rn into Rm. Singular values as semiaxes of an ellipse or ellipsoid The singular values can be interpreted as the magnitudes of the semiaxes of an ellipse in 2D. This concept can be generalized to n-dimensional Euclidean space, with the singular values of any n × n square matrix being viewed as the magnitudes of the semiaxes of an n-dimensional ellipsoid. Similarly, the singular values of any m × n matrix can be viewed as the magnitudes of the semiaxes of an n-dimensional ellipsoid in m-dimensional space, for example as an ellipse in a (tilted) 2D plane in a 3D space. Singular values encode the magnitudes of the semiaxes, while singular vectors encode their directions. See below for further details. The columns of U and V are orthonormal bases Since U and V⁎ are unitary, the columns of each of them form a set of orthonormal vectors, which can be regarded as basis vectors. The matrix M maps the basis vector Vi to the stretched unit vector σi Ui. By the definition of a unitary matrix, the same is true for their conjugate transposes U⁎ and V, except the geometric interpretation of the singular values as stretches is lost. In short, the columns of U, U⁎, V, and V⁎ are orthonormal bases. When $\mathbf {M} $ is a positive-semidefinite Hermitian matrix, U and V are both equal to the unitary matrix used to diagonalize $\mathbf {M} $. However, when $\mathbf {M} $ is not positive-semidefinite and Hermitian but still diagonalizable, its eigendecomposition and singular value decomposition are distinct. Geometric meaning Because U and V are unitary, we know that the columns U1, ..., Um of U yield an orthonormal basis of Km and the columns V1, ..., Vn of V yield an orthonormal basis of Kn (with respect to the standard scalar products on these spaces). The linear transformation $T\colon \left\{{\begin{aligned}K^{n}&\to K^{m}\\x&\mapsto \mathbf {M} x\end{aligned}}\right.$ has a particularly simple description with respect to these orthonormal bases: we have $T(\mathbf {V} _{i})=\sigma _{i}\mathbf {U} _{i},\qquad i=1,\ldots ,\min(m,n),$ where σi is the i-th diagonal entry of $\mathbf {\Sigma } $, and T(Vi) = 0 for i > min(m,n). The geometric content of the SVD theorem can thus be summarized as follows: for every linear map T : Kn → Km one can find orthonormal bases of Kn and Km such that T maps the i-th basis vector of Kn to a non-negative multiple of the i-th basis vector of Km, and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries. To get a more visual flavor of singular values and SVD factorization – at least when working on real vector spaces – consider the sphere S of radius one in Rn. The linear map T maps this sphere onto an ellipsoid in Rm. 
Non-zero singular values are simply the lengths of the semi-axes of this ellipsoid. Especially when n = m, and all the singular values are distinct and non-zero, the SVD of the linear map T can be easily analyzed as a succession of three consecutive moves: consider the ellipsoid T(S) and specifically its axes; then consider the directions in Rn sent by T onto these axes. These directions happen to be mutually orthogonal. Apply first an isometry V⁎ sending these directions to the coordinate axes of Rn. On a second move, apply an endomorphism D diagonalized along the coordinate axes and stretching or shrinking in each direction, using the semi-axes lengths of T(S) as stretching coefficients. The composition D ∘ V⁎ then sends the unit-sphere onto an ellipsoid isometric to T(S). To define the third and last move, apply an isometry U to this ellipsoid to obtain T(S). As can be easily checked, the composition U ∘ D ∘ V⁎ coincides with T. Example Consider the 4 × 5 matrix $\mathbf {M} ={\begin{bmatrix}1&0&0&0&2\\0&0&3&0&0\\0&0&0&0&0\\0&2&0&0&0\end{bmatrix}}$ A singular value decomposition of this matrix is given by UΣV⁎ ${\begin{aligned}\mathbf {U} &={\begin{bmatrix}0&-1&0&0\\-1&0&0&0\\0&0&0&-1\\0&0&-1&0\end{bmatrix}}\\[6pt]{\boldsymbol {\Sigma }}&={\begin{bmatrix}3&0&0&0&0\\0&{\sqrt {5}}&0&0&0\\0&0&2&0&0\\0&0&0&0&0\end{bmatrix}}\\[6pt]\mathbf {V} ^{*}&={\begin{bmatrix}0&0&-1&0&0\\-{\sqrt {0.2}}&0&0&0&-{\sqrt {0.8}}\\0&-1&0&0&0\\0&0&0&1&0\\-{\sqrt {0.8}}&0&0&0&{\sqrt {0.2}}\end{bmatrix}}\end{aligned}}$ The scaling matrix $\mathbf {\Sigma } $ is zero outside of the diagonal, and one of its diagonal elements (the fourth) is zero. Furthermore, because the matrices U and V⁎ are unitary, multiplying by their respective conjugate transposes yields identity matrices, as shown below. In this case, because U and V⁎ are real valued, each is an orthogonal matrix. ${\begin{aligned}\mathbf {U} \mathbf {U} ^{*}&={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}=\mathbf {I} _{4}\\[6pt]\mathbf {V} \mathbf {V} ^{*}&={\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{bmatrix}}=\mathbf {I} _{5}\end{aligned}}$ This particular singular value decomposition is not unique. 
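The factorization above, and the alternative choice of V shown next, can be checked numerically. A short NumPy sketch (the explicit factors are those listed above; np.linalg.svd may return different but equally valid factors, e.g. with other signs or another basis of the null space):

import numpy as np

M = np.array([[1, 0, 0, 0, 2],
              [0, 0, 3, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 2, 0, 0, 0]], dtype=float)

U = np.array([[ 0, -1,  0,  0],
              [-1,  0,  0,  0],
              [ 0,  0,  0, -1],
              [ 0,  0, -1,  0]], dtype=float)
Sigma = np.zeros((4, 5))
Sigma[[0, 1, 2], [0, 1, 2]] = [3, np.sqrt(5), 2]      # the fourth singular value is 0
Vt = np.array([[ 0,            0, -1, 0,  0           ],
               [-np.sqrt(0.2), 0,  0, 0, -np.sqrt(0.8)],
               [ 0,           -1,  0, 0,  0           ],
               [ 0,            0,  0, 1,  0           ],
               [-np.sqrt(0.8), 0,  0, 0,  np.sqrt(0.2)]])

assert np.allclose(U @ Sigma @ Vt, M)                            # reproduces M
assert np.allclose(np.linalg.svd(M)[1], [3, np.sqrt(5), 2, 0])   # same singular values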
Choosing $\mathbf {V} $ such that $\mathbf {V} ^{*}={\begin{bmatrix}0&0&-1&0&0\\-{\sqrt {0.2}}&0&0&0&-{\sqrt {0.8}}\\0&-1&0&0&0\\{\sqrt {0.4}}&0&0&{\sqrt {0.5}}&-{\sqrt {0.1}}\\-{\sqrt {0.4}}&0&0&{\sqrt {0.5}}&{\sqrt {0.1}}\end{bmatrix}}$ is also a valid singular value decomposition: only the last two rows of $\mathbf {V} ^{*}$, which correspond to the zero singular values and span the null space of M, have been replaced by a different orthonormal basis of that null space, while U and $\mathbf {\Sigma } $ are unchanged. SVD and spectral decomposition Singular values, singular vectors, and their relation to the SVD A non-negative real number σ is a singular value for M if and only if there exist unit-length vectors $\mathbf {u} $ in Km and $\mathbf {v} $ in Kn such that $\mathbf {Mv} =\sigma \mathbf {u} \,{\text{ and }}\mathbf {M} ^{*}\mathbf {u} =\sigma \mathbf {v} .$ The vectors $\mathbf {u} $ and $\mathbf {v} $ are called left-singular and right-singular vectors for σ, respectively. In any singular value decomposition $\mathbf {M} =\mathbf {U} {\boldsymbol {\Sigma }}\mathbf {V} ^{*}$ the diagonal entries of $\mathbf {\Sigma } $ are equal to the singular values of M. The first p = min(m, n) columns of U and V are, respectively, left- and right-singular vectors for the corresponding singular values. Consequently, the above theorem implies that: • An m × n matrix M has at most p distinct singular values. • It is always possible to find a unitary basis U for Km with a subset of basis vectors spanning the left-singular vectors of each singular value of M. • It is always possible to find a unitary basis V for Kn with a subset of basis vectors spanning the right-singular vectors of each singular value of M. A singular value for which we can find two left (or right) singular vectors that are linearly independent is called degenerate. If $\mathbf {u} _{1}$ and $\mathbf {u} _{2}$ are two left-singular vectors which both correspond to the singular value σ, then any normalized linear combination of the two vectors is also a left-singular vector corresponding to the singular value σ. A similar statement is true for right-singular vectors. The number of independent left and right-singular vectors coincides, and these singular vectors appear in the same columns of U and V corresponding to diagonal elements of $\mathbf {\Sigma } $ all with the same value σ. As an exception, the left and right-singular vectors of singular value 0 comprise all unit vectors in the cokernel and kernel, respectively, of M, which by the rank–nullity theorem cannot be the same dimension if m ≠ n. Even if all singular values are nonzero, if m > n then the cokernel is nontrivial, in which case U is padded with m − n orthogonal vectors from the cokernel. Conversely, if m < n, then V is padded by n − m orthogonal vectors from the kernel. However, if the singular value of 0 exists, the extra columns of U or V already appear as left or right-singular vectors. Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by a unit-phase factor eiφ (for the real case up to a sign). 
Consequently, if all singular values of a square matrix M are non-degenerate and non-zero, then its singular value decomposition is unique, up to multiplication of a column of U by a unit-phase factor and simultaneous multiplication of the corresponding column of V by the same unit-phase factor. In general, the SVD is unique up to arbitrary unitary transformations applied uniformly to the column vectors of both U and V spanning the subspaces of each singular value, and up to arbitrary unitary transformations on vectors of U and V spanning the kernel and cokernel, respectively, of M. Relation to eigenvalue decomposition The singular value decomposition is very general in the sense that it can be applied to any m × n matrix, whereas eigenvalue decomposition can only be applied to square diagonalizable matrices. Nevertheless, the two decompositions are related. If M has SVD $\mathbf {M} =\mathbf {U} {\boldsymbol {\Sigma }}\mathbf {V} ^{*}$, the following two relations hold: ${\begin{aligned}\mathbf {M} ^{*}\mathbf {M} &=\mathbf {V} {\boldsymbol {\Sigma }}^{*}\mathbf {U} ^{*}\,\mathbf {U} {\boldsymbol {\Sigma }}\mathbf {V} ^{*}=\mathbf {V} ({\boldsymbol {\Sigma }}^{*}{\boldsymbol {\Sigma }})\mathbf {V} ^{*}\\\mathbf {M} \mathbf {M} ^{*}&=\mathbf {U} {\boldsymbol {\Sigma }}\mathbf {V} ^{*}\,\mathbf {V} {\boldsymbol {\Sigma }}^{*}\mathbf {U} ^{*}=\mathbf {U} ({\boldsymbol {\Sigma }}{\boldsymbol {\Sigma }}^{*})\mathbf {U} ^{*}\end{aligned}}$ The right-hand sides of these relations describe the eigenvalue decompositions of the left-hand sides. Consequently: • The columns of V (referred to as right-singular vectors) are eigenvectors of M⁎M. • The columns of U (referred to as left-singular vectors) are eigenvectors of MM⁎. • The non-zero elements of $\mathbf {\Sigma } $ (non-zero singular values) are the square roots of the non-zero eigenvalues of M⁎M or MM⁎. In the special case of M being a normal matrix, and thus also square, the spectral theorem ensures that it can be unitarily diagonalized using a basis of eigenvectors, and thus decomposed as $\mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}$ for some unitary matrix U and diagonal matrix D with complex elements σi along the diagonal. When M is positive semi-definite, σi will be non-negative real numbers so that the decomposition M = UDU⁎ is also a singular value decomposition. Otherwise, it can be recast as an SVD by moving the phase eiφ of each σi to either its corresponding Vi or Ui. The natural connection of the SVD to non-normal matrices is through the polar decomposition theorem: M = SR, where S = UΣU⁎ is positive semidefinite and normal, and R = UV⁎ is unitary. Thus, except for positive semi-definite matrices, the eigenvalue decomposition and SVD of M, while related, differ: the eigenvalue decomposition is M = UDU−1, where U is not necessarily unitary and D is not necessarily positive semi-definite, while the SVD is M = UΣV⁎, where $\mathbf {\Sigma } $ is diagonal and positive semi-definite, and U and V are unitary matrices that are not necessarily related except through the matrix M. While only non-defective square matrices have an eigenvalue decomposition, any $m\times n$ matrix has an SVD. Applications of the SVD Pseudoinverse The singular value decomposition can be used for computing the pseudoinverse of a matrix. (Various authors use different notation for the pseudoinverse; here we use †.) 
Indeed, the pseudoinverse of the matrix M with singular value decomposition M = UΣV⁎ is M† = V Σ† U⁎ where Σ† is the pseudoinverse of Σ, which is formed by replacing every non-zero diagonal entry by its reciprocal and transposing the resulting matrix. The pseudoinverse is one way to solve linear least squares problems. Solving homogeneous linear equations A set of homogeneous linear equations can be written as Ax = 0 for a matrix A and vector x. A typical situation is that A is known and a non-zero x is to be determined which satisfies the equation. Such an x belongs to A's null space and is sometimes called a (right) null vector of A. The vector x can be characterized as a right-singular vector corresponding to a singular value of A that is zero. This observation means that if A is a square matrix and has no vanishing singular value, the equation has no non-zero x as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. Analogously to the definition of a (right) null vector, a non-zero x satisfying x⁎A = 0, with x⁎ denoting the conjugate transpose of x, is called a left null vector of A. Total least squares minimization A total least squares problem seeks the vector x that minimizes the 2-norm of a vector Ax under the constraint ||x|| = 1. The solution turns out to be the right-singular vector of A corresponding to the smallest singular value. Range, null space and rank Another application of the SVD is that it provides an explicit representation of the range and null space of a matrix M. The right-singular vectors corresponding to vanishing singular values of M span the null space of M and the left-singular vectors corresponding to the non-zero singular values of M span the range of M. For example, in the above example the null space is spanned by the last two rows of V⁎ and the range is spanned by the first three columns of U. As a consequence, the rank of M equals the number of non-zero singular values which is the same as the number of non-zero diagonal elements in $\mathbf {\Sigma } $. In numerical linear algebra the singular values can be used to determine the effective rank of a matrix, as rounding error may lead to small but non-zero singular values in a rank deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero. Low-rank matrix approximation Some practical applications need to solve the problem of approximating a matrix M with another matrix ${\tilde {\mathbf {M} }}$, said to be truncated, which has a specific rank r. In the case that the approximation is based on minimizing the Frobenius norm of the difference between M and ${\tilde {\mathbf {M} }}$ under the constraint that $\operatorname {rank} \left({\tilde {\mathbf {M} }}\right)=r$, it turns out that the solution is given by the SVD of M, namely ${\tilde {\mathbf {M} }}=\mathbf {U} {\tilde {\boldsymbol {\Sigma }}}\mathbf {V} ^{*},$ where ${\tilde {\boldsymbol {\Sigma }}}$ is the same matrix as $\mathbf {\Sigma } $ except that it contains only the r largest singular values (the other singular values are replaced by zero). This is known as the Eckart–Young theorem, as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; see Stewart 1993). Separable models The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. 
By separable, we mean that a matrix A can be written as an outer product of two vectors A = u ⊗ v, or, in coordinates, $A_{ij}=u_{i}v_{j}$. Specifically, the matrix M can be decomposed as $\mathbf {M} =\sum _{i}\mathbf {A} _{i}=\sum _{i}\sigma _{i}\mathbf {U} _{i}\otimes \mathbf {V} _{i}.$ Here Ui and Vi are the i-th columns of the corresponding SVD matrices, σi are the ordered singular values, and each Ai is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zero σi is exactly the rank of the matrix. Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described[1] by a Gabor filter in the space domain multiplied by a modulation function in the time domain. Thus, given a linear filter evaluated through, for example, reverse correlation, one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column of U in the SVD factorization is then a Gabor while the first column of V represents the time modulation (or vice versa). One may then define an index of separability $\alpha ={\frac {\sigma _{1}^{2}}{\sum _{i}\sigma _{i}^{2}}},$ which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition.[2] Nearest orthogonal matrix It is possible to use the SVD of a square matrix A to determine the orthogonal matrix O closest to A. The closeness of fit is measured by the Frobenius norm of O − A. The solution is the product UV⁎.[3] This intuitively makes sense because an orthogonal matrix would have the decomposition UIV⁎ where I is the identity matrix, so that if A = UΣV⁎ then the product O = UV⁎ amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrix R = UV⁎ of the polar decomposition A = RP = P′R in either order of stretch and rotation, as described above. A similar problem, with interesting applications in shape analysis, is the orthogonal Procrustes problem, which consists of finding an orthogonal matrix O which most closely maps A to B. Specifically, $\mathbf {O} ={\underset {\Omega }{\operatorname {argmin} }}\|\mathbf {A} {\boldsymbol {\Omega }}-\mathbf {B} \|_{F}\quad {\text{subject to}}\quad {\boldsymbol {\Omega }}^{\textsf {T}}{\boldsymbol {\Omega }}=\mathbf {I} ,$ where $\|\cdot \|_{F}$ denotes the Frobenius norm. This problem is equivalent to finding the nearest orthogonal matrix to a given matrix $\mathbf {M} =\mathbf {A} ^{\textsf {T}}\mathbf {B} $. The Kabsch algorithm The Kabsch algorithm (called Wahba's problem in other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules. 
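Several of the applications above reduce to a few lines once the SVD is available. A hedged NumPy sketch (the test matrices, the random seed, and the truncation rank r are arbitrary illustrative choices): it forms the pseudoinverse from the SVD, a rank-r Eckart–Young approximation, the separable (rank-1) expansion, and the nearest orthogonal matrix UV⁎:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))                  # arbitrary real test matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# pseudoinverse: reciprocate the singular values (all non-zero here) and transpose
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
assert np.allclose(A_pinv, np.linalg.pinv(A))

# best rank-r approximation in the Frobenius norm (Eckart-Young)
r = 2
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# the same matrix as an ordered sum of separable (rank-1) terms
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(A_sum, A)

# nearest orthogonal matrix to a square matrix, in the Frobenius norm
B = rng.standard_normal((4, 4))
Ub, sb, Vbt = np.linalg.svd(B)
O = Ub @ Vbt
assert np.allclose(O.T @ O, np.eye(4))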
Signal processing The SVD and pseudoinverse have been successfully applied to signal processing,[4] image processing[5] and big data (e.g., in genomic signal processing).[6][7][8][9] Astrodynamics In astrodynamics, the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design[10] and orbital station-keeping.[11] Other examples The SVD is also applied extensively to the study of linear inverse problems and is useful in the analysis of regularization methods such as that of Tikhonov. It is widely used in statistics, where it is related to principal component analysis and to correspondence analysis, and in signal processing and pattern recognition. It is also used in output-only modal analysis, where the non-scaled mode shapes can be determined from the singular vectors. Yet another usage is latent semantic indexing in natural-language text processing. In general numerical computation involving linear or linearized systems, there is a universal constant that characterizes the regularity or singularity of a problem, which is the system's "condition number" $\kappa :=\sigma _{\text{max}}/\sigma _{\text{min}}$. It often controls the error rate or convergence rate of a given computational scheme on such systems.[12][13] The SVD also plays a crucial role in the field of quantum information, in a form often referred to as the Schmidt decomposition. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to be entangled: if the rank of the $\mathbf {\Sigma } $ matrix is larger than one. One application of SVD to rather large matrices is in numerical weather prediction, where Lanczos methods are used to estimate the few most rapidly growing linear perturbations to the central numerical weather prediction over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. These perturbations are then run through the full nonlinear model to generate an ensemble forecast, giving a handle on some of the uncertainty that should be allowed for around the current central prediction. SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled with radial basis functions to interpolate solutions to three-dimensional unsteady flow problems.[14] SVD has also been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO.[15] SVD can help to increase the accuracy and speed of waveform generation to support gravitational-wave searches and update two different waveform models. 
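The condition number defined above can be read off directly from the singular values; a small sketch (the 2 × 2 matrix is an arbitrary example, and np.linalg.cond computes the same 2-norm condition number by default):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
s = np.linalg.svd(A, compute_uv=False)   # singular values, largest first
kappa = s[0] / s[-1]                     # sigma_max / sigma_min
assert np.isclose(kappa, np.linalg.cond(A))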
Singular value decomposition is used in recommender systems to predict people's item ratings.[16] Distributed algorithms have been developed for the purpose of calculating the SVD on clusters of commodity machines.[17] Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to disease outbreak detection.[18] A combination of SVD and higher-order SVD also has been applied for real time event detection from complex data streams (multivariate data with space and time dimensions) in disease surveillance.[19] Proof of existence An eigenvalue λ of a matrix M is characterized by the algebraic relation Mu = λu. When M is Hermitian, a variational characterization is also available. Let M be a real n × n symmetric matrix. Define ${\begin{cases}f:\mathbb {R} ^{n}\to \mathbb {R} \\f:\mathbf {x} \mapsto \mathbf {x} ^{\textsf {T}}\mathbf {M} \mathbf {x} \end{cases}}$ By the extreme value theorem, this continuous function attains a maximum at some u when restricted to the unit sphere {||x|| = 1}. By the Lagrange multipliers theorem, u necessarily satisfies $\nabla \mathbf {u} ^{\textsf {T}}\mathbf {M} \mathbf {u} -\lambda \cdot \nabla \mathbf {u} ^{\textsf {T}}\mathbf {u} =0$ for some real number λ. The nabla symbol, ∇, is the del operator (differentiation with respect to x). Using the symmetry of M we obtain $\nabla \mathbf {x} ^{\textsf {T}}\mathbf {M} \mathbf {x} -\lambda \cdot \nabla \mathbf {x} ^{\textsf {T}}\mathbf {x} =2(\mathbf {M} -\lambda \mathbf {I} )\mathbf {x} .$ Therefore Mu = λu, so u is a unit length eigenvector of M. For every unit length eigenvector v of M its eigenvalue is f(v), so λ is the largest eigenvalue of M. The same calculation performed on the orthogonal complement of u gives the next largest eigenvalue and so on. The complex Hermitian case is similar; there f(x) = x* M x is a real-valued function of 2n real variables. Singular values are similar in that they can be described algebraically or from variational principles, although, unlike the eigenvalue case, Hermiticity or symmetry of M is no longer required. This section gives these two arguments for existence of singular value decomposition. Based on the spectral theorem Let $\mathbf {M} $ be an m × n complex matrix. Since $\mathbf {M} ^{*}\mathbf {M} $ is positive semi-definite and Hermitian, by the spectral theorem, there exists an n × n unitary matrix $\mathbf {V} $ such that $\mathbf {V} ^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} ={\bar {\mathbf {D} }}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}},$ where $\mathbf {D} $ is diagonal and positive definite, of dimension $\ell \times \ell $, with $\ell $ the number of non-zero eigenvalues of $\mathbf {M} ^{*}\mathbf {M} $ (which can be shown to satisfy $\ell \leq \min(n,m)$). Note that $\mathbf {V} $ is here by definition a matrix whose $i$-th column is the $i$-th eigenvector of $\mathbf {M} ^{*}\mathbf {M} $, corresponding to the eigenvalue ${\bar {\mathbf {D} }}_{ii}$. Moreover, the $j$-th column of $\mathbf {V} $, for $j>\ell $, is an eigenvector of $\mathbf {M} ^{*}\mathbf {M} $ with eigenvalue ${\bar {\mathbf {D} }}_{jj}=0$. This can be expressed by writing $\mathbf {V} $ as $\mathbf {V} ={\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}$, where the columns of $\mathbf {V} _{1}$ and $\mathbf {V} _{2}$ therefore contain the eigenvectors of $\mathbf {M} ^{*}\mathbf {M} $ corresponding to non-zero and zero eigenvalues, respectively. 
Using this rewriting of $\mathbf {V} $, the equation becomes: ${\begin{bmatrix}\mathbf {V} _{1}^{*}\\\mathbf {V} _{2}^{*}\end{bmatrix}}\mathbf {M} ^{*}\mathbf {M} {\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\\\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}}.$ This implies that $\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}=\mathbf {D} ,\quad \mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}=\mathbf {0} .$ Moreover, the second equation implies $\mathbf {M} \mathbf {V} _{2}=\mathbf {0} $.[20] Finally, the unitary-ness of $\mathbf {V} $ translates, in terms of $\mathbf {V} _{1}$ and $\mathbf {V} _{2}$, into the following conditions: ${\begin{aligned}\mathbf {V} _{1}^{*}\mathbf {V} _{1}&=\mathbf {I} _{1},\\\mathbf {V} _{2}^{*}\mathbf {V} _{2}&=\mathbf {I} _{2},\\\mathbf {V} _{1}\mathbf {V} _{1}^{*}+\mathbf {V} _{2}\mathbf {V} _{2}^{*}&=\mathbf {I} _{12},\end{aligned}}$ where the subscripts on the identity matrices are used to remark that they are of different dimensions. Let us now define $\mathbf {U} _{1}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}.$ Then, $\mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} (\mathbf {I} -\mathbf {V} _{2}\mathbf {V} _{2}^{*})=\mathbf {M} -(\mathbf {M} \mathbf {V} _{2})\mathbf {V} _{2}^{*}=\mathbf {M} ,$ since $\mathbf {M} \mathbf {V} _{2}=\mathbf {0} .$ This can be also seen as immediate consequence of the fact that $\mathbf {M} \mathbf {V} _{1}\mathbf {V} _{1}^{*}=\mathbf {M} $. This is equivalent to the observation that if $\{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }$ is the set of eigenvectors of $\mathbf {M} ^{*}\mathbf {M} $ corresponding to non-vanishing eigenvalues $\{\lambda _{i}\}_{i=1}^{\ell }$, then $\{\mathbf {M} {\boldsymbol {v}}_{i}\}_{i=1}^{\ell }$ is a set of orthogonal vectors, and $\{\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}\}_{i=1}^{\ell }$ is a (generally not complete) set of orthonormal vectors. This matches with the matrix formalism used above denoting with $\mathbf {V} _{1}$ the matrix whose columns are $\{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }$, with $\mathbf {V} _{2}$ the matrix whose columns are the eigenvectors of $\mathbf {M} ^{*}\mathbf {M} $ with vanishing eigenvalue, and $\mathbf {U} _{1}$ the matrix whose columns are the vectors $\{\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}\}_{i=1}^{\ell }$. We see that this is almost the desired result, except that $\mathbf {U} _{1}$ and $\mathbf {V} _{1}$ are in general not unitary, since they might not be square. However, we do know that the number of rows of $\mathbf {U} _{1}$ is no smaller than the number of columns, since the dimensions of $\mathbf {D} $ is no greater than $m$ and $n$. Also, since $\mathbf {U} _{1}^{*}\mathbf {U} _{1}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} \mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {I_{1}} ,$ the columns in $\mathbf {U} _{1}$ are orthonormal and can be extended to an orthonormal basis. 
This means that we can choose $\mathbf {U} _{2}$ such that $\mathbf {U} ={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}$ is unitary. For V1 we already have V2 to make it unitary. Now, define ${\boldsymbol {\Sigma }}={\begin{bmatrix}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}},$ where extra zero rows are added or removed to make the number of zero rows equal the number of columns of U2, and hence the overall dimensions of ${\boldsymbol {\Sigma }}$ equal to $m\times n$. Then ${\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}{\begin{bmatrix}\mathbf {} D^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}}{\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}^{*}={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}\\0\end{bmatrix}}=\mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} ,$ which is the desired result: $\mathbf {M} =\mathbf {U} {\boldsymbol {\Sigma }}\mathbf {V} ^{*}.$ Notice the argument could begin with diagonalizing MM⁎ rather than M⁎M (This shows directly that MM⁎ and M⁎M have the same non-zero eigenvalues). Based on variational characterization The singular values can also be characterized as the maxima of uTMv, considered as a function of u and v, over particular subspaces. The singular vectors are the values of u and v where these maxima are attained. Let M denote an m × n matrix with real entries. Let Sk−1 be the unit $(k-1)$-sphere in $\mathbb {R} ^{k}$, and define $\sigma (\mathbf {u} ,\mathbf {v} )=\mathbf {u} ^{\textsf {T}}\mathbf {M} \mathbf {v} ,\ \mathbf {u} \in S^{m-1},\mathbf {v} \in S^{n-1}.$ Consider the function σ restricted to Sm−1 × Sn−1. Since both Sm−1 and Sn−1 are compact sets, their product is also compact. Furthermore, since σ is continuous, it attains a largest value for at least one pair of vectors u ∈ Sm−1 and v ∈ Sn−1. This largest value is denoted σ1 and the corresponding vectors are denoted u1 and v1. Since σ1 is the largest value of σ(u, v) it must be non-negative. If it were negative, changing the sign of either u1 or v1 would make it positive and therefore larger. Statement. u1, v1 are left and right-singular vectors of M with corresponding singular value σ1. Proof. Similar to the eigenvalues case, by assumption the two vectors satisfy the Lagrange multiplier equation: $\nabla \sigma =\nabla \mathbf {u} ^{\textsf {T}}\mathbf {M} \mathbf {v} -\lambda _{1}\cdot \nabla \mathbf {u} ^{\textsf {T}}\mathbf {u} -\lambda _{2}\cdot \nabla \mathbf {v} ^{\textsf {T}}\mathbf {v} $ After some algebra, this becomes ${\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=2\lambda _{1}\mathbf {u} _{1}+0\\\mathbf {M} ^{\textsf {T}}\mathbf {u} _{1}&=0+2\lambda _{2}\mathbf {v} _{1}\end{aligned}}$ Multiplying the first equation from left by $\mathbf {u} _{1}^{\textsf {T}}$ and the second equation from left by $\mathbf {v} _{1}^{\textsf {T}}$ and taking ||u|| = ||v|| = 1 into account gives $\sigma _{1}=2\lambda _{1}=2\lambda _{2}.$ Plugging this into the pair of equations above, we have ${\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=\sigma _{1}\mathbf {u} _{1}\\\mathbf {M} ^{\textsf {T}}\mathbf {u} _{1}&=\sigma _{1}\mathbf {v} _{1}\end{aligned}}$ This proves the statement. More singular vectors and singular values can be found by maximizing σ(u, v) over normalized u, v which are orthogonal to u1 and v1, respectively. The passage from real to complex is similar to the eigenvalue case. 
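The construction in this proof can be traced numerically: diagonalize $\mathbf {M} ^{*}\mathbf {M} $, keep the eigenvectors with non-zero eigenvalues as $\mathbf {V} _{1}$, and set $\mathbf {U} _{1}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-1/2}$. A minimal NumPy sketch (the matrix is an arbitrary real full-column-rank example, so $\mathbf {V} _{2}$ is empty and no padding of U is needed):

import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3))              # full column rank with probability 1

evals, V = np.linalg.eigh(M.T @ M)           # eigendecomposition of M*M (ascending order)
evals, V = evals[::-1], V[:, ::-1]           # reorder so the largest eigenvalues come first
D = np.diag(evals)                           # here all eigenvalues are non-zero
U1 = M @ V @ np.diag(evals ** -0.5)          # U1 = M V1 D^(-1/2)

assert np.allclose(U1.T @ U1, np.eye(3))                   # columns of U1 are orthonormal
assert np.allclose(U1 @ np.sqrt(D) @ V.T, M)               # M = U1 D^(1/2) V1*
assert np.allclose(np.sqrt(evals), np.linalg.svd(M)[1])    # sqrt of eigenvalues = singular values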
Calculating the SVD The singular value decomposition can be computed using the following observations: • The left-singular vectors of M are a set of orthonormal eigenvectors of MM⁎. • The right-singular vectors of M are a set of orthonormal eigenvectors of M⁎M. • The non-zero singular values of M (found on the diagonal entries of $\mathbf {\Sigma } $) are the square roots of the non-zero eigenvalues of both M⁎M and MM⁎. Numerical approach The SVD of a matrix M is typically computed by a two-step procedure. In the first step, the matrix is reduced to a bidiagonal matrix. This takes O(mn2) floating-point operations (flop), assuming that m ≥ n. The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with an iterative method (as with eigenvalue algorithms). However, in practice it suffices to compute the SVD up to a certain precision, like the machine epsilon. If this precision is considered constant, then the second step takes O(n) iterations, each costing O(n) flops. Thus, the first step is more expensive, and the overall cost is O(mn2) flops (Trefethen & Bau III 1997, Lecture 31). The first step can be done using Householder reflections for a cost of 4mn2 − 4n3/3 flops, assuming that only the singular values are needed and not the singular vectors. If m is much larger than n then it is advantageous to first reduce the matrix M to a triangular matrix with the QR decomposition and then use Householder reflections to further reduce the matrix to bidiagonal form; the combined cost is 2mn2 + 2n3 flops (Trefethen & Bau III 1997, Lecture 31). The second step can be done by a variant of the QR algorithm for the computation of eigenvalues, which was first described by Golub & Kahan (1965). The LAPACK subroutine DBDSQR[21] implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990). Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD[22] routine for the computation of the singular value decomposition. The same algorithm is implemented in the GNU Scientific Library (GSL). The GSL also offers an alternative method that uses a one-sided Jacobi orthogonalization in step 2 (GSL Team 2007). This method computes the SVD of the bidiagonal matrix by solving a sequence of 2 × 2 SVD problems, similar to how the Jacobi eigenvalue algorithm solves a sequence of 2 × 2 eigenvalue methods (Golub & Van Loan 1996, §8.6.3). Yet another method for step 2 uses the idea of divide-and-conquer eigenvalue algorithms (Trefethen & Bau III 1997, Lecture 31). There is an alternative way that does not explicitly use the eigenvalue decomposition.[23] Usually the singular value problem of a matrix M is converted into an equivalent symmetric eigenvalue problem such as M M⁎, M⁎M, or ${\begin{bmatrix}\mathbf {O} &\mathbf {M} \\\mathbf {M} ^{*}&\mathbf {O} \end{bmatrix}}.$ The approaches that use eigenvalue decompositions are based on the QR algorithm, which is well-developed to be stable and fast. Note that the singular values are real and right- and left- singular vectors are not required to form similarity transformations. One can iteratively alternate between the QR decomposition and the LQ decomposition to find the real diagonal Hermitian matrices. The QR decomposition gives M ⇒ Q R and the LQ decomposition of R gives R ⇒ L P⁎. Thus, at every iteration, we have M ⇒ Q L P⁎, update M ⇐ L and repeat the orthogonalizations. 
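A toy sketch of this QR–LQ iteration in NumPy (illustration only: the test matrix is built with known singular values 4, 2, 1, 0.5, a fixed number of unshifted iterations is used with no convergence test, and the signs on the resulting diagonal are left unnormalized):

import numpy as np

rng = np.random.default_rng(2)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
M = Q1 @ np.diag([4.0, 2.0, 1.0, 0.5]) @ Q2.T    # known singular values

A = M.copy()
U_acc = np.eye(4)
V_acc = np.eye(4)
for _ in range(100):
    Q, R = np.linalg.qr(A)          # QR step: A = Q R
    Q3, R3 = np.linalg.qr(R.T)      # LQ step via QR of R.T: R = L P* with L = R3.T, P* = Q3.T
    L, Pt = R3.T, Q3.T
    U_acc = U_acc @ Q               # accumulate the left orthogonal factors
    V_acc = V_acc @ Pt.T            # accumulate the right orthogonal factors
    A = L                           # iterate on the lower-triangular factor

assert np.allclose(U_acc @ A @ V_acc.T, M)                              # M = U A V* is maintained
assert np.allclose(np.sort(np.abs(np.diag(A)))[::-1], [4, 2, 1, 0.5])   # diagonal -> singular values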
Eventually, this iteration between QR decomposition and LQ decomposition produces left- and right- unitary singular matrices. This approach cannot readily be accelerated, as the QR algorithm can with spectral shifts or deflation. This is because the shift method is not easily defined without using similarity transformations. However, this iterative approach is very simple to implement, so is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD. Analytic result of 2 × 2 SVD The singular values of a 2 × 2 matrix can be found analytically. Let the matrix be $\mathbf {M} =z_{0}\mathbf {I} +z_{1}\sigma _{1}+z_{2}\sigma _{2}+z_{3}\sigma _{3}$ where $z_{i}\in \mathbb {C} $ are complex numbers that parameterize the matrix, I is the identity matrix, and $\sigma _{i}$ denote the Pauli matrices. Then its two singular values are given by ${\begin{aligned}\sigma _{\pm }&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm {\sqrt {(|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2})^{2}-|z_{0}^{2}-z_{1}^{2}-z_{2}^{2}-z_{3}^{2}|^{2}}}}}\\&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm 2{\sqrt {(\operatorname {Re} z_{0}z_{1}^{*})^{2}+(\operatorname {Re} z_{0}z_{2}^{*})^{2}+(\operatorname {Re} z_{0}z_{3}^{*})^{2}+(\operatorname {Im} z_{1}z_{2}^{*})^{2}+(\operatorname {Im} z_{2}z_{3}^{*})^{2}+(\operatorname {Im} z_{3}z_{1}^{*})^{2}}}}}\end{aligned}}$ Reduced SVDs In applications it is quite unusual for the full SVD, including a full unitary decomposition of the null-space of the matrix, to be required. Instead, it is often sufficient (as well as faster, and more economical for storage) to compute a reduced version of the SVD. The following can be distinguished for an m×n matrix M of rank r: Thin SVD The thin, or economy-sized, SVD of a matrix M is given by[24] $\mathbf {M} =\mathbf {U} _{k}{\boldsymbol {\Sigma }}_{k}\mathbf {V} _{k}^{*},$ where $k=\operatorname {min} (m,n)$, the matrices Uk and Vk contain only the first k columns of U and V, and Σk contains only the first k singular values from Σ. The matrix Uk is thus m×k, Σk is k×k diagonal, and Vk* is k×n. The thin SVD uses significantly less space and computation time if k ≪ max(m, n). The first stage in its calculation will usually be a QR decomposition of M, which can make for a significantly quicker calculation in this case. Compact SVD $\mathbf {M} =\mathbf {U} _{r}{\boldsymbol {\Sigma }}_{r}\mathbf {V} _{r}^{*}$ Only the r column vectors of U and r row vectors of V* corresponding to the non-zero singular values Σr are calculated. The remaining vectors of U and V* are not calculated. This is quicker and more economical than the thin SVD if r ≪ min(m, n). The matrix Ur is thus m×r, Σr is r×r diagonal, and Vr* is r×n. Truncated SVD In many applications the number r of the non-zero singular values is large making even the Compact SVD impractical to compute. In such cases, the smallest singular values may need to be truncated to compute only t ≪ r non-zero singular values. The truncated SVD is no longer an exact decomposition of the original matrix M, but rather provides the optimal low-rank matrix approximation ${\tilde {\mathbf {M} }}$ by any matrix of a fixed rank t ${\tilde {\mathbf {M} }}=\mathbf {U} _{t}{\boldsymbol {\Sigma }}_{t}\mathbf {V} _{t}^{*}$, where matrix Ut is m×t, Σt is t×t diagonal, and Vt* is t×n. Only the t column vectors of U and t row vectors of V* corresponding to the t largest singular values Σt are calculated. 
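In NumPy terms, the thin, compact, and truncated variants amount to slicing a single thin SVD; a hedged sketch (the test matrix, its rank, and the truncation level t are arbitrary illustrative choices, and large-scale truncated solvers would avoid forming the full decomposition in the first place):

import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 6))   # 8 x 6 matrix of rank r = 3

# thin SVD: k = min(m, n) columns of U and V
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# compact SVD: keep only the r non-zero singular values; still exact
r = int(np.sum(s > 1e-10 * s[0]))
U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]
assert np.allclose(U_r @ np.diag(s_r) @ Vt_r, M)

# truncated SVD: keep t < r values; the best rank-t approximation, no longer exact
t = 2
M_t = U[:, :t] @ np.diag(s[:t]) @ Vt[:t, :]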
This can be much quicker and more economical than the compact SVD if t ≪ r, but requires a completely different toolset of numerical solvers. In applications that require an approximation to the Moore–Penrose inverse of the matrix M, the smallest singular values of M are of interest, which are more challenging to compute compared to the largest ones. Truncated SVD is employed in latent semantic indexing.[25] Norms Ky Fan norms The sum of the k largest singular values of M is a matrix norm, the Ky Fan k-norm of M.[26] The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as the operator norm of M as a linear operator with respect to the Euclidean norms of Km and Kn. In other words, the Ky Fan 1-norm is the operator norm induced by the standard ℓ2 Euclidean inner product. For this reason, it is also called the operator 2-norm. One can easily verify the relationship between the Ky Fan 1-norm and singular values. It is true in general, for a bounded operator M on (possibly infinite-dimensional) Hilbert spaces, that $\|\mathbf {M} \|=\|\mathbf {M} ^{*}\mathbf {M} \|^{\frac {1}{2}}.$ But, in the matrix case, $(\mathbf {M} ^{*}\mathbf {M} )^{1/2}$ is a normal matrix, so $\|\mathbf {M} ^{*}\mathbf {M} \|^{1/2}$ is the largest eigenvalue of $(\mathbf {M} ^{*}\mathbf {M} )^{1/2}$, i.e. the largest singular value of M. The last of the Ky Fan norms, the sum of all singular values, is the trace norm (also known as the 'nuclear norm'), defined by $\|\mathbf {M} \|=\operatorname {Tr} \left[(\mathbf {M} ^{*}\mathbf {M} )^{1/2}\right]$ (the eigenvalues of M⁎M are the squares of the singular values). Hilbert–Schmidt norm The singular values are related to another norm on the space of operators. Consider the Hilbert–Schmidt inner product on the n × n matrices, defined by $\langle \mathbf {M} ,\mathbf {N} \rangle =\operatorname {tr} \left(\mathbf {N} ^{*}\mathbf {M} \right).$ So the induced norm is $\|\mathbf {M} \|={\sqrt {\langle \mathbf {M} ,\mathbf {M} \rangle }}={\sqrt {\operatorname {tr} \left(\mathbf {M} ^{*}\mathbf {M} \right)}}.$ Since the trace is invariant under unitary equivalence, this shows $\|\mathbf {M} \|={\sqrt {\sum _{i}\sigma _{i}^{2}}}$ where σi are the singular values of M. This is called the Frobenius norm, Schatten 2-norm, or Hilbert–Schmidt norm of M. Direct calculation shows that the Frobenius norm of M = (mij) coincides with: ${\sqrt {\sum _{ij}|m_{ij}|^{2}}}.$ In addition, the Frobenius norm and the trace norm (the nuclear norm) are special cases of the Schatten norm. Variations and generalizations Scale-invariant SVD The singular values of a matrix A are uniquely defined and are invariant with respect to left and/or right unitary transformations of A. In other words, the singular values of UAV, for unitary U and V, are equal to the singular values of A. This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations. The Scale-Invariant SVD, or SI-SVD,[27] is analogous to the conventional SVD except that its uniquely-determined singular values are invariant with respect to diagonal transformations of A. In other words, the singular values of DAE, for invertible diagonal matrices D and E, are equal to the singular values of A. This is an important property for applications for which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed. Bounded operators on Hilbert spaces The factorization M = UΣV⁎ can be extended to a bounded operator M on a separable Hilbert space H. 
Namely, for any bounded operator M, there exist a partial isometry U, a unitary V, a measure space (X, μ), and a non-negative measurable f such that $\mathbf {M} =\mathbf {U} T_{f}\mathbf {V} ^{*}$ where $T_{f}$ is the multiplication by f on L2(X, μ). This can be shown by mimicking the linear algebraic argument for the matricial case above. VTfV* is the unique positive square root of M*M, as given by the Borel functional calculus for self-adjoint operators. The reason why U need not be unitary is because, unlike the finite-dimensional case, given an isometry U1 with nontrivial kernel, a suitable U2 may not be found such that ${\begin{bmatrix}U_{1}\\U_{2}\end{bmatrix}}$ is a unitary operator. As for matrices, the singular value factorization is equivalent to the polar decomposition for operators: we can simply write $\mathbf {M} =\mathbf {U} \mathbf {V} ^{*}\cdot \mathbf {V} T_{f}\mathbf {V} ^{*}$ and notice that U V* is still a partial isometry while VTfV* is positive. Singular values and compact operators The notion of singular values and left/right-singular vectors can be extended to compact operator on Hilbert space as they have a discrete spectrum. If T is compact, every non-zero λ in its spectrum is an eigenvalue. Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. If M is compact, so is M⁎M. Applying the diagonalization result, the unitary image of its positive square root Tf  has a set of orthonormal eigenvectors {ei} corresponding to strictly positive eigenvalues {σi}. For any ψ ∈ H, $\mathbf {M} \psi =\mathbf {U} T_{f}\mathbf {V} ^{*}\psi =\sum _{i}\left\langle \mathbf {U} T_{f}\mathbf {V} ^{*}\psi ,\mathbf {U} e_{i}\right\rangle \mathbf {U} e_{i}=\sum _{i}\sigma _{i}\left\langle \psi ,\mathbf {V} e_{i}\right\rangle \mathbf {U} e_{i},$ where the series converges in the norm topology on H. Notice how this resembles the expression from the finite-dimensional case. σi are called the singular values of M. {Uei} (resp. {Vei}) can be considered the left-singular (resp. right-singular) vectors of M. Compact operators on a Hilbert space are the closure of finite-rank operators in the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is: Theorem. M is compact if and only if M⁎M is compact. History The singular value decomposition was originally developed by differential geometers, who wished to determine whether a real bilinear form could be made equal to another by independent orthogonal transformations of the two spaces it acts on. Eugenio Beltrami and Camille Jordan discovered independently, in 1873 and 1874 respectively, that the singular values of the bilinear forms, represented as a matrix, form a complete set of invariants for bilinear forms under orthogonal substitutions. James Joseph Sylvester also arrived at the singular value decomposition for real square matrices in 1889, apparently independently of both Beltrami and Jordan. Sylvester called the singular values the canonical multipliers of the matrix A. The fourth mathematician to discover the singular value decomposition independently is Autonne in 1915, who arrived at it via the polar decomposition. The first proof of the singular value decomposition for rectangular and complex matrices seems to be by Carl Eckart and Gale J. Young in 1936;[28] they saw it as a generalization of the principal axis transformation for Hermitian matrices. 
In 1907, Erhard Schmidt defined an analog of singular values for integral operators (which are compact, under some weak technical assumptions); it seems he was unaware of the parallel work on singular values of finite matrices. This theory was further developed by Émile Picard in 1910, who is the first to call the numbers $\sigma _{k}$ singular values (or in French, valeurs singulières). Practical methods for computing the SVD date back to Kogbetliantz in 1954–1955 and Hestenes in 1958,[29] resembling closely the Jacobi eigenvalue algorithm, which uses plane rotations or Givens rotations. However, these were replaced by the method of Gene Golub and William Kahan published in 1965,[30] which uses Householder transformations or reflections. In 1970, Golub and Christian Reinsch[31] published a variant of the Golub/Kahan algorithm that is still the one most-used today. See also • Canonical correlation • Canonical form • Correspondence analysis (CA) • Curse of dimensionality • Digital signal processing • Dimensionality reduction • Eigendecomposition of a matrix • Empirical orthogonal functions (EOFs) • Fourier analysis • Generalized singular value decomposition • Inequalities about singular values • K-SVD • Latent semantic analysis • Latent semantic indexing • Linear least squares • List of Fourier-related transforms • Locality-sensitive hashing • Low-rank approximation • Matrix decomposition • Multilinear principal component analysis (MPCA) • Nearest neighbor search • Non-linear iterative partial least squares • Polar decomposition • Principal component analysis (PCA) • Schmidt decomposition • Smith normal form • Singular value • Time series • Two-dimensional singular-value decomposition (2DSVD) • von Neumann's trace inequality • Wavelet compression Notes 1. DeAngelis, G. C.; Ohzawa, I.; Freeman, R. D. (October 1995). "Receptive-field dynamics in the central visual pathways". Trends Neurosci. 18 (10): 451–8. doi:10.1016/0166-2236(95)94496-R. PMID 8545912. S2CID 12827601. 2. Depireux, D. A.; Simon, J. Z.; Klein, D. J.; Shamma, S. A. (March 2001). "Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex". J. Neurophysiol. 85 (3): 1220–34. doi:10.1152/jn.2001.85.3.1220. PMID 11247991. 3. The Singular Value Decomposition in Symmetric (Lowdin) Orthogonalization and Data Compression 4. Sahidullah, Md.; Kinnunen, Tomi (March 2016). "Local spectral variability features for speaker verification". Digital Signal Processing. 50: 1–11. doi:10.1016/j.dsp.2015.10.011. 5. Mademlis, Ioannis; Tefas, Anastasios; Pitas, Ioannis (2018). Regularized SVD-based video frame saliency for unsupervised activity video summarization. pp. 2691–2695. doi:10.1109/ICASSP.2018.8462274. ISBN 978-1-5386-4658-8. S2CID 52286352. Retrieved 19 January 2023. {{cite book}}: |website= ignored (help) 6. O. Alter, P. O. Brown and D. Botstein (September 2000). "Singular Value Decomposition for Genome-Wide Expression Data Processing and Modeling". PNAS. 97 (18): 10101–10106. Bibcode:2000PNAS...9710101A. doi:10.1073/pnas.97.18.10101. PMC 27718. PMID 10963673. 7. O. Alter; G. H. Golub (November 2004). "Integrative Analysis of Genome-Scale Data by Using Pseudoinverse Projection Predicts Novel Correlation Between DNA Replication and RNA Transcription". PNAS. 101 (47): 16577–16582. Bibcode:2004PNAS..10116577A. doi:10.1073/pnas.0406767101. PMC 534520. PMID 15545604. 8. O. Alter; G. H. Golub (August 2006). 
"Singular Value Decomposition of Genome-Scale mRNA Lengths Distribution Reveals Asymmetry in RNA Gel Electrophoresis Band Broadening". PNAS. 103 (32): 11828–11833. Bibcode:2006PNAS..10311828A. doi:10.1073/pnas.0604756103. PMC 1524674. PMID 16877539. 9. Bertagnolli, N. M.; Drake, J. A.; Tennessen, J. M.; Alter, O. (November 2013). "SVD Identifies Transcript Length Distribution Functions from DNA Microarray Data and Reveals Evolutionary Forces Globally Affecting GBM Metabolism". PLOS ONE. 8 (11): e78913. Bibcode:2013PLoSO...878913B. doi:10.1371/journal.pone.0078913. PMC 3839928. PMID 24282503. Highlight. 10. Muralidharan, Vivek; Howell, Kathleen (2023). "Stretching directions in cislunar space: Applications for departures and transfer design". Astrodynamics. 7 (2): 153–178. Bibcode:2023AsDyn...7..153M. doi:10.1007/s42064-022-0147-z. S2CID 252637213. 11. Muralidharan, Vivek; Howell, Kathleen (2022). "Leveraging stretching directions for stationkeeping in Earth-Moon halo orbits". Advances in Space Research. 69 (1): 620–646. Bibcode:2022AdSpR..69..620M. doi:10.1016/j.asr.2021.10.028. S2CID 239490016. 12. Edelman, Alan (1992). "On the distribution of a scaled condition number" (PDF). Math. Comp. 58 (197): 185–190. Bibcode:1992MaCom..58..185E. doi:10.1090/S0025-5718-1992-1106966-2. 13. Shen, Jianhong (Jackie) (2001). "On the singular values of Gaussian random matrices". Linear Alg. Appl. 326 (1–3): 1–14. doi:10.1016/S0024-3795(00)00322-0. 14. Walton, S.; Hassan, O.; Morgan, K. (2013). "Reduced order modelling for unsteady fluid flow using proper orthogonal decomposition and radial basis functions". Applied Mathematical Modelling. 37 (20–21): 8930–8945. doi:10.1016/j.apm.2013.04.025. 15. Setyawati, Y.; Ohme, F.; Khan, S. (2019). "Enhancing gravitational waveform model through dynamic calibration". Physical Review D. 99 (2): 024010. arXiv:1810.07060. Bibcode:2019PhRvD..99b4010S. doi:10.1103/PhysRevD.99.024010. S2CID 118935941. 16. Sarwar, Badrul; Karypis, George; Konstan, Joseph A. & Riedl, John T. (2000). "Application of Dimensionality Reduction in Recommender System – A Case Study" (PDF). University of Minnesota. {{cite journal}}: Cite journal requires |journal= (help) 17. Bosagh Zadeh, Reza; Carlsson, Gunnar (2013). "Dimension Independent Matrix Square Using MapReduce" (PDF). arXiv:1304.1467. Bibcode:2013arXiv1304.1467B. {{cite journal}}: Cite journal requires |journal= (help) 18. Hadi Fanaee Tork; João Gama (September 2014). "Eigenspace method for spatiotemporal hotspot detection". Expert Systems. 32 (3): 454–464. arXiv:1406.3506. Bibcode:2014arXiv1406.3506F. doi:10.1111/exsy.12088. S2CID 15476557. 19. Hadi Fanaee Tork; João Gama (May 2015). "EigenEvent: An Algorithm for Event Detection from Complex Data Streams in Syndromic Surveillance". Intelligent Data Analysis. 19 (3): 597–616. arXiv:1406.3496. doi:10.3233/IDA-150734. S2CID 17966555. 20. To see this, we just have to notice that $\operatorname {Tr} (\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2})=\|\mathbf {M} \mathbf {V} _{2}\|^{2}$, and remember that $\|A\|=0\Leftrightarrow A=0$. 21. Netlib.org 22. Netlib.org 23. mathworks.co.kr/matlabcentral/fileexchange/12674-simple-svd 24. Demmel, James (2000). "Decompositions". Templates for the Solution of Algebraic Eigenvalue Problems. By Bai, Zhaojun; Demmel, James; Dongarra, Jack J.; Ruhe, Axel; van der Vorst, Henk A. Society for Industrial and Applied Mathematics. doi:10.1137/1.9780898719581. ISBN 978-0-89871-471-5. 25. Chicco, D; Masseroli, M (2015). 
"Software suite for gene and protein annotation prediction and similarity search". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 12 (4): 837–843. doi:10.1109/TCBB.2014.2382127. hdl:11311/959408. PMID 26357324. S2CID 14714823. 26. Fan, Ky. (1951). "Maximum properties and inequalities for the eigenvalues of completely continuous operators". Proceedings of the National Academy of Sciences of the United States of America. 37 (11): 760–766. Bibcode:1951PNAS...37..760F. doi:10.1073/pnas.37.11.760. PMC 1063464. PMID 16578416. 27. Uhlmann, Jeffrey (2018), A Generalized Matrix Inverse that is Consistent with Respect to Diagonal Transformations (PDF), SIAM Journal on Matrix Analysis, vol. 239, pp. 781–800 28. Eckart, C.; Young, G. (1936). "The approximation of one matrix by another of lower rank". Psychometrika. 1 (3): 211–8. doi:10.1007/BF02288367. S2CID 10163399. 29. Hestenes, M. R. (1958). "Inversion of Matrices by Biorthogonalization and Related Results". Journal of the Society for Industrial and Applied Mathematics. 6 (1): 51–90. doi:10.1137/0106005. JSTOR 2098862. MR 0092215. 30. (Golub & Kahan 1965) 31. Golub, G. H.; Reinsch, C. (1970). "Singular value decomposition and least squares solutions". Numerische Mathematik. 14 (5): 403–420. doi:10.1007/BF02163027. MR 1553974. S2CID 123532178. References • Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388 • Chicco, D; Masseroli, M (2015). "Software suite for gene and protein annotation prediction and similarity search". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 12 (4): 837–843. doi:10.1109/TCBB.2014.2382127. hdl:11311/959408. PMID 26357324. S2CID 14714823. • Trefethen, Lloyd N.; Bau III, David (1997). Numerical linear algebra. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-361-9. • Demmel, James; Kahan, William (1990). "Accurate singular values of bidiagonal matrices". SIAM Journal on Scientific and Statistical Computing. 11 (5): 873–912. CiteSeerX 10.1.1.48.3740. doi:10.1137/0911052. • Golub, Gene H.; Kahan, William (1965). "Calculating the singular values and pseudo-inverse of a matrix". Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis. 2 (2): 205–224. Bibcode:1965SJNA....2..205G. doi:10.1137/0702016. JSTOR 2949777. • Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Johns Hopkins. ISBN 978-0-8018-5414-9. • GSL Team (2007). "§14.4 Singular Value Decomposition". GNU Scientific Library. Reference Manual. • Halldor, Bjornsson and Venegas, Silvia A. (1997). "A manual for EOF and SVD analyses of climate data". McGill University, CCGCR Report No. 97-1, Montréal, Québec, 52pp. • Hansen, P. C. (1987). "The truncated SVD as a method for regularization". BIT. 27 (4): 534–553. doi:10.1007/BF01937276. S2CID 37591557. • Horn, Roger A.; Johnson, Charles R. (1985). "Section 7.3". Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6. • Horn, Roger A.; Johnson, Charles R. (1991). "Chapter 3". Topics in Matrix Analysis. Cambridge University Press. ISBN 978-0-521-46713-1. • Samet, H. (2006). Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann. ISBN 978-0-12-369446-1. • Strang G. (1998). "Section 6.7". Introduction to Linear Algebra (3rd ed.). Wellesley-Cambridge Press. ISBN 978-0-9614088-5-5. • Stewart, G. W. (1993). 
"On the Early History of the Singular Value Decomposition". SIAM Review. 35 (4): 551–566. CiteSeerX 10.1.1.23.1831. doi:10.1137/1035134. hdl:1903/566. JSTOR 2132388. • Wall, Michael E.; Rechtsteiner, Andreas; Rocha, Luis M. (2003). "Singular value decomposition and principal component analysis". In D.P. Berrar; W. Dubitzky; M. Granzow (eds.). A Practical Approach to Microarray Data Analysis. Norwell, MA: Kluwer. pp. 91–109. • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 2.6", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8 External links • Online SVD calculator Numerical linear algebra Key concepts • Floating point • Numerical stability Problems • System of linear equations • Matrix decompositions • Matrix multiplication (algorithms) • Matrix splitting • Sparse problems Hardware • CPU cache • TLB • Cache-oblivious algorithm • SIMD • Multiprocessing Software • MATLAB • Basic Linear Algebra Subprograms (BLAS) • LAPACK • Specialized libraries • General purpose software Functional analysis (topics – glossary) Spaces • Banach • Besov • Fréchet • Hilbert • Hölder • Nuclear • Orlicz • Schwartz • Sobolev • Topological vector Properties • Barrelled • Complete • Dual (Algebraic/Topological) • Locally convex • Reflexive • Reparable Theorems • Hahn–Banach • Riesz representation • Closed graph • Uniform boundedness principle • Kakutani fixed-point • Krein–Milman • Min–max • Gelfand–Naimark • Banach–Alaoglu Operators • Adjoint • Bounded • Compact • Hilbert–Schmidt • Normal • Nuclear • Trace class • Transpose • Unbounded • Unitary Algebras • Banach algebra • C*-algebra • Spectrum of a C*-algebra • Operator algebra • Group algebra of a locally compact group • Von Neumann algebra Open problems • Invariant subspace problem • Mahler's conjecture Applications • Hardy space • Spectral theory of ordinary differential equations • Heat kernel • Index theorem • Calculus of variations • Functional calculus • Integral operator • Jones polynomial • Topological quantum field theory • Noncommutative geometry • Riemann hypothesis • Distribution (or Generalized functions) Advanced topics • Approximation property • Balanced set • Choquet theory • Weak topology • Banach–Mazur distance • Tomita–Takesaki theory •  Mathematics portal • Category • Commons Spectral theory and *-algebras Basic concepts • Involution/*-algebra • Banach algebra • B*-algebra • C*-algebra • Noncommutative topology • Projection-valued measure • Spectrum • Spectrum of a C*-algebra • Spectral radius • Operator space Main results • Gelfand–Mazur theorem • Gelfand–Naimark theorem • Gelfand representation • Polar decomposition • Singular value decomposition • Spectral theorem • Spectral theory of normal C*-algebras Special Elements/Operators • Isospectral • Normal operator • Hermitian/Self-adjoint operator • Unitary operator • Unit Spectrum • Krein–Rutman theorem • Normal eigenvalue • Spectrum of a C*-algebra • Spectral radius • Spectral asymmetry • Spectral gap Decomposition • Decomposition of a spectrum • Continuous • Point • Residual • Approximate point • Compression • Direct integral • Discrete • Spectral abscissa Spectral Theorem • Borel functional calculus • Min-max theorem • Positive operator-valued measure • Projection-valued measure • Riesz projector • Rigged Hilbert space • Spectral theorem • Spectral theory of compact operators • Spectral theory of normal C*-algebras Special algebras • Amenable Banach algebra • With an Approximate identity • 
Banach function algebra • Disk algebra • Nuclear C*-algebra • Uniform algebra • Von Neumann algebra • Tomita–Takesaki theory Finite-Dimensional • Alon–Boppana bound • Bauer–Fike theorem • Numerical range • Schur–Horn theorem Generalizations • Dirac spectrum • Essential spectrum • Pseudospectrum • Structure space (Shilov boundary) Miscellaneous • Abstract index group • Banach algebra cohomology • Cohen–Hewitt factorization theorem • Extensions of symmetric operators • Fredholm theory • Limiting absorption principle • Schröder–Bernstein theorems for operator algebras • Sherman–Takeda theorem • Unbounded operator Examples • Wiener algebra Applications • Almost Mathieu operator • Corona theorem • Hearing the shape of a drum (Dirichlet eigenvalue) • Heat kernel • Kuznetsov trace formula • Lax pair • Proto-value function • Ramanujan graph • Rayleigh–Faber–Krahn inequality • Spectral geometry • Spectral method • Spectral theory of ordinary differential equations • Sturm–Liouville theory • Superstrong approximation • Transfer operator • Transform theory • Weyl law • Wiener–Khinchin theorem
S. L. Woronowicz Stanisław Lech Woronowicz (born 22 July 1941, Ukmergė, Lithuania) is a Polish mathematician and physicist. He is affiliated with the University of Warsaw and is a member of the Polish Academy of Sciences. Research Woronowicz and Erling Størmer classified positive maps in low-dimensional cases,[1] which translate to the Peres-Horodecki criterion in the context of quantum information theory. He is also known for contributions to quantum groups. Awards Woronowicz was an invited speaker at International Congress of Mathematicians in Warsaw in 1983 and in Kyoto in 1990.[2] He was awarded Humboldt Research Prize in 2008,[3] and Stefan Banach Medal of the Polish Academy of Sciences in 2009.[4] References 1. Woronowicz, S.L. (October 1976). "Positive maps of low dimensional matrix algebras". Reports on Mathematical Physics. 10 (2): 165–183. doi:10.1016/0034-4877(76)90038-0. 2. ICM Plenary and Invited Speakers 3. "Prof. Dr. Stanislaw L. Woronowicz". Alexander von Humboldt Foundation. Retrieved 2021-02-16. 4. "Stefan Banach Medal". Polish Academy of Sciences. Retrieved 2021-02-16. External links • Homepage • Profile on nLab Authority control International • ISNI • VIAF National • Norway • Germany • United States • Poland Academics • MathSciNet • Mathematics Genealogy Project • Scopus • zbMATH Other • IdRef
S and L spaces In mathematics, an S-space is a regular topological space that is hereditarily separable but is not a Lindelöf space. An L-space is a regular topological space that is hereditarily Lindelöf but not separable. A space is separable if it has a countable dense set and hereditarily separable if every subspace is separable. It had long been believed that the S-space problem and the L-space problem are dual, i.e. that if there is an S-space in some model of set theory then there is an L-space in the same model, and vice versa – which turned out not to be true. It was shown in the early 1980s that the existence of an S-space is independent of the usual axioms of ZFC. This means that to prove the existence of an S-space or to prove the non-existence of an S-space, one needs to assume axioms beyond those of ZFC. The L-space problem (whether an L-space can exist without assuming additional set-theoretic assumptions beyond those of ZFC) was not resolved until 2005. Todorcevic proved that under PFA there are no S-spaces. This means that, under PFA, every regular $T_{1}$ hereditarily separable space is Lindelöf. For some time, it was believed the L-space problem would have a similar solution (that its existence would be independent of ZFC). Todorcevic showed that there is a model of set theory with Martin's axiom where there is an L-space but there are no S-spaces. Further, Todorcevic found a compact S-space from a Cohen real. In 2005, Moore solved the L-space problem by constructing an L-space without assuming additional axioms and by combining Todorcevic's rho functions with number theory. Sources • K. P. Hart, Juniti Nagata, J.E. Vaughan: Encyclopedia of General Topology, Elsevier, 2003 ISBN 0080530869, ISBN 9780080530864 • Stevo Todorcevic: "Partition problems in topology" (Chapter 2, 5, 6, and 9), Contemporary Mathematics, 1989: Volume 84 ISBN 978-0-8218-5091-6, ISBN 978-0-8218-7672-5 • Justin Tatch Moore: "A Solution to the L Space Problem", Journal of the American Mathematical Society, Volume 19, pages 717–736, 2006
Gert Sabidussi Gert Sabidussi (born 28 October 1929 in Graz; died 1 April 2022) was an Austrian mathematician specializing in combinatorics and graph theory. Biography Sabidussi was born in Graz, Austria. His family later moved to Innsbruck where his father was a Protestant deacon. He graduated from the University of Vienna, where he attended lectures by Felix Ehrenhaft, Nikolaus Hofreiter, Johann Radon and Hans Thirring. In 1953, he defended his doctorate on 0-1 matrices under the supervision of Edmund Hlawka and received a two-year fellowship at Princeton University. He was then an instructor at the University of Minnesota in Minneapolis, but because of the heavy teaching load moved a year later, in 1956, to Tulane University in New Orleans. He moved to Montreal in 1963, and was instrumental in bringing to Canada a number of combinatorialists and graph theorists, including Anton Kotzig and Jaroslav Nešetřil, who wrote a thesis under Sabidussi. He first worked at McMaster University and then at the University of Montreal. Over the years, he had 13 graduate students. His 60th, 70th and 80th birthdays were celebrated with large Graph Theory birthday conferences. Mathematical work Sabidussi wrote foundational work on Cayley graphs, graph products and Frucht's theorem. References • Sabidussi's Biography (in German) External links • Gert Sabidussi Web Page at Université de Montréal. • Gert Sabidussi at the Mathematics Genealogy Project • Algebraic Graph Theory 2009, a Conference in celebration of Gert Sabidussi's 80th birthday. Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • Belgium • United States • Netherlands Academics • DBLP • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie Other • IdRef
Sabir Gusein-Zade Sabir Medgidovich Gusein-Zade (Russian: Сабир Меджидович Гусейн-Заде; born 29 July 1950 in Moscow[1]) is a Russian mathematician and a specialist in singularity theory and its applications.[2] He studied at Moscow State University, where he earned his Ph.D. in 1975 under the joint supervision of Sergei Novikov and Vladimir Arnold.[3] Before entering the university, he had earned a gold medal at the International Mathematical Olympiad.[2] Gusein-Zade co-authored with V. I. Arnold and A. N. Varchenko the textbook Singularities of Differentiable Maps (published in English by Birkhäuser).[2] A professor in both the Moscow State University and the Independent University of Moscow, Gusein-Zade also serves as co-editor-in-chief for the Moscow Mathematical Journal.[4] He shares credit with Norbert A'Campo for results on the singularities of plane curves.[5][6][7] Selected publications • S. M. Gusein-Zade. "Dynkin diagrams for singularities of functions of two variables". Functional Analysis and Its Applications, 1974, Volume 8, Issue 4, pp. 295–300. • S. M. Gusein-Zade. "Intersection matrices for certain singularities of functions of two variables". Functional Analysis and Its Applications, 1974, Volume 8, Issue 1, pp. 10–13. • A. Campillo, F. Delgado, and S. M. Gusein-Zade. "The Alexander polynomial of a plane curve singularity via the ring of functions on it". Duke Mathematical Journal, 2003, Volume 117, Number 1, pp. 125–156. • S. M. Gusein-Zade. "The problem of choice and the optimal stopping rule for a sequence of independent trials". Theory of Probability & Its Applications, 1965, Volume 11, Number 3, pp. 472–476. • S. M. Gusein-Zade. "A new technique for constructing continuous cartograms". Cartography and Geographic Information Systems, 1993, Volume 20, Issue 3, pp. 167–173. References 1. Home page of Sabir Gusein-Zade 2. Artemov, S. B.; Belavin, A. A.; Buchstaber, V. M.; Esterov, A. I.; Feigin, B. L.; Ginzburg, V. A.; Gorsky, E. A.; Ilyashenko, Yu. S.; Kirillov, A. A.; Khovanskii, A. G.; Lando, S. K.; Margulis, G. A.; Neretin, Yu. A.; Novikov, S. P.; Shlosman, S. B.; Sossinsky, A. B.; Tsfasman, M. A.; Varchenko, A. N.; Vassiliev, V. A.; Vlăduţ, S. G. (2010), "Sabir Medgidovich Gusein-Zade", Moscow Mathematical Journal, 10 (4). 3. Sabir Gusein-Zade at the Mathematics Genealogy Project 4. Editorial Board (2011), "Sabir Gusein-Zade – 60" (PDF), Anniversaries, TWMS Journal of Pure and Applied Mathematics, 2 (1): 161. 5. Wall, C. T. C. (2004), Singular Points of Plane Curves, London Mathematical Society Student Texts, vol. 63, Cambridge University Press, Cambridge, p. 152, doi:10.1017/CBO9780511617560, ISBN 978-0-521-83904-4, MR 2107253, An important result, due independently to A'Campo and Gusein-Zade, asserts that every plane curve singularity is equisingular to one defined over $\mathbb {R} $ and admitting a real morsification $f_{t}$ with only 3 critical values. 6. Brieskorn, Egbert; Knörrer, Horst (1986), Plane Algebraic Curves, Modern Birkhäuser Classics, Basel: Birkhäuser, p. vii, doi:10.1007/978-3-0348-5097-1, ISBN 978-3-0348-0492-9, MR 2975988, I would have liked to introduce the beautiful results of A'Campo and Gusein-Zade on the computation of the monodromy groups of plane curves. Translated from the German original by John Stillwell, 2012 reprint of the 1986 edition. 7. Rieger, J. H.; Ruas, M. A. S. 
(2005), "M-deformations of ${\mathcal {A}}$-simple $\Sigma ^{n-p+1}$-germs from $\mathbb {R} ^{n}$ to $\mathbb {R} ^{p},n\geq p$", Mathematical Proceedings of the Cambridge Philosophical Society, 139 (2): 333–349, doi:10.1017/S0305004105008625, MR 2168091, S2CID 94870364, For map-germs very little is known about the existence of M-deformations beyond the classical result by A'Campo and Gusein–Zade that plane curve-germs always have M-deformations. External links • Sabir Gusein-Zade's results at International Mathematical Olympiad Authority control International • ISNI • VIAF National • France • BnF data • Germany • Israel • United States • Netherlands Academics • CiNii • MathSciNet • Mathematics Genealogy Project • ORCID • Scopus • zbMATH Other • IdRef
Sachdev–Ye–Kitaev model In condensed matter physics and black hole physics, the Sachdev–Ye–Kitaev (SYK) model is an exactly solvable model initially proposed by Subir Sachdev and Jinwu Ye,[1] and later modified by Alexei Kitaev to the present commonly used form.[2][3] The model is believed to bring insights into the understanding of strongly correlated materials and it also has a close relation with the discrete model of AdS/CFT. Many condensed matter systems, such as quantum dot coupled to topological superconducting wires,[4] graphene flake with irregular boundary,[5] and kagome optical lattice with impurities,[6] are proposed to be modeled by it. Some variants of the model are amenable to digital quantum simulation,[7] with pioneering experiments implemented in a NMR setting.[8] Model Let $n$ be an integer and $m$ an even integer such that $2\leq m\leq n$, and consider a set of Majorana fermions $\psi _{1},\dotsc ,\psi _{n}$ which are fermion operators satisfying conditions: 1. Hermitian $\psi _{i}^{\dagger }=\psi _{i}$; 2. Clifford relation $\{\psi _{i},\psi _{j}\}=2\delta _{ij}$. Let $J_{i_{1}i_{2}\cdots i_{m}}$ be random variables whose expectations satisfy: 1. $\mathbf {E} (J_{i_{1}i_{2}\cdots i_{m}})=0$; 2. $\mathbf {E} (J_{i_{1}i_{2}\cdots i_{m}}^{2})=1$. Then the SYK model is defined as $H_{\rm {SYK}}=i^{m/2}\sum _{1\leq i_{1}<\cdots <i_{m}\leq n}J_{i_{1}i_{2}\cdots i_{m}}\psi _{i_{1}}\psi _{i_{2}}\cdots \psi _{i_{m}}$. Note that sometimes an extra normalization factor is included. The most famous model is when $m=4$: $H_{\rm {SYK}}=-{\frac {1}{4!}}\sum _{i_{1},\dotsc ,i_{4}=1}^{n}J_{i_{1}i_{2}i_{3}i_{4}}\psi _{i_{1}}\psi _{i_{2}}\psi _{i_{3}}\psi _{i_{4}}$, where the factor $1/4!$ is included to coincide with the most popular form. See also • Non-Fermi liquid References 1. Sachdev, Subir; Ye, Jinwu (1993-05-24). "Gapless spin-fluid ground state in a random quantum Heisenberg magnet". Physical Review Letters. 70 (21): 3339–3342. arXiv:cond-mat/9212030. Bibcode:1993PhRvL..70.3339S. doi:10.1103/PhysRevLett.70.3339. PMID 10053843. S2CID 1103248. 2. "Alexei Kitaev, Caltech & KITP, A simple model of quantum holography (part 1)". online.kitp.ucsb.edu. Retrieved 2019-11-02. 3. "Alexei Kitaev, Caltech, A simple model of quantum holography (part 2)". online.kitp.ucsb.edu. Retrieved 2019-11-02. 4. Chew, Aaron; Essin, Andrew; Alicea, Jason (2017-09-29). "Approximating the Sachdev-Ye-Kitaev model with Majorana wires". Phys. Rev. B. 96 (12): 121119. arXiv:1703.06890. Bibcode:2017PhRvB..96l1119C. doi:10.1103/PhysRevB.96.121119. S2CID 119222270. 5. Chen, Anffany; Ilan, R.; Juan, F.; Pikulin, D.I.; Franz, M. (2018-06-18). "Quantum Holography in a Graphene Flake with an Irregular Boundary". Phys. Rev. Lett. 121 (3): 036403. arXiv:1802.00802. Bibcode:2018PhRvL.121c6403C. doi:10.1103/PhysRevLett.121.036403. PMID 30085787. S2CID 51940526. 6. Wei, Chenan; Sedrakyan, Tigran (2021-01-29). "Optical lattice platform for the Sachdev-Ye-Kitaev model". Phys. Rev. A. 103 (1): 013323. arXiv:2005.07640. Bibcode:2021PhRvA.103a3323W. doi:10.1103/PhysRevA.103.013323. S2CID 234363891. 7. García-Álvarez, L.; Egusquiza, I.L.; Lamata, L.; del Campo, A.; Sonner, J.; Solano, E. (2017). "Digital Quantum Simulation of Minimal AdS/CFT". Physical Review Letters. 119 (4): 040501. arXiv:1607.08560. Bibcode:2017PhRvL.119d0501G. doi:10.1103/PhysRevLett.119.040501. PMID 29341740. S2CID 5144368. 8. Luo, Z.; You, Y.-Z.; Li, J.; Jian, C.-M.; Lu, D.; Xu, C.; Zeng, B.; Laflamme, R. (2019). 
"Quantum simulation of the non-fermi-liquid state of Sachdev-Ye-Kitaev model". npj Quantum Information. 5: 53. arXiv:1712.06458. Bibcode:2019npjQI...5...53L. doi:10.1038/s41534-019-0166-7. S2CID 195344916.
Sacks property In mathematical set theory, the Sacks property holds between two models of Zermelo–Fraenkel set theory if they are not "too dissimilar" in the following sense. For $M$ and $N$ transitive models of set theory, $N$ is said to have the Sacks property over $M$ if and only if for every function $g\in M$ mapping $\omega $ to $\omega \setminus \{0\}$ such that $g$ diverges to infinity, and every function $f\in N$ mapping $\omega $ to $\omega $ there is a tree $T\in M$ such that for every $n$ the $n^{th}$ level of $T$ has cardinality at most $g(n)$ and $f$ is a branch of $T$.[1] The Sacks property is used to control the value of certain cardinal invariants in forcing arguments. It is named for Gerald Enoch Sacks. A forcing notion is said to have the Sacks property if and only if the forcing extension has the Sacks property over the ground model. Examples include Sacks forcing and Silver forcing. Shelah proved that when proper forcings with the Sacks property are iterated using countable supports, the resulting forcing notion will have the Sacks property as well.[2][3] The Sacks property is equivalent to the conjunction of the Laver property and the ${}^{\omega }\omega $-bounding property. References 1. Shelah, Saharon (2001), "Consistently there is no non trivial ccc forcing notion with the Sacks or Laver property", Combinatorica, 21 (2): 309–319, arXiv:math/0003139, doi:10.1007/s004930100027, MR 1832454. 2. Shelah, Saharon (1998), Proper and improper forcing, Perspectives in Mathematical Logic (2nd ed.), Springer-Verlag, Berlin, doi:10.1007/978-3-662-12831-2, ISBN 3-540-51700-6, MR 1623206. 3. Schlindwein, Chaz (2014), "Understanding preservation theorems: chapter VI of Proper and improper forcing, I", Archive for Mathematical Logic, 53 (1–2): 171–202, arXiv:1305.5906, doi:10.1007/s00153-013-0361-8, MR 3151404
Sacred Mathematics Sacred Mathematics: Japanese Temple Geometry is a book on Sangaku, geometry problems presented on wooden tablets as temple offerings in the Edo period of Japan. It was written by Fukagawa Hidetoshi and Tony Rothman, and published in 2008 by the Princeton University Press. It won the PROSE Award of the Association of American Publishers in 2008 as the best book in mathematics for that year.[1] Topics The book begins with an introduction to Japanese culture and how this culture led to the production of Sangaku tablets, depicting geometry problems, their presentation as votive offerings at temples, and their display at the temples.[2][3] It also includes a chapter on the Chinese origins of Japanese mathematics, and a chapter on biographies of Japanese mathematicians from the time.[4] The Sangaku tablets illustrate theorems in Euclidean geometry, typically involving circles or ellipses, often with a brief textual explanation. They are presented as puzzles for the viewer to prove, and in many cases the proofs require advanced mathematics.[5] In some cases, booklets providing a solution were included separately,[6] but in many cases the original solution has been lost or was never provided.[7] The book's main content is the depiction, explanation, and solution of over 100 of these Sangaku puzzles, ranked by their difficulty,[2][3][7] selected from over 1800 catalogued Sangaku and over 800 surviving examples.[5] The solutions given use modern mathematical techniques where appropriate rather than attempting to model how the problems would originally have been solved.[4][8] Also included is a translation of the travel diary of Japanese mathematician Yamaguchi Kanzan (or Kazu), who visited many of the temples where these tablets were displayed and in doing so built a collection of problems from them.[2][3][4] The final three chapters provide a scholarly appraisal of precedence in mathematical discoveries between Japan and the west, and an explanation of the techniques that would have been available to Japanese problem-solvers of the time, in particular discussing how they would have solved problems that in western mathematics would have been solved using calculus or inversive geometry.[4] Audience and reception The book can be read by historians of mathematics, professional mathematicians, "people who are simply interested in geometry", and "anyone who likes mathematics", and the puzzles it presents also span a wide range of expertise.[6] Readers are not expected to already have a background in Japanese culture and history. The book is heavily illustrated, with many color photographs, also making it suitable as a mathematical coffee table book despite the depth of the mathematics it discusses.[4][7] Reviewer Paul J. Campbell calls this book "the most thorough account of Japanese temple geometry available",[2] reviewer Jean-Claude Martzloff calls it "exquisite, artfull, well-thought and particularly well-documented",[3] reviewer Frank J. Swetz calls it "a well crafted work that combines mathematics, history and cultural considerations into an intriguing narrative",[9] and reviewer Noel J.
Pinnington calls it "excellent and well-thought-out". However, Pinnington points out that it lacks the citations and bibliography that would be necessary in a work of serious historical scholarship.[4] Reviewer Peter Lu also criticizes the book's review of Japanese culture as superficial and romanticized, based on the oversimplification that the culture was born out of Japan's isolation and uninfluenced by the later mathematics of the west.[8] Related works This is the third English-language book on Japanese mathematics from Fukagawa; the first two were Japanese Temple Geometry Problems (with Daniel Pedoe, 1989) and Traditional Japanese Mathematics Problems from the 18th and 19th Centuries (with John Rigby, 2002).[5][9] Sacred Mathematics expands on a 1998 article on Sangaku by Fukagawa and Rothman in Scientific American.[5] References 1. "2008 Winners", PROSE Awards, Association of American Publishers, retrieved 2020-03-17 2. Campbell, Paul J. (October 2008), "Review of Sacred Mathematics", Mathematics Magazine, 81 (4): 310–311, doi:10.1080/0025570X.2008.11953570, JSTOR 27643131, S2CID 218543493 3. Martzloff, J.-C., "Review of Sacred Mathematics", zbMATH, Zbl 1153.01006 4. Pinnington, Noel J. (Spring 2009), "Review of Sacred Mathematics", Monumenta Nipponica, 64 (1): 174–177, JSTOR 40540301 5. Constant, Jean (February 2017), "Review of Sacred Mathematics", The Mathematical Intelligencer, 39 (4): 83–85, doi:10.1007/s00283-016-9704-8, S2CID 125699968 6. Corbett, Leslie P. (October 2009), "Review of Sacred Mathematics", The Mathematics Teacher, 103 (3): 230, JSTOR 20876591 7. Schaefer, Marvin (December 2008), "Review of Sacred Mathematics", MAA Reviews, Mathematical Association of America 8. Lu, Peter J. (August 2008), "The blossoming of Japanese mathematics", Nature, 454 (7208): 1050, Bibcode:2008Natur.454.1050L, doi:10.1038/4541050a 9. Swetz, Frank J. (September 2008), "Review of Sacred Mathematics", Convergence, Mathematical Association of America, doi:10.4169/loci002864
Sacred geometry Sacred geometry ascribes symbolic and sacred meanings to certain geometric shapes and certain geometric proportions.[1] It is associated with the belief of a divine creator of the universal geometer. The geometry used in the design and construction of religious structures such as churches, temples, mosques, religious monuments, altars, and tabernacles has sometimes been considered sacred. The concept applies also to sacred spaces such as temenoi, sacred groves, village greens, pagodas and holy wells, Mandala Gardens and the creation of religious and spiritual art. As worldview and cosmology Further information: Mathematics and art The belief that a god created the universe according to a geometric plan has ancient origins. Plutarch attributed the belief to Plato, writing that "Plato said god geometrizes continually" (Convivialium disputationum, liber 8,2). In modern times, the mathematician Carl Friedrich Gauss adapted this quote, saying "God arithmetizes".[2] Johannes Kepler (1571–1630) believed in the geometric underpinnings of the cosmos.[3] Harvard mathematician Shing-Tung Yau expressed a belief in the centrality of geometry in 2010: "Lest one conclude that geometry is little more than a well-calibrated ruler – and this is no knock against the ruler, which happens to be a technology I admire – geometry is one of the main avenues available to us for probing the universe. Physics and cosmology have been, almost by definition, absolutely crucial for making sense of the universe. Geometry's role in this may be less obvious, but is equally vital. I would go so far as to say that geometry not only deserves a place at the table alongside physics and cosmology, but in many ways it is the table."[4] Natural forms Further information: Patterns in nature According to Stephen Skinner, the study of sacred geometry has its roots in the study of nature, and the mathematical principles at work therein.[5] Many forms observed in nature can be related to geometry; for example, the chambered nautilus grows at a constant rate and so its shell forms a logarithmic spiral to accommodate that growth without changing shape. Also, honeybees construct hexagonal cells to hold their honey. These and other correspondences are sometimes interpreted in terms of sacred geometry and considered to be further proof of the natural significance of geometric forms. Representations in Art and architecture Further information: Mathematics and architecture, Mathematics and art, and Islamic geometric patterns Geometric ratios, and geometric figures were often employed in the designs of ancient Egyptian, ancient Indian, Greek and Roman architecture. Medieval European cathedrals also incorporated symbolic geometry. Indian and Himalayan spiritual communities often constructed temples and fortifications on design plans of mandala and yantra. Mandala Vaatikas or Sacred Gardens were designed using the same principles. Many of the sacred geometry principles of the human body and of ancient architecture were compiled into the Vitruvian Man drawing by Leonardo da Vinci. The latter drawing was itself based on the much older writings of the Roman architect Vitruvius. In Buddhism Mandalas are made up of a compilation of geometric shapes. In Buddhism, it is made up of concentric circles and squares that are equally placed from the center. 
Located within the geometric configurations are deities or suggestions of the deity, such as in the form of a symbol.[6] This is because Buddhists believe that deities can actually manifest inside the mandala.[7] Mandalas can be created with a variety of mediums. Tibetan Buddhists create mandalas out of sand that are then ritually destroyed. In order to create the mandala, two lines are first drawn on a predetermined grid.[6] The lines, known as Brahman lines, must overlap at the precisely calculated center of the grid. The mandala is then divided into thirteen equal parts not by a mathematical calculation, but through trial and error.[7] Next, monks purify the grid to prepare it for the constructing of the deities before sand is finally added. Tibetan Buddhists believe that anyone who looks at the mandala will receive positive energy and be blessed. Due to the Buddhist belief in impermanence, the mandala is eventually dismantled and is ritualistically released into the world.[7] In Chinese spiritual traditions One of the cornerstones of Chinese folk religion is the relationship between man and nature. This is epitomized in feng shui, which are architectural principles outlining the design plans of buildings in order to optimize the harmony of man and nature through the movement of Chi, or “life-generating energy.” [8] In order to maximize the flow of Chi throughout a building, its design plan must utilize specific shapes. Rectangles and squares are considered to be the best shapes to use in feng shui design. This is because other shapes may obstruct the flow of Chi from one room to the next due to what are considered to be unnatural angles.[8] Room layout is also an important element, as doors should be proportional to one another and located at appropriate positions throughout the house. Typically, doors are not situated across from one another because it may cause Chi to flow too fast from one room to the next.[8] The Forbidden City is an example of a building that uses sacred geometry through the principles of feng shui in its design plan. It is laid out in the shape of a rectangle that measures over half a mile long and about half a mile wide.[9] Furthermore, the Forbidden City constructed its most important buildings on a central axis. The Hall of Supreme Harmony, which was the Emperor’s throne room, is located at the midpoint or “epicenter” of the central axis. This was done intentionally, as it was meant to show that when the Emperor entered this room, he would be ceremonially transformed into the center of the universe.[9] In Islam The geometric designs in Islamic art are often built on combinations of repeated squares and circles, which may be overlapped and interlaced, as can arabesques (with which they are often combined), to form intricate and complex patterns, including a wide variety of tessellations. These may constitute the entire decoration, may form a framework for floral or calligraphic embellishments, or may retreat into the background around other motifs. The complexity and variety of patterns used evolved from simple stars and lozenges in the ninth century, through a variety of 6- to 13-point patterns by the 13th century, and finally to include also 14- and 16-point stars in the sixteenth century. Geometric patterns occur in a variety of forms in Islamic art and architecture including kilim carpets, Persian girih and Moroccan/Algerian zellige tilework, muqarnas decorative vaulting, jali pierced stone screens, ceramics, leather, stained glass, woodwork, and metalwork. 
Islamic geometric patterns are used in the Quran, Mosques and even in the calligraphies. In Hinduism The Agamas are a collection of Sanskrit,[10] Tamil, and Grantha[11] scriptures chiefly constituting the methods of temple construction and creation of idols, worship means of deities, philosophical doctrines, meditative practices, attainment of sixfold desires, and four kinds of yoga.[10] Elaborate rules are laid out in the Agamas for Shilpa (the art of sculpture) describing the quality requirements of such matters as the places where temples are to be built, the kinds of image to be installed, the materials from which they are to be made, their dimensions, proportions, air circulation, and lighting in the temple complex. The Manasara and Silpasara are works that deal with these rules. The rituals of daily worship at the temple also follow rules laid out in the Agamas. Hindu temples, the symbolic representation of cosmic model is then projected onto Hindu temples using the Vastu Shastra principle of Sukha Darshan, which states that smaller parts of the temple should be self-similar and a replica of the whole. The repetition of these replication parts symbolizes the natural phenomena of fractal patterns found in nature. These patterns make up the exterior of Hindu temples. Each element and detail are proportional to each other, this occurrence is also known as the sacred geometry.[12] In Christianity The construction of Medieval European cathedrals was often based on geometries intended to make the viewer see the world through mathematics, and through this understanding, gain a better understanding of the divine.[13] These churches frequently featured a Latin Cross floor-plan.[14] At the beginning of the Renaissance in Europe, views shifted to favor simple and regular geometries. The circle in particular became a central and symbolic shape for the base of buildings, as it represented the perfection of nature and the centrality of man's place in the universe.[14] The use of the circle and other simple and symmetrical geometric shapes was solidified as a staple of Renaissance sacred architecture in Leon Battista Alberti's architectural treatise, which described the ideal church in terms of spiritual geometry.[15] In the High Middle Ages, leading Christian philosophers explained the layout of the universe in terms of a microcosm analogy. In her book describing the divine visions she witnessed, Hildegard of Bingen explains that she saw an outstretched human figure located within a circular orb.[16] When interpreted by theologians, the human figure was Christ and mankind showing the Earthly realm and the circumference of the circle was a representation of the universe. Some images also show above the universe a depiction of God.[16] This is thought to later have inspired Da Vinci’s Vitruvian Man. Dante uses circles to make up the nine layers of hell categorized in his book, The Divine Comedy. “Celestial spheres” are also utilized to make up the nine layers of Paradise.[17] He further creates a cosmic order of circular forms that stretches from Jerusalem in the Earthly realm up to God in Heaven.[17] This cosmology is believed to have been inspired by the ancient astronomer Ptolemy.[17] Unanchored geometry Stephen Skinner criticizes the tendency of some writers to place a geometric diagram over virtually any image of a natural object or human created structure, find some lines intersecting the image and declare it based on sacred geometry. 
If the geometric diagram does not intersect major physical points in the image, the result is what Skinner calls "unanchored geometry".[18] See also • Circle dance • Golden Ratio • Harmony of the spheres • Lu Ban and Feng shui • Magic circle • Numerology • Shield of the Trinity • 108 (number) References 1. "Polygons, Tilings, & Sacred Geometry". Archived from the original on February 7, 2005. 2. Cathérine Goldstein, Norbert Schappacher, Joachim Schwermer, The shaping of arithmetic, p. 235. 3. Calter, Paul (1998). "Celestial Themes in Art & Architecture". Dartmouth College. Retrieved 5 September 2015. 4. Shing-Tung Yau and Steve Nadis, The Shape of Inner Space, (New York: Basic Books, 2010), 18. 5. Skinner, Stephen (2009). Sacred Geometry: Deciphering the Code. Sterling. ISBN 978-1-4027-6582-7. 6. Brauen, Martin; Rubin Museum of Art (2009). The mandala in Tibetan Buddhism from the book Mandala: Sacred circle in Tibetan Buddhism (Rev. and updated.). New York, N.Y.: Rubin Museum of Art. p. 11. 7. Sahney, Puja (2006). "In the midst of a monastery: Filming the making of a Buddhist sand mandala". Voices (New York Folklore Society). 32 (1–2): 23 – via Proquest. 8. Çeliker, Afet; Çavuşoğlu, Banu Tevfikler; Öngül, Zehra (2014). "Comparative study of courtyard housing using feng shui". Open House International. 39 (1): 41. 9. Walker, Veronica (2022). "The Forbidden City: Center of an imperial world". National Geographic. Vol. 8, no. 4. p. 60. 10. Grimes, John A. (1996). A Concise Dictionary of Indian Philosophy: Sanskrit Terms Defined in English. State University of New York Press. ISBN 9780791430682. LCCN 96012383. 11. Nagalingam, Pathmarajah (2009). The Religion of the Agamas. Siddhanta Publications. 12. "Sacred Geometry Of Hindu Temples". Indic Today. 2019-10-22. Retrieved 2021-04-14. 13. Petersen, Toni (2003), "A(rt and) A(rchitecture) T(hesaurus)", Oxford Art Online, Oxford University Press, doi:10.1093/gao/9781884446054.article.t000037 14. CUMMINGS, L.A. (1986), "A RECURRING GEOMETRICAL PATTERN IN THE EARLY RENAISSANCE IMAGINATION", Symmetry, Elsevier, pp. 981–997, doi:10.1016/b978-0-08-033986-3.50067-7, ISBN 9780080339863 15. Rudolf., Wittkower (1998). Architectural principles in the age of humanism. Academy Editions. ISBN 978-0471977636. OCLC 981109542. 16. Lester, Toby (2012). Da Vinci’s Ghost: Genius, Obsession, and How Leonardo Created the World in his Own Image. New York: Free Press. p. 50. 17. Pagano, Alessandra; Dalena, Matteo (2022). "Dante: 700 years of the Inferno". National Geographic. Vol. 8, no. 4. p. 40. 18. Skinner, Stephen (2006). Stephen Skinner, Sacred geometry: deciphering the code, p91. ISBN 9781402741296. Further reading • Bain, George. Celtic Art: The Methods of Construction. Dover, 1973. ISBN 0-486-22923-8. • Bromwell, Henry P. H. (2010). Townley, Kevin (ed.). Restorations of Masonic Geometry and Symbolry: Being a Dissertation on the Lost Knowledges of the Lodge. Lovers of the Craft. ISBN 978-0-9713441-5-0. Archived from the original on 2012-02-03. Retrieved Jan 7, 2012. • Bamford, Christopher, Homage to Pythagoras: Rediscovering Sacred Science, Lindisfarne Press, 1994, ISBN 0-940262-63-0 • Critchlow, Keith (1970). Order In Space: A Design Source Book. New York: Viking. • Critchlow, Keith (1976). Islamic Patterns: An Analytical and Cosmological Approach. Schocken Books. ISBN 978-0-8052-3627-9. • Iamblichus; Robin Waterfield; Keith Critchlow; Translated by Robin Waterfield (1988). 
The Theology of Arithmetic: On the Mystical, Mathematical and Cosmological Symbolism of the First Ten Numbers. Phanes Press. ISBN 978-0-933999-72-5. • Johnson, Anthony: Solving Stonehenge, the New Key to an Ancient Enigma. Thames & Hudson 2008 ISBN 978-0-500-05155-9 • Lesser, George (1957–64). Gothic cathedrals and sacred geometry. London: A. Tiranti. • Lawlor, Robert. Sacred Geometry: Philosophy and practice (Art and Imagination). Thames & Hudson, 1989 (1st edition 1979, 1980, or 1982). ISBN 0-500-81030-3. • Lippard, Lucy R. Overlay: Contemporary Art and the Art of Prehistory. Pantheon Books New York 1983 ISBN 0-394-51812-8 • Mann, A. T. Sacred Architecture, Element Books, 1993, ISBN 1-84333-355-4. • Michell, John. City of Revelation. Abacus, 1972. ISBN 0-349-12320-9. • Schneider, Michael S. A Beginner's Guide to Constructing the Universe: Mathematical Archetypes of Nature, Art, and Science. Harper, 1995. ISBN 0-06-092671-6 • Steiner, Rudolf; Creeger, Catherine (2001). The Fourth Dimension : Sacred Geometry, Alchemy, and Mathematics. Anthroposophic Press. ISBN 978-0-88010-472-2. • The Golden Mean, Parabola magazine, v.16, n.4 (1991) • West, John Anthony, Inaugural Lines: Sacred geometry at St. John the Divine, Parabola magazine, v.8, n.1, Spring 1983. External links Wikimedia Commons has media related to Sacred geometry. • Sacred geometry at Curlie Hidden messages Main • Subliminal message Audio • Backmasking • Hidden track • Phonetic reversal • Reverse speech Numeric • Chronogram • Numerology • Theomatics • Bible code • Cryptology Visual • Fnord • Hidden text • Paranoiac-critical method • Pareidolia • Psychorama • Sacred geometry • Steganography • Visual cryptography Other • Apophenia • Asemic writing • Clustering illusion • Cryptic crossword • Anagram • Easter egg • Observer-expectancy effect • Pattern recognition • Palindrome • Simulacrum • Synchronicity • Unconscious mind Mathematics and art Concepts • Algorithm • Catenary • Fractal • Golden ratio • Hyperboloid structure • Minimal surface • Paraboloid • Perspective • Camera lucida • Camera obscura • Plastic number • Projective geometry • Proportion • Architecture • Human • Symmetry • Tessellation • Wallpaper group Forms • Algorithmic art • Anamorphic art • Architecture • Geodesic dome • Islamic • Mughal • Pyramid • Vastu shastra • Computer art • Fiber arts • 4D art • Fractal art • Islamic geometric patterns • Girih • Jali • Muqarnas • Zellij • Knotting • Celtic knot • Croatian interlace • Interlace • Music • Origami • Sculpture • String art • Tiling Artworks • List of works designed with the golden ratio • Continuum • Mathemalchemy • Mathematica: A World of Numbers... and Beyond • Octacube • Pi • Pi in the Sky Buildings • Cathedral of Saint Mary of the Assumption • Hagia Sophia • Pantheon • Parthenon • Pyramid of Khufu • Sagrada Família • Sydney Opera House • Taj Mahal Artists Renaissance • Paolo Uccello • Piero della Francesca • Leonardo da Vinci • Vitruvian Man • Albrecht Dürer • Parmigianino • Self-portrait in a Convex Mirror 19th–20th Century • William Blake • The Ancient of Days • Newton • Jean Metzinger • Danseuse au café • L'Oiseau bleu • Giorgio de Chirico • Man Ray • M. C. 
Escher • Circle Limit III • Print Gallery • Relativity • Reptiles • Waterfall • René Magritte • La condition humaine • Salvador Dalí • Crucifixion • The Swallow's Tail • Crockett Johnson Contemporary • Max Bill • Martin and Erik Demaine • Scott Draves • Jan Dibbets • John Ernest • Helaman Ferguson • Peter Forakis • Susan Goldstine • Bathsheba Grossman • George W. Hart • Desmond Paul Henry • Anthony Hill • Charles Jencks • Garden of Cosmic Speculation • Andy Lomas • Robert Longhurst • Jeanette McLeod • Hamid Naderi Yeganeh • István Orosz • Hinke Osinga • Antoine Pevsner • Tony Robbin • Alba Rojo Cama • Reza Sarhangi • Oliver Sin • Hiroshi Sugimoto • Daina Taimiņa • Roman Verostko • Margaret Wertheim Theorists Ancient • Polykleitos • Canon • Vitruvius • De architectura Renaissance • Filippo Brunelleschi • Leon Battista Alberti • De pictura • De re aedificatoria • Piero della Francesca • De prospectiva pingendi • Luca Pacioli • De divina proportione • Leonardo da Vinci • A Treatise on Painting • Albrecht Dürer • Vier Bücher von Menschlicher Proportion • Sebastiano Serlio • Regole generali d'architettura • Andrea Palladio • I quattro libri dell'architettura Romantic • Samuel Colman • Nature's Harmonic Unity • Frederik Macody Lund • Ad Quadratum • Jay Hambidge • The Greek Vase Modern • Owen Jones • The Grammar of Ornament • Ernest Hanbury Hankin • The Drawing of Geometric Patterns in Saracenic Art • G. H. Hardy • A Mathematician's Apology • George David Birkhoff • Aesthetic Measure • Douglas Hofstadter • Gödel, Escher, Bach • Nikos Salingaros • The 'Life' of a Carpet Publications • Journal of Mathematics and the Arts • Lumen Naturae • Making Mathematics with Needlework • Rhythm of Structure • Viewpoints: Mathematical Perspective and Fractal Geometry in Art Organizations • Ars Mathematica • The Bridges Organization • European Society for Mathematics and the Arts • Goudreau Museum of Mathematics in Art and Science • Institute For Figuring • Mathemalchemy • National Museum of Mathematics Related • Droste effect • Mathematical beauty • Patterns in nature • Sacred geometry • Category
Saddle point In mathematics, a saddle point or minimax point[1] is a point on the surface of the graph of a function where the slopes (derivatives) in orthogonal directions are all zero (a critical point), but which is not a local extremum of the function.[2] An example of a saddle point is when there is a critical point with a relative minimum along one axial direction (between peaks) and at a relative maximum along the crossing axis. However, a saddle point need not be in this form. For example, the function $f(x,y)=x^{2}+y^{3}$ has a critical point at $(0,0)$ that is a saddle point since it is neither a relative maximum nor relative minimum, but it does not have a relative maximum or relative minimum in the $y$-direction. The name derives from the fact that the prototypical example in two dimensions is a surface that curves up in one direction, and curves down in a different direction, resembling a riding saddle or a mountain pass between two peaks forming a landform saddle. In terms of contour lines, a saddle point in two dimensions gives rise to a contour map with a pair of lines intersecting at the point. Such intersections are rare in actual ordnance survey maps, as the height of the saddle point is unlikely to coincide with the integer multiples used in such maps. Instead, the saddle point appears as a blank space in the middle of four sets of contour lines that approach and veer away from it. For a basic saddle point, these sets occur in pairs, with an opposing high pair and an opposing low pair positioned in orthogonal directions. The critical contour lines generally do not have to intersect orthogonally. Mathematical discussion A simple criterion for checking if a given stationary point of a real-valued function F(x,y) of two real variables is a saddle point is to compute the function's Hessian matrix at that point: if the Hessian is indefinite, then that point is a saddle point. For example, the Hessian matrix of the function $z=x^{2}-y^{2}$ at the stationary point $(x,y,z)=(0,0,0)$ is the matrix ${\begin{bmatrix}2&0\\0&-2\\\end{bmatrix}}$ which is indefinite. Therefore, this point is a saddle point. This criterion gives only a sufficient condition. For example, the point $(0,0,0)$ is a saddle point for the function $z=x^{4}-y^{4},$ but the Hessian matrix of this function at the origin is the null matrix, which is not indefinite. In the most general terms, a saddle point for a smooth function (whose graph is a curve, surface or hypersurface) is a stationary point such that the curve/surface/etc. in the neighborhood of that point is not entirely on any side of the tangent space at that point. In a domain of one dimension, a saddle point is a point which is both a stationary point and a point of inflection. Since it is a point of inflection, it is not a local extremum. Saddle surface A saddle surface is a smooth surface containing one or more saddle points. Classical examples of two-dimensional saddle surfaces in the Euclidean space are second order surfaces, the hyperbolic paraboloid $z=x^{2}-y^{2}$ (which is often referred to as "the saddle surface" or "the standard saddle surface") and the hyperboloid of one sheet. The Pringles potato chip or crisp is an everyday example of a hyperbolic paraboloid shape. Saddle surfaces have negative Gaussian curvature which distinguish them from convex/elliptical surfaces which have positive Gaussian curvature. 
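To make the Hessian criterion above concrete, the following short Python sketch (added here for illustration, not part of the original text) classifies a stationary point from the eigenvalues of a finite-difference Hessian; it recovers the saddle at the origin of the hyperbolic paraboloid z = x² − y² and reports the test as inconclusive for z = x⁴ − y⁴, whose Hessian at the origin is the null matrix even though the origin is in fact a saddle point.

```python
# Numerical check (illustrative) of the Hessian criterion: a stationary point is a
# saddle point if the Hessian there is indefinite (eigenvalues of both signs).
import numpy as np

def hessian(f, x, y, h=1e-4):
    """Central finite-difference Hessian of f at (x, y)."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

def classify(f, x, y, tol=1e-6):
    eig = np.linalg.eigvalsh(hessian(f, x, y))
    if eig.min() > tol:
        return "local minimum"
    if eig.max() < -tol:
        return "local maximum"
    if eig.min() < -tol and eig.max() > tol:
        return "saddle point (indefinite Hessian)"
    return "test inconclusive (Hessian is degenerate)"

print(classify(lambda x, y: x**2 - y**2, 0.0, 0.0))  # saddle point (indefinite Hessian)
print(classify(lambda x, y: x**4 - y**4, 0.0, 0.0))  # test inconclusive (degenerate)
```

The tolerance is needed because finite differences never return an exactly zero Hessian; without it the degenerate case would be misclassified.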
A classical third-order saddle surface is the monkey saddle.[3] Examples In a two-player zero sum game defined on a continuous space, the equilibrium point is a saddle point. For a second-order linear autonomous system, a critical point is a saddle point if the characteristic equation has one positive and one negative real eigenvalue.[4] In optimization subject to equality constraints, the first-order conditions describe a saddle point of the Lagrangian. Other uses In dynamical systems, if the dynamic is given by a differentiable map f then a point is hyperbolic if and only if the differential of ƒ n (where n is the period of the point) has no eigenvalue on the (complex) unit circle when computed at the point. Then a saddle point is a hyperbolic periodic point whose stable and unstable manifolds have a dimension that is not zero. A saddle point of a matrix is an element which is both the largest element in its column and the smallest element in its row. See also • Saddle-point method is an extension of Laplace's method for approximating integrals • Maximum and minimum • Derivative test • Hyperbolic equilibrium point • Hyperbolic geometry • Minimax theorem • Max–min inequality • Monkey saddle • Mountain pass theorem References Citations 1. Howard Anton, Irl Bivens, Stephen Davis (2002): Calculus, Multivariable Version, p. 844. 2. Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (3rd ed.). New York: McGraw-Hill. p. 312. ISBN 0-07-010813-7. 3. Buck, R. Creighton (2003). Advanced Calculus (3rd ed.). Long Grove, IL: Waveland Press. p. 160. ISBN 1-57766-302-0. 4. von Petersdorff 2006 Sources • Gray, Lawrence F.; Flanigan, Francis J.; Kazdan, Jerry L.; Frank, David H.; Fristedt, Bert (1990), Calculus two: linear and nonlinear functions, Berlin: Springer-Verlag, p. 375, ISBN 0-387-97388-5 • Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York, NY: Chelsea, ISBN 978-0-8284-1087-8 • von Petersdorff, Tobias (2006), "Critical Points of Autonomous Systems", Differential Equations for Scientists and Engineers (Math 246 lecture notes) • Widder, D. V. (1989), Advanced calculus, New York, NY: Dover Publications, p. 128, ISBN 0-486-66103-2 • Agarwal, A., Study on the Nash Equilibrium (Lecture Notes) Further reading • Hilbert, David; Cohn-Vossen, Stephan (1952). Geometry and the Imagination (2nd ed.). Chelsea. ISBN 0-8284-1087-9. External links • Media related to Saddle point at Wikimedia Commons
Karush–Kuhn–Tucker conditions In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a (global) saddle point, i.e. a global maximum (minimum) over the domain of the choice variables and a global minimum (maximum) over the multipliers, which is why the Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.[1] The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951.[2] Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.[3][4] Nonlinear optimization problem Consider the following nonlinear optimization problem in standard form: minimize $f(\mathbf {x} )$ subject to $g_{i}(\mathbf {x} )\leq 0,$ $h_{j}(\mathbf {x} )=0.$ where $\mathbf {x} \in \mathbf {X} $ is the optimization variable chosen from a convex subset of $\mathbb {R} ^{n}$, $f$ is the objective or utility function, $g_{i}\ (i=1,\ldots ,m)$ are the inequality constraint functions and $h_{j}\ (j=1,\ldots ,\ell )$ are the equality constraint functions. The numbers of inequalities and equalities are denoted by $m$ and $\ell $ respectively. Corresponding to the constrained optimization problem one can form the Lagrangian function ${\mathcal {L}}(\mathbf {x} ,\mathbf {\mu } ,\mathbf {\lambda } )=f(\mathbf {x} )+\mathbf {\mu } ^{\top }\mathbf {g} (\mathbf {x} )+\mathbf {\lambda } ^{\top }\mathbf {h} (\mathbf {x} )=L(\mathbf {x} ,\mathbf {\alpha } )=f(\mathbf {x} )+\mathbf {\alpha } ^{\top }{\begin{pmatrix}\mathbf {g} (\mathbf {x} )\\\mathbf {h} (\mathbf {x} )\end{pmatrix}}$ where $\mathbf {g} \left(\mathbf {x} \right)={\begin{bmatrix}g_{1}\left(\mathbf {x} \right)\\\vdots \\g_{i}\left(\mathbf {x} \right)\\\vdots \\g_{m}\left(\mathbf {x} \right)\end{bmatrix}},\quad \mathbf {h} \left(\mathbf {x} \right)={\begin{bmatrix}h_{1}\left(\mathbf {x} \right)\\\vdots \\h_{j}\left(\mathbf {x} \right)\\\vdots \\h_{\ell }\left(\mathbf {x} \right)\end{bmatrix}},\quad \mathbf {\mu } ={\begin{bmatrix}\mu _{1}\\\vdots \\\mu _{i}\\\vdots \\\mu _{m}\\\end{bmatrix}},\quad \mathbf {\lambda } ={\begin{bmatrix}\lambda _{1}\\\vdots \\\lambda _{j}\\\vdots \\\lambda _{\ell }\end{bmatrix}}\quad {\text{and}}\quad \mathbf {\alpha } ={\begin{bmatrix}\mu \\\lambda \end{bmatrix}}.$ The Karush–Kuhn–Tucker theorem then states the following. Theorem — (sufficiency) If $(\mathbf {x} ^{\ast },\mathbf {\alpha } ^{\ast })$ is a saddle point of $L(\mathbf {x} ,\mathbf {\alpha } )$ in $\mathbf {x} \in \mathbf {X} $, $\mathbf {\mu } \geq \mathbf {0} $, then $\mathbf {x} ^{\ast }$ is an optimal vector for the above optimization problem. (necessity) Suppose that $f(\mathbf {x} )$ and $g_{i}(\mathbf {x} )$, $i=1,\ldots ,m$, are convex in $\mathbf {X} $ and that there exists $\mathbf {x} _{0}\in \operatorname {relint} (\mathbf {X} )$ such that $\mathbf {g} (\mathbf {x} _{0})<\mathbf {0} $ (i.e., Slater's condition holds). 
Then with an optimal vector $\mathbf {x} ^{\ast }$ for the above optimization problem there is associated a vector $\mathbf {\alpha } ^{\ast }={\begin{bmatrix}\mu ^{*}\\\lambda ^{*}\end{bmatrix}}$ satisfying $\mathbf {\mu } ^{*}\geq \mathbf {0} $ such that $(\mathbf {x} ^{\ast },\mathbf {\alpha } ^{\ast })$ is a saddle point of $L(\mathbf {x} ,\mathbf {\alpha } )$.[5] Since the idea of this approach is to find a supporting hyperplane on the feasible set $\mathbf {\Gamma } =\left\{\mathbf {x} \in \mathbf {X} :g_{i}(\mathbf {x} )\leq 0,i=1,\ldots ,m\right\}$, the proof of the Karush–Kuhn–Tucker theorem makes use of the hyperplane separation theorem.[6] The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.[7] Necessary conditions Suppose that the objective function $f\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} $ and the constraint functions $g_{i}\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} $ and $h_{j}\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} $ have subderivatives at a point $x^{*}\in \mathbb {R} ^{n}$. If $x^{*}$ is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants $\mu _{i}\ (i=1,\ldots ,m)$ and $\lambda _{j}\ (j=1,\ldots ,\ell )$, called KKT multipliers, such that the following four groups of conditions hold:[8] Stationarity For minimizing $f(x)$: $\partial f(x^{*})+\sum _{j=1}^{\ell }\lambda _{j}\partial h_{j}(x^{*})+\sum _{i=1}^{m}\mu _{i}\partial g_{i}(x^{*})\ni \mathbf {0} $ For maximizing $f(x)$: $-\partial f(x^{*})+\sum _{j=1}^{\ell }\lambda _{j}\partial h_{j}(x^{*})+\sum _{i=1}^{m}\mu _{i}\partial g_{i}(x^{*})\ni \mathbf {0} $ Primal feasibility $h_{j}(x^{*})=0,{\text{ for }}j=1,\ldots ,\ell \,\!$ $g_{i}(x^{*})\leq 0,{\text{ for }}i=1,\ldots ,m$ Dual feasibility $\mu _{i}\geq 0,{\text{ for }}i=1,\ldots ,m$ Complementary slackness $\sum _{i=1}^{m}\mu _{i}g_{i}(x^{*})=0.$ The last condition is sometimes written in the equivalent form: $\mu _{i}g_{i}(x^{*})=0,{\text{ for }}i=1,\ldots ,m.$ In the particular case $m=0$, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers. Proof Theorem — (sufficiency) If there exists a solution $x^{*}$ to the primal problem, a solution $(\mu ^{*},\lambda ^{*})$ to the dual problem, such that together they satisfy the KKT conditions, then the problem pair has strong duality, and $x^{*},(\mu ^{*},\lambda ^{*})$ is a solution pair to the primal and dual problems. (necessity) If the problem pair has strong duality, then for any solution $x^{*}$ to the primal problem and any solution $(\mu ^{*},\lambda ^{*})$ to the dual problem, the pair $x^{*},(\mu ^{*},\lambda ^{*})$ must satisfy the KKT conditions.[9] Proof First, for the $x^{*},(\mu ^{*},\lambda ^{*})$ to satisfy the KKT conditions is equivalent to them being a Nash equilibrium. Fix $(\mu ^{*},\lambda ^{*})$, and vary $x$: equilibrium is equivalent to primal stationarity. Fix $x^{*}$, and vary $(\mu ,\lambda )$: equilibrium is equivalent to primal feasibility and complementary slackness. 
Sufficiency: the solution pair $x^{*},(\mu ^{*},\lambda ^{*})$ satisfies the KKT conditions, thus is a Nash equilibrium, and therefore closes the duality gap. Necessity: any solution pair $x^{*},(\mu ^{*},\lambda ^{*})$ must close the duality gap, thus they must constitute a Nash equilibrium (since neither side could do any better), thus they satisfy the KKT conditions. Interpretation: KKT conditions as balancing constraint-forces in state space The primal problem can be interpreted as moving a particle in the space of $x$, and subjecting it to three kinds of force fields: • $f$ is a potential field that the particle is minimizing. The force generated by $f$ is $-\partial f$. • $g_{i}$ are one-sided constraint surfaces. The particle is allowed to move inside $g_{i}\leq 0$, but whenever it touches $g_{i}=0$, it is pushed inwards. • $h_{j}$ are two-sided constraint surfaces. The particle is allowed to move only on the surface $h_{j}$. Primal stationarity states that the "force" of $\partial f(x^{*})$ is exactly balanced by a linear sum of forces $\partial h_{j}(x^{*})$ and $\partial g_{i}(x^{*})$. Dual feasibility additionally states that all the $\partial g_{i}(x^{*})$ forces must be one-sided, pointing inwards into the feasible set for $x$. Complementary slackness states that if $g_{i}(x^{*})<0$, then the $\partial g_{i}(x^{*})$ force must be zero: since the particle is not on the boundary, the one-sided constraint force cannot activate. Matrix representation The necessary conditions can be written with Jacobian matrices of the constraint functions. Let $\mathbf {g} (x):\,\!\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{m}$ be defined as $\mathbf {g} (x)=\left(g_{1}(x),\ldots ,g_{m}(x)\right)^{\top }$ and let $\mathbf {h} (x):\,\!\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{\ell }$ be defined as $\mathbf {h} (x)=\left(h_{1}(x),\ldots ,h_{\ell }(x)\right)^{\top }$. Let ${\boldsymbol {\mu }}=\left(\mu _{1},\ldots ,\mu _{m}\right)^{\top }$ and ${\boldsymbol {\lambda }}=\left(\lambda _{1},\ldots ,\lambda _{\ell }\right)^{\top }$. Then the necessary conditions can be written as: Stationarity For maximizing $f(x)$: $\partial f(x^{*})-D\mathbf {g} (x^{*})^{\top }{\boldsymbol {\mu }}-D\mathbf {h} (x^{*})^{\top }{\boldsymbol {\lambda }}=\mathbf {0} $ For minimizing $f(x)$: $\partial f(x^{*})+D\mathbf {g} (x^{*})^{\top }{\boldsymbol {\mu }}+D\mathbf {h} (x^{*})^{\top }{\boldsymbol {\lambda }}=\mathbf {0} $ Primal feasibility $\mathbf {g} (x^{*})\leq \mathbf {0} $ $\mathbf {h} (x^{*})=\mathbf {0} $ Dual feasibility ${\boldsymbol {\mu }}\geq \mathbf {0} $ Complementary slackness ${\boldsymbol {\mu }}^{\top }\mathbf {g} (x^{*})=0.$ Regularity conditions (or constraint qualifications) One can ask whether a minimizer point $x^{*}$ of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer $x^{*}$ of a function $f(x)$ in an unconstrained problem has to satisfy the condition $\nabla f(x^{*})=0$. For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions. Some common examples of conditions that guarantee this are listed below, with the LICQ being the most frequently used one:
• Linearity constraint qualification (LCQ): If $g_{i}$ and $h_{j}$ are affine functions, then no other condition is needed.
• Linear independence constraint qualification (LICQ): The gradients of the active inequality constraints and the gradients of the equality constraints are linearly independent at $x^{*}$.
• Mangasarian–Fromovitz constraint qualification (MFCQ): The gradients of the equality constraints are linearly independent at $x^{*}$ and there exists a vector $d\in \mathbb {R} ^{n}$ such that $\nabla g_{i}(x^{*})^{\top }d<0$ for all active inequality constraints and $\nabla h_{j}(x^{*})^{\top }d=0$ for all equality constraints.[10]
• Constant rank constraint qualification (CRCQ): For each subset of the gradients of the active inequality constraints and the gradients of the equality constraints, the rank in a vicinity of $x^{*}$ is constant.
• Constant positive linear dependence constraint qualification (CPLD): For each subset of gradients of active inequality constraints and gradients of equality constraints, if the subset of vectors is linearly dependent at $x^{*}$ with non-negative scalars associated with the inequality constraints, then it remains linearly dependent in a neighborhood of $x^{*}$.
• Quasi-normality constraint qualification (QNCQ): If the gradients of the active inequality constraints and the gradients of the equality constraints are linearly dependent at $x^{*}$ with associated multipliers $\lambda _{j}$ for equalities and $\mu _{i}\geq 0$ for inequalities, then there is no sequence $x_{k}\to x^{*}$ such that $\lambda _{j}\neq 0\Rightarrow \lambda _{j}h_{j}(x_{k})>0$ and $\mu _{i}\neq 0\Rightarrow \mu _{i}g_{i}(x_{k})>0.$
• Slater's condition (SC): For a convex problem (i.e., assuming minimization, $f,g_{i}$ are convex and $h_{j}$ is affine), there exists a point $x$ such that $h(x)=0$ and $g_{i}(x)<0.$
The strict implications LICQ ⇒ MFCQ ⇒ CPLD ⇒ QNCQ and LICQ ⇒ CRCQ ⇒ CPLD ⇒ QNCQ can be shown. In practice, weaker constraint qualifications are preferred since they apply to a broader selection of problems. Sufficient conditions In some cases, the necessary conditions are also sufficient for optimality. In general, the necessary conditions are not sufficient for optimality and additional information is required, such as the Second Order Sufficient Conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains the name. The necessary conditions are sufficient for optimality if the objective function $f$ of a maximization problem is a differentiable concave function, the inequality constraints $g_{j}$ are differentiable convex functions, the equality constraints $h_{i}$ are affine functions, and Slater's condition holds.[11] Similarly, if the objective function $f$ of a minimization problem is a differentiable convex function, the necessary conditions are also sufficient for optimality. It was shown by Martin in 1985 that the broader class of functions in which the KKT conditions guarantee global optimality are the so-called Type 1 invex functions.[12][13] Second-order sufficient conditions For smooth, non-linear optimization problems, a second order sufficient condition is given as follows.
The solution $x^{*},\lambda ^{*},\mu ^{*}$ found in the above section is a constrained local minimum if for the Lagrangian, $L(x,\lambda ,\mu )=f(x)+\sum _{i=1}^{m}\mu _{i}g_{i}(x)+\sum _{j=1}^{\ell }\lambda _{j}h_{j}(x)$ then, $s^{T}\nabla _{xx}^{2}L(x^{*},\lambda ^{*},\mu ^{*})s\geq 0$ where $s\neq 0$ is a vector satisfying the following, $\left[\nabla _{x}g_{i}(x^{*}),\nabla _{x}h_{j}(x^{*})\right]^{T}s=0$ where only those active inequality constraints $g_{i}(x)$ corresponding to strict complementarity (i.e. where $\mu _{i}>0$) are applied. The solution is a strict constrained local minimum in the case the inequality is also strict. If $s^{T}\nabla _{xx}^{2}L(x^{*},\lambda ^{*},\mu ^{*})s=0$, the third order Taylor expansion of the Lagrangian should be used to verify if $x^{*}$ is a local minimum. The minimization of $f(x_{1},x_{2})=(x_{2}-x_{1}^{2})(x_{2}-3x_{1}^{2})$ is a good counter-example, see also Peano surface. Economics Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example,[14] consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting $Q$ be the quantity of output produced (to be chosen), $R(Q)$ be sales revenue with a positive first derivative and with a zero value at zero output, $C(Q)$ be production costs with a positive first derivative and with a non-negative value at zero output, and $G_{\min }$ be the positive minimal acceptable level of profit, then the problem is a meaningful one if the revenue function levels off so it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is Minimize $-R(Q)$ subject to $G_{\min }\leq R(Q)-C(Q)$ $Q\geq 0,$ and the KKT conditions are ${\begin{aligned}&\left({\frac {{\text{d}}R}{{\text{d}}Q}}\right)(1+\mu )-\mu \left({\frac {{\text{d}}C}{{\text{d}}Q}}\right)\leq 0,\\[5pt]&Q\geq 0,\\[5pt]&Q\left[\left({\frac {{\text{d}}R}{{\text{d}}Q}}\right)(1+\mu )-\mu \left({\frac {{\text{d}}C}{{\text{d}}Q}}\right)\right]=0,\\[5pt]&R(Q)-C(Q)-G_{\min }\geq 0,\\[5pt]&\mu \geq 0,\\[5pt]&\mu [R(Q)-C(Q)-G_{\min }]=0.\end{aligned}}$ Since $Q=0$ would violate the minimum profit constraint, we have $Q>0$ and hence the third condition implies that the first condition holds with equality. Solving that equality gives ${\frac {{\text{d}}R}{{\text{d}}Q}}={\frac {\mu }{1+\mu }}\left({\frac {{\text{d}}C}{{\text{d}}Q}}\right).$ Because it was given that ${\text{d}}R/{\text{d}}Q$ and ${\text{d}}C/{\text{d}}Q$ are strictly positive, this inequality along with the non-negativity condition on $\mu $ guarantees that $\mu $ is positive and so the revenue-maximizing firm operates at a level of output at which marginal revenue ${\text{d}}R/{\text{d}}Q$ is less than marginal cost ${\text{d}}C/{\text{d}}Q$ — a result that is of interest because it contrasts with the behavior of a profit maximizing firm, which operates at a level at which they are equal. 
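Returning to the general conditions, they can also be verified numerically on a small problem. The following is a minimal sketch for the hypothetical problem of minimizing $x_{1}^{2}+x_{2}^{2}$ subject to $x_{1}+x_{2}\geq 1$; the problem, the analytic solution $x^{*}=(1/2,1/2)$ with $\mu ^{*}=1$, and the printed checks are illustrative choices, not taken from the article:

```python
import numpy as np

# Toy problem (hypothetical, for illustration only):
#   minimize    f(x) = x1^2 + x2^2
#   subject to  g(x) = 1 - x1 - x2 <= 0
# The analytic optimum is x* = (1/2, 1/2) with multiplier mu* = 1.

x_star = np.array([0.5, 0.5])
mu_star = 1.0

grad_f = 2 * x_star              # gradient of the objective at x*
grad_g = np.array([-1.0, -1.0])  # gradient of the inequality constraint
g_val = 1.0 - x_star.sum()       # constraint value at x*

# Stationarity:  grad f(x*) + mu* grad g(x*) = 0
print("stationarity residual:", grad_f + mu_star * grad_g)
# Primal feasibility:  g(x*) <= 0
print("primal feasibility:", g_val <= 0)
# Dual feasibility:  mu* >= 0
print("dual feasibility:", mu_star >= 0)
# Complementary slackness:  mu* * g(x*) = 0
print("complementary slackness:", mu_star * g_val)
```

All four groups of conditions hold at this point, and since the objective is convex and the constraint affine, the sufficient conditions above confirm that it is the global minimizer.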
Value function If we reconsider the optimization problem as a maximization problem with constant inequality constraints: ${\text{Maximize }}\;f(x)$ ${\text{subject to }}\ $ $g_{i}(x)\leq a_{i},h_{j}(x)=0.$ The value function is defined as $V(a_{1},\ldots ,a_{m})=\sup \limits _{x}f(x)$ ${\text{subject to }}\ $ $g_{i}(x)\leq a_{i},h_{j}(x)=0$ $j\in \{1,\ldots ,\ell \},i\in \{1,\ldots ,m\},$ so the domain of $V$ is $\{a\in \mathbb {R} ^{m}\mid {\text{for some }}x\in X,g_{i}(x)\leq a_{i},i\in \{1,\ldots ,m\}\}.$ Given this definition, each coefficient $\mu _{i}$ is the rate at which the value function increases as $a_{i}$ increases. Thus if each $a_{i}$ is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our function $f$. This interpretation is especially important in economics and is used, for instance, in utility maximization problems. Generalizations With an extra multiplier $\mu _{0}\geq 0$, which may be zero (as long as $(\mu _{0},\mu ,\lambda )\neq 0$), in front of $\nabla f(x^{*})$ the KKT stationarity conditions turn into ${\begin{aligned}&\mu _{0}\,\nabla f(x^{*})+\sum _{i=1}^{m}\mu _{i}\,\nabla g_{i}(x^{*})+\sum _{j=1}^{\ell }\lambda _{j}\,\nabla h_{j}(x^{*})=0,\\[4pt]&\mu _{i}g_{i}(x^{*})=0,\quad i=1,\dots ,m,\end{aligned}}$ which are called the Fritz John conditions. These optimality conditions hold without constraint qualifications, and they are equivalent to the optimality condition "KKT or (not-MFCQ)". The KKT conditions belong to a wider class of the first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives. See also • Farkas' lemma • Lagrange multiplier • The Big M method, for linear problems, which extends the simplex algorithm to problems that contain "greater-than" constraints. • Interior-point method, a method to solve the KKT conditions. • Slack variable References 1. Tabak, Daniel; Kuo, Benjamin C. (1971). Optimal Control by Mathematical Programming. Englewood Cliffs, NJ: Prentice-Hall. pp. 19–20. ISBN 0-13-638106-5. 2. Kuhn, H. W.; Tucker, A. W. (1951). "Nonlinear programming". Proceedings of 2nd Berkeley Symposium. Berkeley: University of California Press. pp. 481–492. MR 0047303. 3. W. Karush (1939). Minima of Functions of Several Variables with Inequalities as Side Constraints (M.Sc. thesis). Dept. of Mathematics, Univ. of Chicago, Chicago, Illinois. 4. Kjeldsen, Tinne Hoff (2000). "A contextualized historical analysis of the Kuhn-Tucker theorem in nonlinear programming: the impact of World War II". Historia Math. 27 (4): 331–361. doi:10.1006/hmat.2000.2289. MR 1800317. 5. Walsh, G. R. (1975). "Saddle-point Property of Lagrangian Function". Methods of Optimization. New York: John Wiley & Sons. pp. 39–44. ISBN 0-471-91922-5. 6. Kemp, Murray C.; Kimura, Yoshio (1978). Introduction to Mathematical Economics. New York: Springer. pp. 38–44. ISBN 0-387-90304-6. 7. Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. p. 244. ISBN 0-521-83378-7. MR 2061575. 8. Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. ISBN 978-0691119151. MR 2199043. 9. Geoff Gordon & Ryan Tibshirani. "Karush-Kuhn-Tucker conditions, Optimization 10-725 / 36-725" (PDF). Archived from the original (PDF) on 2022-06-17. 10. Dimitri Bertsekas (1999). Nonlinear Programming (2 ed.). Athena Scientific. pp. 329–330. ISBN 9781886529007. 11. Boyd, Stephen; Vandenberghe, Lieven (2004).
Convex Optimization. Cambridge: Cambridge University Press. p. 244. ISBN 0-521-83378-7. MR 2061575. 12. Martin, D. H. (1985). "The Essence of Invexity". J. Optim. Theory Appl. 47 (1): 65–76. doi:10.1007/BF00941316. S2CID 122906371. 13. Hanson, M. A. (1999). "Invexity and the Kuhn-Tucker Theorem". J. Math. Anal. Appl. 236 (2): 594–604. doi:10.1006/jmaa.1999.6484. 14. Chiang, Alpha C. Fundamental Methods of Mathematical Economics, 3rd edition, 1984, pp. 750–752. Further reading • Andreani, R.; Martínez, J. M.; Schuverdt, M. L. (2005). "On the relation between constant positive linear dependence condition and quasinormality constraint qualification". Journal of Optimization Theory and Applications. 125 (2): 473–485. doi:10.1007/s10957-004-1861-9. S2CID 122212394. • Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover. ISBN 0-486-43227-0. • Boltyanski, V.; Martini, H.; Soltan, V. (1998). "The Kuhn–Tucker Theorem". Geometric Methods and Optimization Problems. New York: Springer. pp. 78–92. ISBN 0-7923-5454-0. • Boyd, S.; Vandenberghe, L. (2004). "Optimality Conditions" (PDF). Convex Optimization. Cambridge University Press. pp. 241–249. ISBN 0-521-83378-7. • Kemp, Murray C.; Kimura, Yoshio (1978). Introduction to Mathematical Economics. New York: Springer. pp. 38–73. ISBN 0-387-90304-6. • Rau, Nicholas (1981). "Lagrange Multipliers". Matrices and Mathematical Programming. London: Macmillan. pp. 156–174. ISBN 0-333-27768-6. • Nocedal, J.; Wright, S. J. (2006). Numerical Optimization. New York: Springer. ISBN 978-0-387-30303-1. • Sundaram, Rangarajan K. (1996). "Inequality Constraints and the Theorem of Kuhn and Tucker". A First Course in Optimization Theory. New York: Cambridge University Press. pp. 145–171. ISBN 0-521-49770-1. External links • Karush–Kuhn–Tucker conditions with derivation and examples • Examples and Tutorials on the KKT Conditions
Saddle tower In differential geometry, a saddle tower is a minimal surface family generalizing the singly periodic Scherk's second surface so that it has N-fold (N > 2) symmetry around one axis.[1][2] These surfaces are the only properly embedded singly periodic minimal surfaces in $\mathbb {R} ^{3}$ with genus zero and finitely many Scherk-type ends in the quotient.[3] References 1. H. Karcher, Embedded minimal surfaces derived from Scherk's examples, Manuscripta Math. 62 (1988) pp. 83–114. 2. H. Karcher, Construction of minimal surfaces, in "Surveys in Geometry", Univ. of Tokyo, 1989, and Lecture Notes No. 12, SFB 256, Bonn, 1989, pp. 1–96. 3. Joaquín Pérez and Martin Traizet, The classification of singly periodic minimal surfaces with genus zero and Scherk-type ends, Transactions of the American Mathematical Society, Volume 359, Number 3, March 2007, Pages 965–990. External links • Images of The Saddle Tower Surface Families
Saddlepoint approximation method The saddlepoint approximation method, initially proposed by Daniels (1954) is a specific example of the mathematical saddlepoint technique applied to statistics. It provides a highly accurate approximation formula for any PDF or probability mass function of a distribution, based on the moment generating function. There is also a formula for the CDF of the distribution, proposed by Lugannani and Rice (1980). Definition If the moment generating function of a distribution is written as $M(t)$ and the cumulant generating function as $K(t)=\log(M(t))$ then the saddlepoint approximation to the PDF of a distribution is defined as: ${\hat {f}}(x)={\frac {1}{\sqrt {2\pi K''({\hat {s}})}}}\exp(K({\hat {s}})-{\hat {s}}x)$ and the saddlepoint approximation to the CDF is defined as: ${\hat {F}}(x)={\begin{cases}\Phi ({\hat {w}})+\phi ({\hat {w}})({\frac {1}{\hat {w}}}-{\frac {1}{\hat {u}}})&{\text{for }}x\neq \mu \\{\frac {1}{2}}+{\frac {K'''(0)}{6{\sqrt {2\pi }}K''(0)^{3/2}}}&{\text{for }}x=\mu \end{cases}}$ where ${\hat {s}}$ is the solution to $K'({\hat {s}})=x$, ${\hat {w}}=\operatorname {sgn} {\hat {s}}{\sqrt {2({\hat {s}}x-K({\hat {s}}))}}$ and ${\hat {u}}={\hat {s}}{\sqrt {K''({\hat {s}})}}$. When the distribution is that of a sample mean, Lugannani and Rice's saddlepoint expansion for the cumulative distribution function $F(x)$ may be differentiated to obtain Daniels' saddlepoint expansion for the probability density function $f(x)$ (Routledge and Tsao, 1997). This result establishes the derivative of a truncated Lugannani and Rice series as an alternative asymptotic approximation for the density function $f(x)$. Unlike the original saddlepoint approximation for $f(x)$, this alternative approximation in general does not need to be renormalized. References • Butler, Ronald W. (2007), Saddlepoint approximations with applications, Cambridge: Cambridge University Press, ISBN 9780521872508 • Daniels, H. E. (1954), "Saddlepoint Approximations in Statistics", The Annals of Mathematical Statistics, 25 (4): 631–650, doi:10.1214/aoms/1177728652 • Daniels, H. E. (1980), "Exact Saddlepoint Approximations", Biometrika, 67 (1): 59–63, doi:10.1093/biomet/67.1.59, JSTOR 2335316 • Lugannani, R.; Rice, S. (1980), "Saddle Point Approximation for the Distribution of the Sum of Independent Random Variables", Advances in Applied Probability, 12 (2): 475–490, doi:10.2307/1426607, JSTOR 1426607, S2CID 124484743 • Reid, N. (1988), "Saddlepoint Methods and Statistical Inference", Statistical Science, 3 (2): 213–227, doi:10.1214/ss/1177012906 • Routledge, R. D.; Tsao, M. (1997), "On the relationship between two asymptotic expansions for the distribution of sample mean and its applications", Annals of Statistics, 25 (5): 2200–2209, doi:10.1214/aos/1069362394
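To illustrate the density approximation defined above, the following minimal sketch applies it to an Exp(1) random variable, whose cumulant generating function is $K(t)=-\log(1-t)$ for $t<1$; the test distribution and the evaluation points are illustrative choices, not part of the article:

```python
import math

# Saddlepoint approximation to the density of an Exp(1) random variable.
# CGF: K(t) = -log(1 - t) for t < 1, so K'(t) = 1/(1 - t) and K''(t) = 1/(1 - t)^2.
def saddlepoint_density(x: float) -> float:
    s_hat = 1.0 - 1.0 / x          # solves K'(s) = x in closed form
    K = -math.log(1.0 - s_hat)
    K2 = 1.0 / (1.0 - s_hat) ** 2
    return math.exp(K - s_hat * x) / math.sqrt(2.0 * math.pi * K2)

for x in (0.5, 1.0, 2.0, 5.0):
    exact = math.exp(-x)
    approx = saddlepoint_density(x)
    print(f"x={x}: exact={exact:.4f}  saddlepoint={approx:.4f}  ratio={approx/exact:.4f}")
```

Here the ratio of the approximation to the exact density $e^{-x}$ is the constant $e/{\sqrt {2\pi }}\approx 1.084$, so renormalizing the approximation to integrate to one recovers the exact density in this case, matching the remark above that renormalization is sometimes (though not always) needed.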
Sadleirian Professor of Pure Mathematics The Sadleirian Professorship of Pure Mathematics, originally spelled in the statutes and for the first two professors as Sadlerian,[1] is a professorship in pure mathematics within the DPMMS at the University of Cambridge. It was founded on a bequest from Lady Mary Sadleir for lectureships "for the full and clear explication and teaching that part of mathematical knowledge commonly called algebra". She died in 1706 and lectures began in 1710 but eventually these failed to attract undergraduates. In 1860 the foundation was used to establish the professorship.[2][3] On 10 June 1863 Arthur Cayley was elected with the statutory duty "to explain and teach the principles of pure mathematics, and to apply himself to the advancement of that science." The stipend attached to the professorship was modest although it improved in the course of subsequent legislation. List of Sadlerian Lecturers of Pure Mathematics • 1746–1769 William Ludlam • 1826–1835 Lawrence Stephenson[4] List of Sadleirian Lecturers of Pure Mathematics • 1845–1847 Arthur Scratchley[5] • 1847–1857 George Ferns Reyner[6] • 1851 Stephen Hanson[7] • 1855–1858 William Charles Green[8] • 1857–1864 John Robert Lunn List of Sadleirian Professors of Pure Mathematics • 1863–1895 Arthur Cayley • 1895–1910 Andrew Russell Forsyth • 1910–1931 E. W. Hobson • 1931–1942 G. H. Hardy • 1945–1953 Louis Mordell • 1953–1967 Philip Hall • 1967–1986 J. W. S. Cassels • 1986–2012 John H. Coates • 2013–2014 Vladimir Markovic • 2017–2021 Emmanuel Breuillard References 1. Encyclopædia Britannica, 15th edition 2. Piaggio, H. T. H. (1931). "Three Sadleirian Professors: A. R. Forsyth, E. W. Hobson and G. H. Hardy". The Mathematical Gazette. 15 (215): 461–465. doi:10.2307/3606220. JSTOR 3606220. S2CID 187727124. 3. Piaggio, H. "Three Sadleirian Professors: A.R. Forsyth, E.W. Hobson and G.H. Hardy". MacTutor History of Mathematics archive. St. Andrews University. Retrieved 12 July 2015. 4. "Searching for Surname=STEPHENSON; Forename=lawrence". A Cambridge Alumni Database. 5. "Searching for Surname=SCRATCHLEY; Forename=arthur". A Cambridge Alumni Database. 6. "Searching for Surname=REYNER; Forename=george". A Cambridge Alumni Database. 7. "A Cambridge Alumni Database". A Cambridge Alumni Database. 8. "Searching for Surname=GREEN; Forename=william charles". A Cambridge Alumni Database. Sources • Obituary Notices of Fellows Deceased. (1895). Proceedings of the Royal Society of London, 58, I-Lx. Retrieved from https://www.jstor.org/stable/115800 (Obituary of Arthur Cayley written by Andrew Forsyth). • University of Cambridge DPMMS https://web.archive.org/web/20160624155328/http://www.admin.cam.ac.uk/offices/academic/secretary/professorships/sadleirian.pdf
The Assayer The Assayer (Italian: Il Saggiatore) was a book published in Rome by Galileo Galilei in October 1623 and is generally considered to be one of the pioneering works of the scientific method, first broaching the idea that the book of nature is to be read with mathematical tools rather than those of scholastic philosophy, as generally held at the time. Galileo vs. Grassi on comets Main article: Galileo Galilei § Controversy over comets and The Assayer In 1619, Galileo became embroiled in a controversy with Father Orazio Grassi, professor of mathematics at the Jesuit Collegio Romano. It began as a dispute over the nature of comets, but by the time Galileo had published The Assayer, his last salvo in the dispute, it had become a much wider controversy over the very nature of science itself. An Astronomical Disputation The debate between Galileo and Grassi started in early 1619, when Father Grassi anonymously published the pamphlet, An Astronomical Disputation on the Three Comets of the Year 1618 (Disputatio astronomica de tribus cometis anni MDCXVIII),[1] which discussed the nature of a comet that had appeared late in November of the previous year. Grassi concluded that the comet was a fiery, celestial body that had moved along a segment of a great circle at a constant distance from the earth,[2][3] and since it moved in the sky more slowly than the Moon, it must be farther away than the Moon. Tychonic system Grassi adopted Tycho Brahe's Tychonic system, in which the other planets of the Solar System orbit around the Sun, which, in turn, orbits around the Earth. In his Disputatio Grassi referenced many of Galileo's observations, such as the surface of the Moon and the phases of Venus, without mentioning him. Grassi argued from the apparent absence of observable parallax that comets move beyond the Moon. Galileo never explicitly stated that comets are an illusion, but merely wondered if they are real or an optical illusion. Discourse on Comets Grassi's arguments and conclusions were criticised in a subsequent pamphlet, Discourse on Comets,[4] published under the name of one of Galileo's disciples, a Florentine lawyer named Mario Guiducci, although it had been largely written by Galileo himself.[5] Galileo and Guiducci offered no definitive theory of their own on the nature of comets,[6][7] although they did present some tentative conjectures that are now known to be mistaken. (The correct approach to the study of comets had been proposed at the time by Tycho Brahe.) In its opening passage, Galileo and Guiducci's Discourse gratuitously insulted the Jesuit Christoph Scheiner,[8][9][10] and various uncomplimentary remarks about the professors of the Collegio Romano were scattered throughout the work.[8] The Astronomical and Philosophical Balance The Jesuits were offended,[7][8] and Grassi soon replied with a polemical tract of his own, The Astronomical and Philosophical Balance (Libra astronomica ac philosophica),[11] under the pseudonym Lothario Sarsio Sigensano, purporting to be one of his own pupils. Science, mathematics, and philosophy In 1616 Galileo may have been silenced on Copernicanism. In 1623 his supporter and friend, Cardinal Maffeo Barberini, a former patron of the Accademia dei Lincei and uncle of future Cardinal Francesco Barberini, became Pope Urban VIII. The election of Barberini seemed to assure Galileo of support at the highest level in the Church. A visit to Rome confirmed this. 
The Assayer is a milestone in the history of science: here Galileo describes the scientific method, which was quite a revolution at the time. The title page of The Assayer shows the crest of the Barberini family, featuring three busy bees. In The Assayer, Galileo weighs the astronomical views of a Jesuit, Orazio Grassi, and finds them wanting. The book was dedicated to the new pope. The title page also shows that Urban VIII employed a member of the Lynx, Cesarini, at a high level in the papal service. This book was edited and published by members of the Lynx. In The Assayer Galileo mainly criticized Grassi's method of inquiry, heavily biased by his religious belief and based on ipse dixit, rather than his hypothesis on comets. Furthermore, he insisted that natural philosophy (i.e. physics) should be mathematical. According to the title page, he was the philosopher (i.e. physicist) of the Grand Duke of Tuscany, not merely the mathematician. Natural philosophy (physics) spans the gamut from processes of generation and growth (represented by a plant) to the physical structure of the universe, represented by the cosmic cross-section. Mathematics, on the other hand, is symbolized by telescopes, and an astrolabe. The language of science The Assayer contains Galileo’s famous statement that mathematics is the language of science. Only through mathematics can one achieve lasting truth in physics. Those who neglect mathematics wander endlessly in a dark labyrinth. From the book:[12] Philosophy [i.e. natural philosophy] is written in this grand book — I mean the Universe — which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth. Galileo used a sarcastic and witty tone throughout the essay. The book was read with delight at the dinner table by Urban VIII.[13] In 1620 Maffeo Barberini wrote a poem entitled Adulatio Perniciosa in Galileo's honor.[14] An official, Giovanni di Guevara, said that The Assayer was free from any unorthodoxy.[15] Also in the book, Galileo theorizes that senses such as smell and taste are made possible by the release of tiny particles from their host substances, which was correct but not proven until later.[16][17] And those minute particles which rise up may enter by our nostrils and strike upon some small protuberances which are the instrument of smelling; here likewise their touch and passage is received to our like or dislike according as they have this or that shape, are fast or slow, and are numerous or few. (p 276). The rubbing together and friction of two hard bodies, either by resolving their parts into very subtle flying particles or by opening an exit for the tiny fire-corpuscles within, ultimately sets these in motion; and when they meet our bodies and penetrate them, our conscious mind feels those pleasant or unpleasant sensations which we have named heat... (p. 278). See also • Book of Nature References 1. Grassi, H. (Horatio in the source publication). (1960) [1619]. "On the three comets of the year 1618". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. 3–19.{{cite book}}: CS1 maint: multiple names: editors list (link) 2. Drake, S. 
(1978). Galileo at work: His scientific biography. University of Chicago Press. p. 268. 3. Grassi, H. (1960) [1619]. "On the three comets of the year 1618". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. 3–19 (specifically p. 16).{{cite book}}: CS1 maint: multiple names: editors list (link) 4. Guiducci, M. (1960) [1619]. "Discourse on the comets". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. 21–65.{{cite book}}: CS1 maint: multiple names: editors list (link) 5. Drake, S. (1960). "Introduction". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. vii–xxv (specifically p. xvi).{{cite book}}: CS1 maint: multiple names: editors list (link) 6. Drake, S. (1957). "Introduction: Fourth part". In Drake, S. (ed.). Discoveries and opinions of Galileo (PDF). Doubleday Anchor Books. p. 222. 7. Drake, S. (1960). "Introduction". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. vii–xxv (specifically p. xvii).{{cite book}}: CS1 maint: multiple names: editors list (link) 8. Sharratt, M. (1994). Galileo: Decisive innovator. Cambridge University Press. p. 135. 9. Drake, S. (1960). "Introduction". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. vii–xxv (specifically p. xii).{{cite book}}: CS1 maint: multiple names: editors list (link) 10. Guiducci, M. (1960) [1619]. "Discourse on the comets". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. 21–65 (specifically p. 24).{{cite book}}: CS1 maint: multiple names: editors list (link) 11. Grassi, H. (1960) [1619]. "The astronomical and philosophical balance". In Drake, S., & O'Malley, C. D. (ed.). The controversy on the comets of 1618. University of Pennsylvania Press. pp. 67–132.{{cite book}}: CS1 maint: multiple names: editors list (link) 12. Galileo Galilei, The Assayer, as translated by Stillman Drake (1957), Discoveries and Opinions of Galileo pp. 237-8. 13. Amir Alexander (2014). Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. Scientific American / Farrar, Straus and Giroux. ISBN 978-0374176815., p. 131 14. The Galileo Project 15. William A. Wallace, Galileo, the Jesuits and the Medieval Aristotle, (1991), pp.VII, 81-83 16. Fermi, Laura (1961). The Story of Atomic Energy. New York: Random House. p. 6. 17. Galilei, G. (1957) [1623]. "The Assayer". In Drake, S. (ed.). Discoveries and Opinions of Galileo (PDF). Doubleday. p. 276. Sources • Galileo Galilei, Il Saggiatore (in Italian) (Rome, 1623); The Assayer, English trans. Stillman Drake and C. D. O'Malley, in The Controversy on the Comets of 1618 (University of Pennsylvania Press, 1960). 
• Pietro Redondi, Galileo eretico, 1983; Galileo: Heretic (transl: Raymond Rosenthal) Princeton University Press 1987 (reprint 1989 ISBN 0-691-02426-X); Penguin 1988 (reprint 1990 ISBN 0-14-012541-8) External links • PDF version of the abridged text of The Assayer - Stanford University • Galileo, Selections from The Assayer - Princeton University
Sagitta (geometry) In geometry, the sagitta (sometimes abbreviated as sag[1]) of a circular arc is the distance from the center of the arc to the center of its base.[2] It is used extensively in architecture when calculating the arc necessary to span a certain height and distance and also in optics where it is used to find the depth of a spherical mirror or lens. The name comes directly from Latin sagitta, meaning an arrow. Look up sagitta in Wiktionary, the free dictionary. Formulas In the following equations, $s$ denotes the sagitta (the depth or height of the arc), $r$ equals the radius of the circle, and $l$ the length of the chord spanning the base of the arc. As ${\tfrac {1}{2}}l$ and $r-s$ are two sides of a right triangle with $r$ as the hypotenuse, the Pythagorean theorem gives us $r^{2}=\left({\tfrac {1}{2}}l\right)^{2}+\left(r-s\right)^{2}.$ This may be rearranged to give any of the other three: ${\begin{aligned}s&=r-{\sqrt {r^{2}-{\tfrac {1}{4}}l^{2}}},\\[10mu]l&=2{\sqrt {2rs-s^{2}}},\\[5px]r&={\frac {s^{2}+{\tfrac {1}{4}}l^{2}}{2s}}={\frac {s}{2}}+{\frac {l^{2}}{8s}}.\end{aligned}}$ The sagitta may also be calculated from the versine function, for an arc that spans an angle of Δ = 2θ, and coincides with the versine for unit circles $s=r\operatorname {versin} \theta =r\left(1-\cos \theta \right)=2r\sin ^{2}{\frac {\theta }{2}}.$ Approximation When the sagitta is small in comparison to the radius, it may be approximated by the formula[2] $s\approx {\frac {l^{2}}{8r}}.$ Alternatively, if the sagitta is small and the sagitta, radius, and chord length are known, they may be used to estimate the arc length by the formula $a\approx l+{\frac {2s^{2}}{r}}\approx l+{\frac {8s^{2}}{3l}},$ where a is the length of the arc; this formula was known to the Chinese mathematician Shen Kuo, and a more accurate formula also involving the sagitta was developed two centuries later by Guo Shoujing.[3] Applications Architects, engineers, and contractors use these equations to create "flattened" arcs that are used in curved walls, arched ceilings, bridges, and numerous other applications. The sagitta also has uses in physics where it is used, along with chord length, to calculate the radius of curvature of an accelerated particle. This is used especially in bubble chamber experiments where it is used to determine the momenta of decay particles. Likewise historically the sagitta is also utilised as a parameter in the calculation of moving bodies in a centripetal system. This method is utilised in Newton's Principia. See also • Circular segment • Versine References 1. Shaneyfelt, Ted V. "德博士的 Notes About Circles, ज्य, & कोज्य: What in the world is a hacovercosine?". Hilo, Hawaii: University of Hawaii. Archived from the original on 2015-09-19. Retrieved 2015-11-08. 2. Woodward, Ernest (December 1978). Geometry - Plane, Solid & Analytic Problem Solver. Problem Solvers Solution Guides. Research & Education Association (REA). p. 359. ISBN 978-0-87891-510-1. 3. Needham, Noel Joseph Terence Montgomery (1959). Science and Civilisation in China: Mathematics and the Sciences of the Heavens and the Earth. Vol. 3. Cambridge University Press. p. 39. ISBN 9780521058018. External links • Calculating the Sagitta of an Arc
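The formulas above translate directly into a short routine; a minimal sketch (the radius and chord values are arbitrary illustrations):

```python
import math

def sagitta(r: float, l: float) -> float:
    """Depth s of a circular arc of radius r spanning a chord of length l."""
    return r - math.sqrt(r * r - (l / 2.0) ** 2)

def radius(s: float, l: float) -> float:
    """Radius recovered from the sagitta s and the chord length l."""
    return s / 2.0 + l * l / (8.0 * s)

r, l = 10.0, 6.0
s = sagitta(r, l)
print(s)                 # exact sagitta, about 0.4606
print(radius(s, l))      # recovers the radius 10.0
print(l * l / (8.0 * r)) # shallow-arc approximation, 0.45
```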
Sahlqvist formula In modal logic, Sahlqvist formulas are a certain kind of modal formula with remarkable properties. The Sahlqvist correspondence theorem states that every Sahlqvist formula is canonical, and corresponds to a first-order definable class of Kripke frames. Sahlqvist's definition characterizes a decidable set of modal formulas with first-order correspondents. Since it is undecidable, by Chagrova's theorem, whether an arbitrary modal formula has a first-order correspondent, there are formulas with first-order frame conditions that are not Sahlqvist [Chagrova 1991] (see the examples below). Hence Sahlqvist formulas define only a (decidable) subset of modal formulas with first-order correspondents. Definition Sahlqvist formulas are built up from implications, where the consequent is positive and the antecedent is of a restricted form. • A boxed atom is a propositional atom preceded by a number (possibly 0) of boxes, i.e. a formula of the form $\Box \cdots \Box p$ (often abbreviated as $\Box ^{i}p$ for $0\leq i<\omega $). • A Sahlqvist antecedent is a formula constructed using ∧, ∨, and $\Diamond $ from boxed atoms, and negative formulas (including the constants ⊥, ⊤). • A Sahlqvist implication is a formula A → B, where A is a Sahlqvist antecedent, and B is a positive formula. • A Sahlqvist formula is constructed from Sahlqvist implications using ∧ and $\Box $ (unrestricted), and using ∨ on formulas with no common variables. Examples of Sahlqvist formulas $p\rightarrow \Diamond p$ Its first-order corresponding formula is $\forall x\;Rxx$, and it defines all reflexive frames $p\rightarrow \Box \Diamond p$ Its first-order corresponding formula is $\forall x\forall y[Rxy\rightarrow Ryx]$, and it defines all symmetric frames $\Diamond \Diamond p\rightarrow \Diamond p$ or $\Box p\rightarrow \Box \Box p$ Its first-order corresponding formula is $\forall x\forall y\forall z[(Rxy\land Ryz)\rightarrow Rxz]$, and it defines all transitive frames $\Diamond p\rightarrow \Diamond \Diamond p$ or $\Box \Box p\rightarrow \Box p$ Its first-order corresponding formula is $\forall x\forall y[Rxy\rightarrow \exists z(Rxz\land Rzy)]$, and it defines all dense frames $\Box p\rightarrow \Diamond p$ Its first-order corresponding formula is $\forall x\exists y\;Rxy$, and it defines all right-unbounded frames (also called serial) $\Diamond \Box p\rightarrow \Box \Diamond p$ Its first-order corresponding formula is $\forall x\forall x_{1}\forall z_{0}[Rxx_{1}\land Rxz_{0}\rightarrow \exists z_{1}(Rx_{1}z_{1}\land Rz_{0}z_{1})]$, and it is the Church-Rosser property. Examples of non-Sahlqvist formulas $\Box \Diamond p\rightarrow \Diamond \Box p$ This is the McKinsey formula; it does not have a first-order frame condition. $\Box (\Box p\rightarrow p)\rightarrow \Box p$ The Löb axiom is not Sahlqvist; again, it does not have a first-order frame condition. $(\Box \Diamond p\rightarrow \Diamond \Box p)\land (\Diamond \Diamond q\rightarrow \Diamond q)$ The conjunction of the McKinsey formula and the (4) axiom has a first-order frame condition (the conjunction of the transitivity property with the property $\forall x[\forall y(Rxy\rightarrow \exists z[Ryz])\rightarrow \exists y(Rxy\wedge \forall z[Ryz\rightarrow z=y])]$) but is not equivalent to any Sahlqvist formula. Kracht's theorem When a Sahlqvist formula is used as an axiom in a normal modal logic, the logic is guaranteed to be complete with respect to the elementary class of frames the axiom defines. 
This result comes from the Sahlqvist completeness theorem [Modal Logic, Blackburn et al., Theorem 4.42]. But there is also a converse theorem, namely a theorem that states which first-order conditions are the correspondents of Sahlqvist formulas. Kracht's theorem states that any Sahlqvist formula locally corresponds to a Kracht formula; and conversely, every Kracht formula is a local first-order correspondent of some Sahlqvist formula which can be effectively obtained from the Kracht formula [Modal Logic, Blackburn et al., Theorem 3.59]. References • L. A. Chagrova, 1991. An undecidable problem in correspondence theory. Journal of Symbolic Logic 56:1261–1272. • Marcus Kracht, 1993. How completeness and correspondence theory got married. In de Rijke, editor, Diamonds and Defaults, pages 175–214. Kluwer. • Henrik Sahlqvist, 1975. Correspondence and completeness in the first- and second-order semantics for modal logic. In Proceedings of the Third Scandinavian Logic Symposium. North-Holland, Amsterdam.
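The first-order frame conditions listed among the examples above can be checked directly on a finite Kripke frame; the following minimal sketch does so for a small, made-up frame (the worlds and relation are illustrative only):

```python
from itertools import product

# A finite Kripke frame: a set of worlds W and an accessibility relation R.
W = {0, 1, 2}
R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}

reflexive  = all((x, x) in R for x in W)                       # p -> <>p
symmetric  = all((y, x) in R for (x, y) in R)                  # p -> []<>p
transitive = all((x, z) in R
                 for (x, y), (y2, z) in product(R, R) if y == y2)  # []p -> [][]p
serial     = all(any((x, y) in R for y in W) for x in W)       # []p -> <>p

print(reflexive, symmetric, transitive, serial)   # True True True True
```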
Saint-Venant's compatibility condition In the mathematical theory of elasticity, Saint-Venant's compatibility condition defines the relationship between the strain $\varepsilon $ and a displacement field $\ u$ by $\epsilon _{ij}={\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}\right)$ where $1\leq i,j\leq 3$. Barré de Saint-Venant derived the compatibility condition for an arbitrary symmetric second rank tensor field to be of this form, this has now been generalized to higher rank symmetric tensor fields on spaces of dimension $n\geq 2$ Rank 2 tensor fields For a symmetric rank 2 tensor field $F$ in n-dimensional Euclidean space ($n\geq 2$) the integrability condition takes the form of the vanishing of the Saint-Venant's tensor $W(F)$ [1] defined by $W_{ijkl}={\frac {\partial ^{2}F_{ij}}{\partial x_{k}\partial x_{l}}}+{\frac {\partial ^{2}F_{kl}}{\partial x_{i}\partial x_{j}}}-{\frac {\partial ^{2}F_{il}}{\partial x_{j}\partial x_{k}}}-{\frac {\partial ^{2}F_{jk}}{\partial x_{i}\partial x_{l}}}$ The result that, on a simply connected domain W=0 implies that strain is the symmetric derivative of some vector field, was first described by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886.[2] For non-simply connected domains there are finite dimensional spaces of symmetric tensors with vanishing Saint-Venant's tensor that are not the symmetric derivative of a vector field. The situation is analogous to de Rham cohomology[3] The Saint-Venant tensor $W$ is closely related to the Riemann curvature tensor $R_{ijkl}$. Indeed the first variation $R$ about the Euclidean metric with a perturbation in the metric $F$ is precisely $W$.[4] Consequently the number of independent components of $W$ is the same as $R$[5] specifically ${\frac {n^{2}(n^{2}-1)}{12}}$ for dimension n.[6] Specifically for $n=2$, $W$ has only one independent component where as for $n=3$ there are six. In its simplest form of course the components of $F$ must be assumed twice continuously differentiable, but more recent work[2] proves the result in a much more general case. The relation between Saint-Venant's compatibility condition and Poincaré's lemma can be understood more clearly using a reduced form of $W$ the Kröner tensor [5] $K_{i_{1}...i_{n-2}j_{1}...j_{n-2}}=\epsilon _{i_{1}...i_{n-2}kl}\epsilon _{j_{1}...j_{n-2}mp}F_{lm,kp}$ where $\epsilon $ is the permutation symbol. For $n=3$, $K$is a symmetric rank 2 tensor field. The vanishing of $K$ is equivalent to the vanishing of $W$ and this also shows that there are six independent components for the important case of three dimensions. While this still involves two derivatives rather than the one in the Poincaré lemma, it is possible to reduce to a problem involving first derivatives by introducing more variables and it has been shown that the resulting 'elasticity complex' is equivalent to the de Rham complex.[7] In differential geometry the symmetrized derivative of a vector field appears also as the Lie derivative of the metric tensor g with respect to the vector field. $T_{ij}=({\mathcal {L}}_{U}g)_{ij}=U_{i;j}+U_{j;i}$ where indices following a semicolon indicate covariant differentiation. The vanishing of $W(T)$ is thus the integrability condition for local existence of $U$ in the Euclidean case. As noted above this coincides with the vanishing of the linearization of the Riemann curvature tensor about the Euclidean metric. 
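The rank-2 compatibility condition above can be checked symbolically: for any smooth displacement field $u$, the Saint-Venant tensor of its symmetric derivative vanishes identically, because partial derivatives commute. A minimal sketch (assuming SymPy; the particular field $u$ is an arbitrary illustration):

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')

# A hypothetical smooth displacement field u(x); any smooth choice will do.
u = [x[0]**2 * x[1], sp.sin(x[2]) + x[0]*x[2], x[1]**3]

# Strain: the symmetric derivative of u.
eps = [[sp.Rational(1, 2) * (sp.diff(u[i], x[j]) + sp.diff(u[j], x[i]))
        for j in range(3)] for i in range(3)]

def W(F, i, j, k, l):
    """Saint-Venant tensor of a symmetric rank-2 field F."""
    return (sp.diff(F[i][j], x[k], x[l]) + sp.diff(F[k][l], x[i], x[j])
            - sp.diff(F[i][l], x[j], x[k]) - sp.diff(F[j][k], x[i], x[l]))

# Every component vanishes, as the compatibility condition predicts.
assert all(sp.simplify(W(eps, i, j, k, l)) == 0
           for i in range(3) for j in range(3)
           for k in range(3) for l in range(3))
print("Saint-Venant tensor vanishes for the symmetric derivative field.")
```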
Generalization to higher rank tensors Saint-Venant's compatibility condition can be thought of as an analogue, for symmetric tensor fields, of Poincaré's lemma for skew-symmetric tensor fields (differential forms). The result can be generalized to higher rank symmetric tensor fields.[8] Let F be a symmetric rank-k tensor field on an open set in n-dimensional Euclidean space, then the symmetric derivative is the rank k+1 tensor field defined by $(dF)_{i_{1}...i_{k}i_{k+1}}=F_{(i_{1}...i_{k},i_{k+1})}$ where we use the classical notation that indices following a comma indicate differentiation and groups of indices enclosed in brackets indicate symmetrization over those indices. The Saint-Venant tensor $W$ of a symmetric rank-k tensor field $T$ is defined by $W_{i_{1}..i_{k}j_{1}...j_{k}}=V_{(i_{1}..i_{k})(j_{1}...j_{k})}$ with $V_{i_{1}..i_{k}j_{1}...j_{k}}=\sum \limits _{p=0}^{k}(-1)^{p}{k \choose p}T_{i_{1}..i_{k-p}j_{1}...j_{p},j_{p+1}...j_{k}i_{k-p+1}...i_{k}}$ On a simply connected domain in Euclidean space $W=0$ implies that $T=dF$ for some rank k-1 symmetric tensor field $F$. References 1. N.I. Muskhelishvili, Some Basic Problems of the Mathematical Theory of Elasticity. Leyden: Noordhoff Intern. Publ., 1975. 2. C Amrouche, PG Ciarlet, L Gratie, S Kesavan, On Saint Venant's compatibility conditions and Poincaré's lemma, C. R. Acad. Sci. Paris, Ser. I, 342 (2006), 887-891. doi:10.1016/j.crma.2006.03.026 3. Giuseppe Geymonat, Francoise Krasucki, Hodge decomposition for symmetric matrix fields and the elasticity complex in Lipschitz domains,COMMUNICATIONS ON PURE AND APPLIED ANALYSIS, Volume 8, Number 1, January 2009, pp. 295–309 doi:10.3934/cpaa.2009.8.295 4. Philippe G. Ciarlet , Cristinel Mardare , Ming Shen, Recovery of a displacement field from its linearized strain tensor field in curvilinear coordinates, C. R. Acad. Sci. Paris, Ser. I 344 (2007) 535–540 5. D. V. Georgiyecskii and B. Ye. Pobedrya,The number of independent compatibility equations in the mechanics of deformable solids, Journal of Applied Mathematicsand Mechanics,68 (2004)941-946 6. Weisstein, Eric W. Riemann Tensor. From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/RiemannTensor.html 7. M Eastwood, A complex from linear elasticity, Rendiconti del circolo mathematico di Palermo, Ser II Suppl 63 (2000), pp23-29 8. V.A. Sharafutdinov, Integral Geometry of Tensor Fields, VSP 1994,ISBN 90-6764-165-0. Chapter 2.on-line version See also • Compatibility (mechanics)
Sainte-Laguë method The Webster method, also called the Sainte-Laguë method (French pronunciation: [sɛ̃t.la.ɡy]), is an apportionment method for allocating seats in a parliament among federal states, or among parties in a party-list proportional representation system. The method was first described in 1832 by the American statesman and senator Daniel Webster. In 1842 the method was adopted for proportional allocation of seats in United States congressional apportionment (Act of 25 June 1842, ch 46, 5 Stat. 491). The same method was independently invented in 1910 by the French mathematician André Sainte-Laguë. It seems that French and European literature was unaware of Webster until after World War II. This is the reason for the double name. Motivation Proportional electoral systems attempt to distribute seats in proportion to the votes for each political party, i.e. a party with 30% of votes would receive 30% of seats.
Exact proportionality is not possible because only whole seats can be distributed. Different apportionment methods, of which the Sainte-Laguë method is one, exist to distribute the seats according to the votes, and they show different levels of proportionality, different apportionment paradoxes, and different degrees of political fragmentation. The Sainte-Laguë method minimizes the average largest seats-to-votes ratio,[1] empirically shows the best proportionality behavior,[2] and gives more equal seats-to-votes ratios for different sized parties[3] among apportionment methods. Among other common methods, the D'Hondt method favours large parties and coalitions over small parties.[4][5][6][7] While favoring large parties reduces political fragmentation, this can be achieved with electoral thresholds as well. The Sainte-Laguë method also exhibits fewer apportionment paradoxes than largest remainder methods[8] such as the Hare quota method, and than other highest averages methods such as the D'Hondt method.[9]

Description After all the votes have been tallied, successive quotients are calculated for each party. The formula for the quotient is[10] ${\text{quotient}}={\frac {V}{2s+1}}$ where: • V is the total number of votes that party received, and • s is the number of seats that have been allocated so far to that party, initially 0 for all parties. Whichever party has the highest quotient gets the next seat allocated, and its quotient is recalculated. The process is repeated until all seats have been allocated. The Webster/Sainte-Laguë method does not ensure that a party receiving more than half the votes will win at least half the seats; nor does its modified form.[11] Often there is an electoral threshold; that is, in order to be allocated seats, a minimum percentage of votes must be gained.

Example In this example, 230,000 voters decide the disposition of 8 seats among 4 parties. Since 8 seats are to be allocated, each party's total votes are divided by 1, then by 3, then by 5 (and, if necessary, by 7, 9, 11, 13, and so on), following the formula above; in each round the party with the largest current quotient receives the next seat. For comparison, the "True proportion" column in the chart below shows the exact fractional numbers of seats due, calculated in proportion to the number of votes received. (For example, 100,000/230,000 × 8 = 3.48.) Round by round (one seat per round), the quotients and running seat totals are:

Round 1: A 100,000, B 80,000, C 30,000, D 20,000; seat to A (A: 1).
Round 2: A 33,333, B 80,000, C 30,000, D 20,000; seat to B (B: 1).
Round 3: A 33,333, B 26,667, C 30,000, D 20,000; seat to A (A: 2).
Round 4: A 20,000, B 26,667, C 30,000, D 20,000; seat to C (C: 1).
Round 5: A 20,000, B 26,667, C 10,000, D 20,000; seat to B (B: 2).
Round 6: A 20,000, B 16,000, C 10,000, D 20,000; seat to A (A: 3).
Round 7: A 14,286, B 16,000, C 10,000, D 20,000; seat to D (D: 1).
Round 8: A 14,286, B 16,000, C 10,000, D 6,667; seat to B (B: 3).

Seats won: Party A 3, Party B 3, Party C 1, Party D 1. (In rounds 6 and 7, A's quotient 100,000/5 and D's quotient 20,000/1 are both 20,000, so those two seats go to A and D in either order; the final allocation is unaffected.) The eight highest quotients overall, from 100,000 down to 16,000, are exactly the entries that earn seats.
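The allocation procedure just described is short enough to sketch in code. The following Python snippet is an illustration written for this article's example (the party labels, the dictionary-order tie-breaking, and the function name are choices made here, not part of the method itself); the optional first_divisor parameter corresponds to the modified method discussed further below.

def sainte_lague(votes, total_seats, first_divisor=1.0):
    """Highest-averages allocation with divisors 1, 3, 5, 7, ...
    Pass first_divisor=1.4 (or 1.2) for the modified Sainte-Lague variant.
    Ties are broken by dictionary order here, purely for illustration."""
    seats = {party: 0 for party in votes}
    for _ in range(total_seats):
        def quotient(party):
            s = seats[party]
            return votes[party] / (first_divisor if s == 0 else 2 * s + 1)
        winner = max(votes, key=quotient)
        seats[winner] += 1
    return seats

votes = {"A": 100_000, "B": 80_000, "C": 30_000, "D": 20_000}
print(sainte_lague(votes, 8))  # expected output: {'A': 3, 'B': 3, 'C': 1, 'D': 1}

Running it on the example above reproduces the 3-3-1-1 allocation.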
The chart below is an easy way to perform the calculation; each party's votes are divided by 1, 3 and 5, and the eight largest quotients, marked *, win the seats:

Denominator: /1; /3; /5; Seats won (*); True proportion
Party A: 100,000*; 33,333*; 20,000*; 3; 3.5
Party B: 80,000*; 26,667*; 16,000*; 3; 2.8
Party C: 30,000*; 10,000; 6,000; 1; 1.0
Party D: 20,000*; 6,667; 4,000; 1; 0.7
Total: seats won 8; true proportion 8

Summary of the party-list PR result under the Sainte-Laguë method:

Party; Popular vote; Number of seats; Seats %
Party A: 43.5%; 3; 37.5%
Party B: 34.8%; 3; 37.5%
Party C: 13.0%; 1; 12.5%
Party D: 8.7%; 1; 12.5%
Total: 100%; 8; 100%

In comparison, the D'Hondt method would have allocated four seats to Party A and no seats to Party D, reflecting the D'Hondt method's overrepresentation of larger parties.[10]

Properties When apportioning seats in proportional representation, it is particularly important to avoid bias between large parties and small parties, so as not to encourage strategic voting. André Sainte-Laguë showed theoretically that the Sainte-Laguë method has the lowest average bias in apportionment,[1] a finding confirmed both theoretically and empirically.[2][12]: Sec.5  The European Parliament (Representation) Act 2003 stipulates that each region must be allocated at least 3 seats and that the ratio of electors to seats should be as nearly as possible the same for each; the Commission found that the Sainte-Laguë method produced the smallest standard deviation when compared to the D'Hondt method and the Hare quota.[13][14]

Proportionality under Sainte-Laguë method The seats-to-votes ratio $a_{i}$ for a political party $i$ is the ratio between the fraction of seats $s_{i}$ and the fraction of votes $v_{i}$ for that party: $a_{i}={\frac {s_{i}}{v_{i}}}$ The Sainte-Laguë method approximates proportionality by optimizing the seats-to-votes ratio among all parties $i$ with a least squares approach. First, the difference between the seats-to-votes ratio for a party and the ideal seats-to-votes ratio is calculated and squared to obtain the error for the party $i$. To achieve equal representation of each voter, the ideal ratio of seats share to votes share is $1$. $error_{i}=(a_{i}-a_{ideal})^{2}=\left({\frac {s_{i}}{v_{i}}}-1\right)^{2}$ Second, the error for each party is weighted according to the vote share of that party, so that each voter is represented equally. In the last step, the errors for all parties are summed. This error is identical to the Sainte-Laguë Index. $error=\sum _{i}v_{i}*error_{i}=\sum _{i}{v_{i}*\left({\frac {s_{i}}{v_{i}}}-1\right)^{2}}$ It was shown[15] that this error is minimized by the Sainte-Laguë method.

Modified Sainte-Laguë method To reduce political fragmentation, some countries, e.g. Nepal, Norway and Sweden, change the quotient formula for parties that have not yet been allocated any seats (s = 0). These countries changed the quotient from V to V/1.4, though from the general 2018 elections onwards, Sweden has been using V/1.2.[16] That is, the modified method changes the sequence of divisors used in this method from (1, 3, 5, 7, ...) to (1.4, 3, 5, 7, ...). This makes it more difficult for small parties to earn their first (and possibly only) seat, compared to the unmodified Sainte-Laguë method. With the modified method, such small parties do not get any seats; these seats are instead given to a larger party.[10]

Norway further amends this system by utilizing a two-tier proportionality. The number of members to be returned from each of Norway's 19 constituencies (former counties) depends on the population and area of the county: each inhabitant counts one point, while each km² counts 1.8 points.
Furthermore, one seat from each constituency is allocated according to the national distribution of votes.[17]

History Webster proposed the method in the United States Congress in 1832 for proportional allocation of seats in United States congressional apportionment. In 1842 the method was adopted (Act of June 25, 1842, ch 46, 5 Stat. 491). It was later replaced by the Hamilton method, and in 1911 the Webster method was reintroduced.[12] The Webster and Sainte-Laguë methods should be treated as two forms of the same method with the same result: the Webster method is used for allocating seats based on states' populations, and the Sainte-Laguë method for allocating seats based on parties' votes.[18] Webster invented his method for legislative apportionment (allocating legislative seats to regions based on their share of the population) rather than elections (allocating legislative seats to parties based on their share of the votes), but this makes no difference to the calculations in the method.

Webster's method is defined in terms of a quota, as in the largest remainder method; in this method, the quota is called a "divisor". For a given value of the divisor, the population count for each region is divided by this divisor and then rounded to give the number of legislators to allocate to that region. In order to make the total number of legislators come out equal to the target number, the divisor is adjusted until the rounded allocations sum to the required total.

One way to determine the correct value of the divisor would be to start with a very large divisor, so that no seats are allocated after rounding. Then the divisor may be successively decreased until one seat, two seats, three seats and finally the total number of seats are allocated. The number of allocated seats for a given region increases from s to s + 1 exactly when the divisor equals the population of the region divided by s + 1/2, so at each step the next region to get a seat will be the one with the largest value of this quotient. That means that this successive adjustment method for implementing Webster's method allocates seats in the same order, and to the same regions, as the Sainte-Laguë method would allocate them.

In 1980 the German physicist Hans Schepers, at the time Head of the Data Processing Group of the German Bundestag, suggested that the distribution of seats according to d'Hondt be modified to avoid putting smaller parties at a disadvantage.[19] German media started using the term Schepers method, and later German literature usually calls it Sainte-Laguë/Schepers.[19]

Threshold for seats An election threshold can be set to reduce political fragmentation: any list party which does not receive at least a specified percentage of list votes will not be allocated any seats, even if it received enough votes that it would otherwise have won a seat. Examples of countries using the Sainte-Laguë method with a threshold are Germany and New Zealand (5%), although the threshold does not apply if a party wins at least one electorate seat in New Zealand or three electorate seats in Germany. Sweden uses a modified Sainte-Laguë method with a 4% threshold, and a 12% threshold in individual constituencies (i.e. a political party can gain representation with only minuscule support nationally if its vote share in at least one constituency exceeds 12%). Norway has a threshold of 4% to qualify for leveling seats that are allocated according to the national distribution of votes.
This means that even though a party is below the threshold of 4% nationally, they can still get seats from constituencies in which they are particularly popular. Usage by country The Webster/Sainte-Laguë method is currently used in Bosnia and Herzegovina, Ecuador, Indonesia,[20] Kosovo, Latvia, Nepal,[21] New Zealand, Norway and Sweden. In Germany it is used on the federal level for the Bundestag, and on the state level for the legislatures of Baden-Württemberg, Bavaria, Bremen, Hamburg, North Rhine-Westphalia, Rhineland-Palatinate, and Schleswig-Holstein. In Denmark it is used for leveling seats in the Folketing, correcting the disproportionality of the D'Hondt method for the other seats.[22] Some cantons in Switzerland use the Sainte-Laguë method for biproportional apportionment between electoral districts and for votes to seats allocation.[23] The Webster/Sainte-Laguë method was used in Bolivia in 1993, in Poland in 2001, and the Palestinian Legislative Council in 2006. The United Kingdom Electoral Commission has used the method from 2003 to 2013 to distribute British seats in the European Parliament to constituent countries of the United Kingdom and the English regions.[24][25] The method has been proposed by the Green Party in Ireland as a reform for use in Dáil Éireann elections,[26] and by the United Kingdom Conservative–Liberal Democrat coalition government in 2011 as the method for calculating the distribution of seats in elections to the House of Lords, the country's upper house of parliament.[27] Comparison to other methods The method belongs to the class of highest-averages methods. It is similar to the Jefferson/D'Hondt method, but uses different divisors. The Jefferson/D'Hondt method favors larger parties while the Webster/Sainte-Laguë method doesn't.[10] The Webster/Sainte-Laguë method is generally seen as more proportional, but risks an outcome where a party with more than half the votes can win fewer than half the seats.[28] When there are two parties, the Webster method is the unique divisor method which is identical to the Hamilton method.[29]: Sub.9.10  See also • Hagenbach-Bischoff quota References 1. Sainte-Laguë, André. "La représentation proportionnelle et la méthode des moindres carrés." Annales scientifiques de l'école Normale Supérieure. Vol. 27. 1910. 2. Pennisi, Aline. "Disproportionality indexes and robustness of proportional allocation methods." Electoral Studies 17.1 (1998): 3-19. 3. Pukelsheim, Friedrich (2007). "Seat bias formulas in proportional representation systems" (PDF). 4th ECPR General Conference. Archived from the original (PDF) on 7 February 2009. 4. Pukelsheim, Friedrich (2007). "Seat bias formulas in proportional representation systems" (PDF). 4th ECPR General Conference. Archived from the original (PDF) on 7 February 2009. 5. Schuster, Karsten; Pukelsheim, Friedrich; Drton, Mathias; Draper, Norman R. (2003). "Seat biases of apportionment methods for proportional representation" (PDF). Electoral Studies. 22 (4): 651–676. doi:10.1016/S0261-3794(02)00027-6. Archived from the original (PDF) on 2016-02-15. Retrieved 2016-02-02. 6. Benoit, Kenneth (2000). "Which Electoral Formula Is the Most Proportional? A New Look with New Evidence" (PDF). Political Analysis. 8 (4): 381–388. doi:10.1093/oxfordjournals.pan.a029822. Archived from the original (PDF) on 2018-07-28. Retrieved 2016-02-11. 7. Lijphart, Arend (1990). "The Political Consequences of Electoral Laws, 1945-85". The American Political Science Review. 84 (2): 481–496. doi:10.2307/1963530. 
JSTOR 1963530. S2CID 146438586. 8. Balinski, Michel; H. Peyton Young (1982). Fair Representation: Meeting the Ideal of One Man, One Vote. Yale Univ Pr. ISBN 0-300-02724-9. 9. Pukelsheim, Friedrich (2017), Pukelsheim, Friedrich (ed.), "From Reals to Integers: Rounding Functions and Rounding Rules", Proportional Representation: Apportionment Methods and Their Applications, Cham: Springer International Publishing, pp. 59–70, doi:10.1007/978-3-319-64707-4_3, ISBN 978-3-319-64707-4, retrieved 2021-09-01 10. Lijphart, Arend (2003), "Degrees of proportionality of proportional representation formulas", in Grofman, Bernard; Lijphart, Arend (eds.), Electoral Laws and Their Political Consequences, Agathon series on representation, vol. 1, Algora Publishing, pp. 170–179, ISBN 9780875862675 See in particular the section "Sainte-Lague", pp. 174–175. 11. Miller, Nicholas R. (February 2013), "Election inversions under proportional representation", Annual Meeting of the Public Choice Society, New Orleans, March 8-10, 2013 (PDF). 12. Balinski, Michel L.; Peyton, Young (1982). Fair Representation: Meeting the Ideal of One Man, One Vote. 13. "Distribution of UK Members of the European Parliament ahead of the European elections". European Parliament. 2007-06-04. Archived from the original on 2019-07-04. 14. McLean, Iain (1 November 2008). "Don't let the lawyers do the math: Some problems of legislative districting in the UK and the USA". Mathematical and Computer Modelling. 48 (9): 1446–1454. doi:10.1016/j.mcm.2008.05.025. ISSN 0895-7177. 15. Sainte-Laguë, André. "La représentation proportionnelle et la méthode des moindres carrés." Annales scientifiques de l'école Normale Supérieure. Vol. 27. 1910. 16. Holmberg, Kaj (2019), "A new method for optimal proportional representation". Linköping, Sweden: Linköping University Department of Mathematics, p.8. 17. Norway's Ministry of Local Government website; Stortinget; General Elections; The main features of the Norwegian electoral system; accessed 22 August 2009 18. Badie, Bertrand; Berg-Schlosser, Dirk; Morlino, Leonardo, eds. (2011), International Encyclopedia of Political Science, Volume 1, SAGE, p. 754, ISBN 9781412959636, Mathematically, divisor methods for allocating seats to parties on the basis of party vote shares are identical to divisor methods for allocating seats to geographic units on the basis of the unit's share of the total population. ... Similarly, the Sainte-Laguë method is identical to a method devised by the American legislator Daniel Webster. 19. "Sainte-Laguë/Schepers". The Federal Returning Officer of Germany. Retrieved 28 August 2021. 20. "New votes-to-seats system makes elections 'fairer'". The Jakarta Post. 28 May 2018. Retrieved 19 April 2019. 21. Sainte-Laguë method to decide PR seats, Ram Kumar Kamat, 2022 22. "Danish Parliamentary Election Law". 23. Bericht 09.1775.02 der vorberatenden Spezialkommission 24. "Distribution of UK MEPs between electoral regions" (PDF). Electoral Commission. July 2013. Archived (PDF) from the original on 2021-09-04. Retrieved 21 December 2019. 25. "European Parliament (Number of MEPs and Distribution between Electoral Regions) (United Kingdom and Gibraltar) Order 2008 - Hansard". hansard.parliament.uk. 26. "Ireland's Green Party website". Archived from the original on 2011-07-21. Retrieved 2011-02-20. 27. "House of Lords Reform Draft Bill" (PDF). Cabinet Office. May 2011. p. 16. 28. 
For example with three seats, a 55-25-20 vote is seen to be more proportionally represented by an allocation of 1-1-1 seats than by 2-1-0. 29. Pukelsheim, Friedrich (2017), Pukelsheim, Friedrich (ed.), "Securing System Consistency: Coherence and Paradoxes", Proportional Representation: Apportionment Methods and Their Applications, Cham: Springer International Publishing, pp. 159–183, doi:10.1007/978-3-319-64707-4_9, ISBN 978-3-319-64707-4, retrieved 2021-09-02 External links • Excel Sainte-Laguë calculator • Seats Calculator with the Sainte-Laguë method • Java implementation of Webster's method at cut-the-knot • Elections New Zealand explanation of Sainte-Laguë • Java D'Hondt, Saint-Lague and Hare-Niemeyer calculator
Saito–Kurokawa lift In mathematics, the Saito–Kurokawa lift (or lifting) takes elliptic modular forms to Siegel modular forms of degree 2. The existence of this lifting was conjectured in 1977 independently by Hiroshi Saito and Nobushige Kurokawa (1978). Its existence was almost proved by Maass (1979a, 1979b, 1979c), and Andrianov (1979) and Zagier (1981) completed the proof. Statement The Saito–Kurokawa lift σk takes level 1 modular forms f of weight 2k − 2 to level 1 Siegel modular forms of degree 2 and weight k. The L-functions (when f is a Hecke eigenform) are related by L(s,σk(f)) = ζ(s − k + 2)ζ(s − k + 1)L(s, f). The Saito–Kurokawa lift can be constructed as the composition of the following three mappings: 1. The Shimura correspondence from level 1 modular forms of weight 2k − 2 to a space of level 4 modular forms of weight k − 1/2 in the Kohnen plus-space. 2. A map from the Kohnen plus-space to the space of Jacobi forms of index 1 and weight k, studied by Eichler and Zagier. 3. A map from the space of Jacobi forms of index 1 and weight k to the Siegel modular forms of degree 2, introduced by Maass. The Saito–Kurokawa lift can be generalized to forms of higher level. The image is the Spezialschar (special band), the space of Siegel modular forms whose Fourier coefficients satisfy $a{\begin{pmatrix}n&t/2\\t/2&m\end{pmatrix}}=\sum _{d\mid t,m,n}d^{k-1}a{\begin{pmatrix}1&t/2d\\t/2d&nm/d^{2}\end{pmatrix}}.$ See also • Doi–Naganuma lifting, a similar lift to Hilbert modular forms. • Ikeda lift, a generalization to Siegel modular forms of higher degree. References • Andrianov, Anatolii N. (1979), "Modular descent and the Saito-Kurokawa conjecture", Invent. Math., 53 (3): 267–280, doi:10.1007/BF01389767, MR 0549402 • Kurokawa, Nobushige (1978), "Examples of eigenvalues of Hecke operators on Siegel cusp forms of degree two", Invent. Math., 49 (2): 149–165, doi:10.1007/bf01403084, MR 0511188 • Maass, Hans (1979a), "Über eine Spezialschar von Modulformen zweiten Grades", Invent. Math., 52 (1): 95–104, doi:10.1007/bf01389857, MR 0532746 • Maass, Hans (1979b), "Über eine Spezialschar von Modulformen zweiten Grades. II", Invent. Math., 53 (3): 249–253, doi:10.1007/bf01389765, MR 0549400 • Maass, Hans (1979c), "Über eine Spezialschar von Modulformen zweiten Grades. III", Invent. Math., 53 (3): 255–265, doi:10.1007/bf01389766, MR 0549401 • Zagier, D. (1981), "Sur la conjecture de Saito-Kurokawa (d'après H. Maass)", Seminar on Number Theory, Paris 1979–80, Progr. Math., vol. 12, Boston, Mass.: Birkhäuser, pp. 371–394, MR 0633910
Sakura Schafer-Nameki Sakura Schafer-Nameki is a German mathematical physicist working in string theory and supersymmetric gauge theory. She works at the University of Oxford as a Professor of Mathematical Physics in the Mathematical Institute and as a senior research fellow of Wadham College, Oxford.[1] Sakura Schafer-Nameki. Alma mater: University of Stuttgart, University of Cambridge. Scientific career. Institutions: University of Oxford. Thesis: D-Branes in Boundary Field Theory (2003). Doctoral advisor: Peter Goddard. Website: www.maths.ox.ac.uk/people/sakura.schafer-nameki Early life and education Although partly of Japanese descent, Schafer-Nameki is originally from Swabia in Germany.[2] She studied both physics and mathematics at the University of Stuttgart from 1995 to 1998. After coming to the University of Cambridge for the Mathematical Tripos, which she passed with distinction in 1999, she remained at Cambridge for doctoral studies.[3] She completed her Ph.D. in 2003; her dissertation, D-Branes in Boundary Field Theory, was supervised by Peter Goddard.[4] Career After completing her doctorate, Schafer-Nameki became a postdoctoral researcher at the University of Hamburg, a Postdoctoral Prize Fellow at the California Institute of Technology, and a senior postdoctoral fellow at the Kavli Institute for Theoretical Physics. She took a position as a lecturer at King's College London in 2010, and was promoted to reader in 2014.[3] In 2016 she moved to Oxford as Professor of Mathematical Physics and Tutorial Fellow of Wadham College,[3][2] becoming a senior research fellow at the college in 2020.[1] Her research combines string theory and geometry.[5] She was the principal investigator for the five-year European Research Council project "Higgs bundles: Supersymmetric Gauge Theories and Geometry" which began in 2016.[6][7] In 2020 she joined the Simons Collaboration on Special Holonomy in Geometry, Analysis, and Physics as one of its Principal Investigators.[5] References 1. "Sakura Schafer-Nameki", Fellows and academic staff, Wadham College, Oxford, retrieved 2022-02-28 2. "Sakura Schafer-Nameki", New Fellows, Wadham Gazette: 160, 2016 – via Issuu.com 3. Curriculum vitae (PDF), retrieved 2020-07-17 4. Sakura Schafer-Nameki at the Mathematics Genealogy Project 5. "Our Team". Simons Collaboration on Special Holonomy in Geometry, Analysis, and Physics, Duke University. Retrieved 2022-05-12. 6. "ERC Funded Projects". European Research Council. Retrieved 2022-05-12. 7. "Higgs Bundles: Supersymmetric Gauge Theories and Geometry". Mathematical Institute, University of Oxford. Retrieved 2022-05-12. External links • Home page • Sakura Schafer-Nameki publications indexed by Google Scholar
Salem number In mathematics, a Salem number is a real algebraic integer α > 1 whose conjugate roots all have absolute value no greater than 1, and at least one of which has absolute value exactly 1. Salem numbers are of interest in Diophantine approximation and harmonic analysis. They are named after Raphaël Salem. Properties Because it has a root of absolute value 1, the minimal polynomial for a Salem number must be reciprocal. This implies that 1/α is also a root, and that all other roots have absolute value exactly one. As a consequence α must be a unit in the ring of algebraic integers, being of norm 1. Every Salem number is a Perron number (a real algebraic number greater than one all of whose conjugates have smaller absolute value). Relation with Pisot–Vijayaraghavan numbers The smallest known Salem number is the largest real root of Lehmer's polynomial (named after Derrick Henry Lehmer) $P(x)=x^{10}+x^{9}-x^{7}-x^{6}-x^{5}-x^{4}-x^{3}+x+1,$ which is about x = 1.17628: it is conjectured that it is indeed the smallest Salem number, and the smallest possible Mahler measure of an irreducible non-cyclotomic polynomial.[1] Lehmer's polynomial is a factor of the shorter 12th-degree polynomial, $Q(x)=x^{12}-x^{7}-x^{6}-x^{5}+1,$ all twelve roots of which satisfy the relation[2] $x^{630}-1={\frac {(x^{315}-1)(x^{210}-1)(x^{126}-1)^{2}(x^{90}-1)(x^{3}-1)^{3}(x^{2}-1)^{5}(x-1)^{3}}{(x^{35}-1)(x^{15}-1)^{2}(x^{14}-1)^{2}(x^{5}-1)^{6}\,x^{68}}}$ Salem numbers can be constructed from Pisot–Vijayaraghavan numbers. To recall, the smallest of the latter is the unique real root of the cubic polynomial, $x^{3}-x-1,$ known as the plastic number and approximately equal to 1.324718. This can be used to generate a family of Salem numbers including the smallest one found so far. The general approach is to take the minimal polynomial P(x) of a Pisot–Vijayaraghavan number and its reciprocal polynomial, P*(x), and solve the equation, $x^{n}P(x)=\pm P^{*}(x)\,$ for integral n above a bound. Subtracting one side from the other, factoring, and disregarding trivial factors will then yield the minimal polynomial of certain Salem numbers. For example, using the negative case of the above, $x^{n}(x^{3}-x-1)=-(x^{3}+x^{2}-1)$ then for n = 8, this factors as, $(x-1)(x^{10}+x^{9}-x^{7}-x^{6}-x^{5}-x^{4}-x^{3}+x+1)=0$ where the decic is Lehmer's polynomial. Using higher n will yield a family with a root approaching the plastic number. This can be better understood by taking nth roots of both sides, $x(x^{3}-x-1)^{1/n}=\pm (x^{3}+x^{2}-1)^{1/n}$ so as n goes higher, x will approach the solution of x3 − x − 1 = 0. If the positive case is used, then x approaches the plastic number from the opposite direction. Using the minimal polynomial of the next smallest Pisot–Vijayaraghavan number gives, $x^{n}(x^{4}-x^{3}-1)=-(x^{4}+x-1)$ which for n = 7 factors as, $(x-1)(x^{10}-x^{6}-x^{5}-x^{4}+1)=0$ a decic not generated in the previous and has the root x = 1.216391... which is the 5th smallest known Salem number. As n → infinity, this family in turn tends towards the larger real root of x4 − x3 − 1 = 0. References 1. Borwein (2002) p.16 2. D. Bailey and D. Broadhurst, A Seventeenth Order Polylogarithm Ladder • Borwein, Peter (2002). Computational Excursions in Analysis and Number Theory. CMS Books in Mathematics. Springer-Verlag. ISBN 0-387-95444-9. Zbl 1020.12001. Chap. 3. • Boyd, David (2001) [1994], "Salem number", Encyclopedia of Mathematics, EMS Press • M.J. Mossinghoff. "Small Salem numbers". Retrieved 2016-01-07. 
• Salem, R. (1963). Algebraic numbers and Fourier analysis. Heath mathematical monographs. Boston, MA: D. C. Heath and Company. Zbl 0126.07802.
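The factorization claim above is easy to check by computer. The following sketch (an illustration added here; it assumes Python with the SymPy library) verifies that the negative case with the plastic number's minimal polynomial and n = 8 yields Lehmer's polynomial times the trivial factor x − 1:

from sympy import symbols, expand, factor

x = symbols('x')
P = x**3 - x - 1                      # minimal polynomial of the plastic number
neg_reciprocal = -(x**3 + x**2 - 1)   # -P*(x), the negated reciprocal polynomial
n = 8
# Bring both sides of x**n * P(x) = -P*(x) to one side and factor the difference.
print(factor(expand(x**n * P - neg_reciprocal)))
# expected: (x - 1)*(x**10 + x**9 - x**7 - x**6 - x**5 - x**4 - x**3 + x + 1)

Replacing n = 8 by larger values gives the other members of the family described above, whose Salem roots approach the plastic number.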
Salem–Spencer set In mathematics, and in particular in arithmetic combinatorics, a Salem-Spencer set is a set of numbers no three of which form an arithmetic progression. Salem–Spencer sets are also called 3-AP-free sequences or progression-free sets. They have also been called non-averaging sets,[1][2] but this term has also been used to denote a set of integers none of which can be obtained as the average of any subset of the other numbers.[3] Salem-Spencer sets are named after Raphaël Salem and Donald C. Spencer, who showed in 1942 that Salem–Spencer sets can have nearly-linear size. However a later theorem of Klaus Roth shows that the size is always less than linear. Examples For $k=1,2,\dots $ the smallest values of $n$ such that the numbers from $1$ to $n$ have a $k$-element Salem-Spencer set are 1, 2, 4, 5, 9, 11, 13, 14, 20, 24, 26, 30, 32, 36, ... (sequence A065825 in the OEIS) For instance, among the numbers from 1 to 14, the eight numbers {1, 2, 4, 5, 10, 11, 13, 14} form the unique largest Salem-Spencer set.[4] This example is shifted by adding one to the elements of an infinite Salem–Spencer set, the Stanley sequence 0, 1, 3, 4, 9, 10, 12, 13, 27, 28, 30, 31, 36, 37, 39, 40, ... (sequence A005836 in the OEIS) of numbers that, when written as a ternary number, use only the digits 0 and 1. This sequence is the lexicographically first infinite Salem–Spencer set.[5] Another infinite Salem–Spencer set is given by the cubes 0, 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, ... (sequence A000578 in the OEIS) It is a theorem of Leonhard Euler that no three cubes are in arithmetic progression.[6] Size See also: Erdős conjecture on arithmetic progressions § Progress and related results, and Roth's theorem on arithmetic progressions § Improving Bounds In 1942, Salem and Spencer published a proof that the integers in the range from $1$ to $n$ have large Salem–Spencer sets, of size $n/e^{O(\log n/\log \log n)}$.[7] The denominator of this expression uses big O notation, and grows more slowly than any power of $n$, so the sets found by Salem and Spencer have a size that is nearly linear. This bound disproved a conjecture of Paul Erdős and Pál Turán that the size of such a set could be at most $n^{1-\delta }$ for some $\delta >0$.[4][8] The construction of Salem and Spencer was improved by Felix Behrend in 1946, who found sets of size $n/e^{O({\sqrt {\log n}})}$.[9] In 1952, Klaus Roth proved Roth's theorem establishing that the size of a Salem-Spencer set must be $O(n/\log \log n)$.[10] Therefore, although the sets constructed by Salem, Spencer, and Behrend have sizes that are nearly linear, it is not possible to improve them and find sets whose size is actually linear. 
This result became a special case of Szemerédi's theorem on the density of sets of integers that avoid longer arithmetic progressions.[4] To distinguish Roth's bound on Salem–Spencer sets from Roth's theorem on Diophantine approximation of algebraic numbers, this result has been called Roth's theorem on arithmetic progressions.[11] After several additional improvements to Roth's theorem,[12][13][14][15] the size of a Salem–Spencer set has been proven to be $O{\bigl (}n(\log \log n)^{4}/\log n{\bigr )}$.[16] An even better bound of $O{\bigl (}n/(\log n)^{1+\delta }{\bigr )}$ (for some $\delta >0$ that has not been explicitly computed) was announced in 2020 but has not yet been refereed and published.[17] In 2023, a new bound of $2^{-O((\log N)^{c})}\cdot N$ was found,[18][19] and four days later the result was simplified and slightly improved to $\exp(-c(\log N)^{1/11})N$.[20] These results have not yet been refereed and published either. Construction A simple construction for a Salem–Spencer set (of size considerably smaller than Behrend's bound) is to choose the ternary numbers that use only the digits 0 and 1, not 2. Such a set must be progression-free, because if two of its elements $x$ and $y$ are the first and second members of an arithmetic progression, the third member must have the digit two at the position of the least significant digit where $x$ and $y$ differ.[4] The illustration shows a set of this form, for the three-digit ternary numbers (shifted by one to make the smallest element 1 instead of 0). Behrend's construction uses a similar idea, for a larger odd radix $2d-1$. His set consists of the numbers whose digits are restricted to the range from $0$ to $d-1$ (so that addition of these numbers has no carries), with the extra constraint that the sum of the squares of the digits is some chosen value $k$.[9] If the digits of each number are thought of as coordinates of a vector, this constraint describes a sphere in the resulting vector space, and by convexity the average of two distinct values on this sphere will be interior to the sphere rather than on it.[21] Therefore, if two elements of Behrend's set are the endpoints of an arithmetic progression, the middle value of the progression (their average) will not be in the set. Thus, the resulting set is progression-free.[9] With a careful choice of $d$, and a choice of $k$ as the most frequently occurring sum of squares of digits, Behrend achieves his bound.[9] In 1953, Leo Moser proved that there is a single infinite Salem–Spencer sequence achieving the same asymptotic density on every prefix as Behrend's construction.[1] By considering the convex hull of points inside a sphere, rather than the set of points on a sphere, it is possible to improve the construction by a factor of ${\sqrt {\log n}}$.[21][22] However, this does not affect the size bound in the form stated above. Generalization The notion of Salem–Spencer sets (3-AP-free sets) can be generalized to $k$-AP-free sets, sets in which the only $k$-element arithmetic progressions are the trivial ones whose elements are all equal. Rankin (1961) gave constructions of large $k$-AP-free sets.[23] Computational results Gasarch, Glenn, and Kruskal have performed a comparison of different computational methods for large subsets of $\{1,\dots n\}$ with no arithmetic progression.[2] Using these methods they found the exact size of the largest such set for $n\leq 187$.
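The base-3 construction described earlier is also easy to check by brute force; a small Python sketch (illustrative only, with function names invented here) lists the numbers below a bound whose ternary digits are all 0 or 1 and confirms that no three of them form an arithmetic progression:

def ternary_digits_01(limit):
    """Numbers in [0, limit) whose base-3 representation uses only digits 0 and 1."""
    result = []
    for m in range(limit):
        k = m
        while k > 0 and k % 3 != 2:
            k //= 3
        if k == 0:
            result.append(m)
    return result

def has_three_term_progression(values):
    """True if some three distinct elements form an arithmetic progression."""
    present = set(values)
    return any(x < y and 2 * y - x in present for x in values for y in values)

s = ternary_digits_01(81)
print(len(s), has_three_term_progression(s))  # expected: 16 False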
Their results include several new bounds for different values of $n$, found by branch-and-bound algorithms that use linear programming and problem-specific heuristics to bound the size that can be achieved in any branch of the search tree. One heuristic that they found to be particularly effective was the thirds method, in which two shifted copies of a Salem–Spencer set for $n$ are placed in the first and last thirds of a set for $3n$.[2] Applications [Figure: Five queens on the main diagonal of a chessboard, attacking all other squares. The vacant squares on the diagonal are in rows 1, 3, and 7, an all-odd Salem–Spencer set.] In connection with the Ruzsa–Szemerédi problem, Salem–Spencer sets have been used to construct dense graphs in which each edge belongs to a unique triangle.[24] Salem–Spencer sets have also been used in theoretical computer science. They have been used in the design of the Coppersmith–Winograd algorithm for fast matrix multiplication,[25] and in the construction of efficient non-interactive zero-knowledge proofs.[26] Recently, they have been used to show size lower bounds for graph spanners,[27] and the strong exponential time hypothesis based hardness of the subset sum problem.[28] These sets can also be applied in recreational mathematics to a mathematical chess problem of placing as few queens as possible on the main diagonal of an $n\times n$ chessboard so that all squares of the board are attacked. The set of diagonal squares that remain unoccupied must form a Salem–Spencer set, in which all values have the same parity (all odd or all even). The smallest possible set of queens is the complement of the largest Salem–Spencer subset of the odd numbers in $\{1,\dots n\}$. This Salem–Spencer subset can be found by doubling and subtracting one from the values in a Salem–Spencer subset of all the numbers in $\{1,\dots n/2\}.$[29] References 1. Moser, Leo (1953), "On non-averaging sets of integers", Canadian Journal of Mathematics, 5: 245–252, doi:10.4153/cjm-1953-027-0, MR 0053140, S2CID 124488483 2. Gasarch, William; Glenn, James; Kruskal, Clyde P. (2008), "Finding large 3-free sets. I. The small n case" (PDF), Journal of Computer and System Sciences, 74 (4): 628–655, doi:10.1016/j.jcss.2007.06.002, MR 2417032 3. Abbott, H. L. (1976), "On a conjecture of Erdős and Straus on non-averaging sets of integers", Proceedings of the Fifth British Combinatorial Conference (Univ. Aberdeen, Aberdeen, 1975), Congressus Numerantium, vol. XV, Winnipeg, Manitoba: Utilitas Math., pp. 1–4, MR 0406967 4. Dybizbański, Janusz (2012), "Sequences containing no 3-term arithmetic progressions", Electronic Journal of Combinatorics, 19 (2): P15:1–P15:5, doi:10.37236/2061, MR 2928630 5. Sloane, N. J. A. (ed.), "Sequence A005836", The On-Line Encyclopedia of Integer Sequences, OEIS Foundation 6. Erdős, P.; Lev, V.; Rauzy, G.; Sándor, C.; Sárközy, A. (1999), "Greedy algorithm, arithmetic progressions, subset sums and divisibility", Discrete Mathematics, 200 (1–3): 119–135, doi:10.1016/S0012-365X(98)00385-9, MR 1692285 7. Salem, R.; Spencer, D. C. (December 1942), "On Sets of Integers Which Contain No Three Terms in Arithmetical Progression", Proceedings of the National Academy of Sciences, 28 (12): 561–563, Bibcode:1942PNAS...28..561S, doi:10.1073/pnas.28.12.561, PMC 1078539, PMID 16588588 8.
Erdős, Paul; Turán, Paul (1936), "On some sequences of integers" (PDF), Journal of the London Mathematical Society, 11 (4): 261–264, doi:10.1112/jlms/s1-11.4.261, MR 1574918 9. Behrend, F. A. (December 1946), "On sets of integers which contain no three terms in arithmetical progression", Proceedings of the National Academy of Sciences, 32 (12): 331–332, Bibcode:1946PNAS...32..331B, doi:10.1073/pnas.32.12.331, PMC 1078964, PMID 16578230 10. Roth, Klaus (1952), "Sur quelques ensembles d'entiers", Comptes rendus de l'Académie des Sciences, 234: 388–390, MR 0046374 11. Bloom, Thomas; Sisask, Olaf (2019), "Logarithmic bounds for Roth's theorem via almost-periodicity", Discrete Analysis, 2019 (4), arXiv:1810.12791v2, doi:10.19086/da.7884, S2CID 119583263 12. Heath-Brown, D. R. (1987), "Integer sets containing no arithmetic progressions", Journal of the London Mathematical Society, Second Series, 35 (3): 385–394, doi:10.1112/jlms/s2-35.3.385, MR 0889362 13. Szemerédi, E. (1990), "Integer sets containing no arithmetic progressions", Acta Mathematica Hungarica, 56 (1–2): 155–158, doi:10.1007/BF01903717, MR 1100788 14. Bourgain, J. (1999), "On triples in arithmetic progression", Geometric and Functional Analysis, 9 (5): 968–984, doi:10.1007/s000390050105, MR 1726234, S2CID 392820 15. Sanders, Tom (2011), "On Roth's theorem on progressions", Annals of Mathematics, Second Series, 174 (1): 619–636, arXiv:1011.0104, doi:10.4007/annals.2011.174.1.20, MR 2811612, S2CID 53331882 16. Bloom, T. F. (2016), "A quantitative improvement for Roth's theorem on arithmetic progressions", Journal of the London Mathematical Society, Second Series, 93 (3): 643–663, arXiv:1405.5800, doi:10.1112/jlms/jdw010, MR 3509957, S2CID 27536138 17. Bloom, Thomas; Sisask, Olaf (2020), Breaking the logarithmic barrier in Roth's theorem on arithmetic progressions, arXiv:2007.03528; see also Kalai, Gil (July 8, 2020), "To cheer you up in difficult times 7: Bloom and Sisask just broke the logarithm barrier for Roth's theorem!", Combinatorics and more 18. Kelley, Zander; Meka, Raghu (2023-02-10). "Strong Bounds for 3-Progressions". arXiv:2302.05537 [math.NT]. 19. Sloman, Leila (2023-03-21). "Surprise Computer Science Proof Stuns Mathematicians". Quanta Magazine. 20. Bloom, Thomas F.; Sisask, Olof (2023-02-14). "The Kelley--Meka bounds for sets free of three-term arithmetic progressions". arXiv:2302.07211 [math.NT]. 21. Elkin, Michael (2011), "An improved construction of progression-free sets", Israel Journal of Mathematics, 184: 93–128, arXiv:0801.4310, doi:10.1007/s11856-011-0061-1, MR 2823971 22. Green, Ben; Wolf, Julia (2010), "A note on Elkin's improvement of Behrend's construction", in Chudnovsky, David; Chudnovsky, Gregory (eds.), Additive number theory: Festschrift in honor of the sixtieth birthday of Melvyn B. Nathanson, New York: Springer, pp. 141–144, arXiv:0810.0732, doi:10.1007/978-0-387-68361-4_9, MR 2744752, S2CID 10475217 23. Rankin, R. A. (1961), "XXIV: Sets of integers containing not more than a given number of terms in arithmetical progression", Proceedings of the Royal Society of Edinburgh, Section A: Mathematical and Physical Sciences, 65 (4): 332–344, doi:10.1017/S0080454100017726, S2CID 122037820 24. Ruzsa, I. Z.; Szemerédi, E. (1978), "Triple systems with no six points carrying three triangles", Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, Colloq. Math. Soc. János Bolyai, vol. 18, Amsterdam and New York: North-Holland, pp. 939–945, MR 0519318 25. 
Coppersmith, Don; Winograd, Shmuel (1990), "Matrix multiplication via arithmetic progressions", Journal of Symbolic Computation, 9 (3): 251–280, doi:10.1016/S0747-7171(08)80013-2, MR 1056627 26. Lipmaa, Helger (2012), "Progression-free sets and sublinear pairing-based non-interactive zero-knowledge arguments", in Cramer, Ronald (ed.), Theory of Cryptography: 9th Theory of Cryptography Conference, TCC 2012, Taormina, Sicily, Italy, March 19–21, 2012, Proceedings, Lecture Notes in Computer Science, vol. 7194, Springer, pp. 169–189, doi:10.1007/978-3-642-28914-9_10 27. Abboud, Amir; Bodwin, Greg (2017), "The 4/3 additive spanner exponent is tight", Journal of the ACM, 64 (4): A28:1–A28:20, arXiv:1511.00700, doi:10.1145/3088511, MR 3702458, S2CID 209870748 28. Abboud, Amir; Bringmann, Karl; Hermelin, Danny; Shabtay, Dvir (2019), "SETH-based lower bounds for subset sum and bicriteria path", in Chan, Timothy M. (ed.), Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, Society for Industrial and Applied Mathematics, pp. 41–57, arXiv:1704.04546, doi:10.1137/1.9781611975482.3, S2CID 15802062 29. Cockayne, E. J.; Hedetniemi, S. T. (1986), "On the diagonal queens domination problem", Journal of Combinatorial Theory, Series A, 42 (1): 137–139, doi:10.1016/0097-3165(86)90012-9, MR 0843468 External links • Nonaveraging sets search, Jarek Wroblewski, University of Wrocław
Salinon The salinon (meaning 'salt-cellar' in Greek) is a geometrical figure that consists of four semicircles. It was first introduced in the Book of Lemmas, a work attributed to Archimedes.[1] Construction Let A, D, E, and B be four points on a line in the plane, in that order, with AD = EB. Let O be the midpoint of segment AB (and of DE). Draw semicircles above line AB with diameters AB, AD, and EB, and another semicircle below with diameter DE. A salinon is the figure bounded by these four semicircles.[2] Properties Area Archimedes introduced the salinon in his Book of Lemmas by applying Book II, Proposition 10 of Euclid's Elements. Archimedes noted that "the area of the figure bounded by the circumferences of all the semicircles [is] equal to the area of the circle on CF as diameter."[3] (Here CF is the segment of the salinon's axis of symmetry through O, joining the top of the large semicircle to the bottom of the lower semicircle, so that CF = r1 + r2 in the notation below.) Namely, if $r_{1}$ is the radius of the large enclosing semicircle, and $r_{2}$ is the radius of the small central semicircle, then the area of the salinon is:[4] $A={\frac {1}{4}}\pi \left(r_{1}+r_{2}\right)^{2}.$ Arbelos Should points D and E converge with O, the figure would become an arbelos, another of Archimedes' creations, symmetric about the vertical axis through O. See also • Lune of Hippocrates References 1. Heath, T. L. (1897). "On the Salinon of Archimedes". The Journal of Philology. 25 (50): 161–163. 2. Nelsen, Roger B. (April 2002). "Proof without words: The area of a salinon". Mathematics Magazine. 75 (2): 130. doi:10.2307/3219147. JSTOR 3219147. 3. Bogomolny, Alexander. "Salinon: From Archimedes' Book of Lemmas". Cut-the-knot. Retrieved 2008-04-15. 4. Weisstein, Eric W. "Salinon". MathWorld. External links • L'arbelos. Partie II by Hamza Khelif at www.images.math.cnrs.fr of CNRS
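A quick way to verify the area formula (a derivation added here for illustration, not taken from the cited sources): the salinon is obtained from the large semicircle of radius $r_{1}$ by removing the two small semicircles of radius $(r_{1}-r_{2})/2$ on AD and EB and adding the lower semicircle of radius $r_{2}$ on DE, so $A={\frac {\pi }{2}}r_{1}^{2}-2\cdot {\frac {\pi }{2}}\left({\frac {r_{1}-r_{2}}{2}}\right)^{2}+{\frac {\pi }{2}}r_{2}^{2}={\frac {\pi }{4}}\left(r_{1}+r_{2}\right)^{2},$ in agreement with the formula above.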
Paul Sally Paul Joseph Sally, Jr. (January 29, 1933 – December 30, 2013) was a professor of mathematics at the University of Chicago,[1] where he was the director of undergraduate studies for 30 years.[2][3] His research areas were p-adic analysis and representation theory.[4] Paul Sally (pictured in 2008). Born: Paul Joseph Sally, Jr., January 29, 1933, Roslindale, Boston, Massachusetts, U.S. Died: December 30, 2013 (aged 80), Chicago, Illinois, U.S. Citizenship: United States. Alma mater: Boston College (BS, MS), Brandeis University (PhD). Known for: Mathematics education. Scientific career. Institutions: University of Chicago. Doctoral advisor: Ray Kunze. He created several programs to improve the preparation of school mathematics teachers, and was seen by many as "a legendary math professor at the University of Chicago."[5] Life and education Sally was born in the Roslindale neighborhood of Boston, Massachusetts on January 29, 1933.[6][7] He was a star basketball player at Boston College High School.[4][7] He received his BS and MS degrees from Boston College in 1954 and 1956.[8] After a short career in Boston area high schools and at Boston College[9] he entered the first class of mathematics graduate students at Brandeis in 1957[4] and earned his PhD in 1965.[6] During his graduate career he married Judith D. Sally and had three children in three years. David, the oldest, is a Visiting Associate Professor of Business Administration at Tuck School of Business at Dartmouth College,[4][10] Stephen is a partner at Ropes & Gray,[4][11] and Paul, the youngest, is Superintendent at New Trier High School.[4][12] Sally was diagnosed with type 1 diabetes in 1948.[13] The condition resulted in his use of an eye patch and two prosthetic legs,[14] which caused him to be widely referred to as "Professor Pirate" and "The Math Pirate" around the University of Chicago campus.[7] He was known to detest cell phones in class and destroyed several over the years by inviting students to stomp on them or by throwing them out of a window.[4] Career Sally joined the University of Chicago faculty in 1965 and taught there until his death.[4] He was a member of the Institute for Advanced Study in 1967–68, 1971–72, 1981–82, and 1983–84.[15] While at the IAS he collaborated with Joseph Shalika.[16] In 1983, he became the first director of the University of Chicago School Mathematics Project, which is responsible for the Everyday Mathematics program (also called "Chicago math").[4] He founded Seminars for Elementary Specialists and Mathematics Educators (SESAME) in 1992.[4] He co-founded the Young Scholars Program with Dr. Diane Herrmann in 1988, providing mathematical enrichment for gifted Chicago-area students in grades 7–12.[4][17] Death Sally died December 30, 2013, aged 80, from congestive heart failure, at the University of Chicago Hospital.[2][18][19] Awards • Amoco Foundation Award for Long-Term Excellence in Undergraduate Teaching, 1995[8][20] • American Mathematical Society Distinguished Service Award, 2000[21] • Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics of the Mathematical Association of America, 2002[21] • Fellow of the American Mathematical Society, 2012.[22] Selected publications • Sally, P.J., Jr.; Shalika, J.A. (1968). "Characters of the discrete series of representations of SL(2) over a local field". Proceedings of the National Academy of Sciences of the United States of America. 61 (4): 1231–1237.
Bibcode:1968PNAS...61.1231S. doi:10.1073/pnas.61.4.1231. PMC 225245. PMID 16591722. • Sally, Judith (2003). Trimathlon: A Workout Beyond the School Curriculum. AK Peters, Ltd. ISBN 978-1-56881-184-0. • Sally, Jr., Paul J.; Diane L. Herrmann (2004). Number, Shape and Symmetry: an Introduction to Mathematics. Pacific Grove: Brooks Cole. ISBN 0-534-40539-8. • Sally, Jr., Paul J.; Diane L. Herrmann (2005). Number Theory and Geometry for College Students. Pacific Grove: Brooks Cole. ISBN 0-534-40536-3. • Sally, Judith (2007). Roots to Research: A Vertical Development of Mathematical Problems. Providence, RI: American Mathematical Society. ISBN 978-0-8218-4403-8.[23] • Sally, Jr., Paul J. (2008). Tools of the Trade: Introduction to Advanced Mathematics. Providence, RI: American Mathematical Society. ISBN 978-0-8218-4634-6. References 1. "Department of Mathematics: People". University of Chicago. Retrieved 2008-09-16. 2. Crane, Joy (2013-12-30). "Paul Sally, influential math professor, dies at 80". Chicago Maroon. Retrieved 2013-12-30. 3. "Department of Mathematics: About". University of Chicago. Retrieved 2008-09-16. 4. Golus, Carrie (May–June 2008). "Sally marks the spot". University of Chicago Magazine. 100 (4). Retrieved 2008-09-16. 5. Billy Baker (2008-04-28). "A life of unexpected twists takes her from farm to math department". Boston.com. The Boston Globe. Retrieved 2008-09-17. 6. "Biographies of Candidates" (PDF). Notices of the American Mathematical Society. 49 (8): 970–81. September 2002. Retrieved 2008-09-16. 7. Billy Baker (2007-10-01). "The powerhouse 'pirate' of the math classroom". The Boston Globe. Retrieved 2008-04-15. 8. Steele, Diana (1995-05-25). "Amoco Teaching Award: Paul Sally". University of Chicago Chronicle. Retrieved 2008-09-16. 9. "Sally Award". Boston College. Retrieved 2008-09-17. 10. "Tuck School of Business Faculty Directory". Dartmouth College. Retrieved 2011-05-08. 11. "Ropes & Gray Professional Directory". Ropes & Gray. Retrieved 2011-05-08. 12. "New Trier High School Staff Directory". New Trier High School. Archived from the original on 2011-01-25. Retrieved 2011-05-08. 13. Shaw, Susan (March 2004). "Keeping Your Toes & Feet Healthy". Diabetes Health. Archived from the original on 2008-05-10. Retrieved 2008-09-16. 14. "Paul J. Sally, Jr., influential mathematician and educator, 1933 – 2013". 15. "Past Members Alphabetical: S | IAS School of Mathematics". Math.ias.edu. 29 August 2008. Retrieved 2016-10-21. 16. Sally Jr, P. J.; Shalika, J.A. (1968). "Characters of the discrete series of representations of SL(2) over a local field". Proc. Natl. Acad. Sci. U.S.A. 61 (4): 1231–1237. Bibcode:1968PNAS...61.1231S. doi:10.1073/pnas.61.4.1231. PMC 225245. PMID 16591722. 17. "Paul Sally Gives the Arnold Ross Lecture" (PDF). AMS Member Newsletter. American Mathematical Society: 4. Winter 2004. Retrieved 2008-09-16. 18. Paul Sally Jr. Obituary, Chicago Tribune, retrieved 2014-01-01. 19. "Chicago Tribune Obituary". Chicago Tribune. Retrieved 16 January 2014. 20. Koppes, Steve (2003-01-23). "Sally says students need more than math 'appreciation'". University of Chicago Chronicle. Retrieved 2008-09-16. 21. "Mathematical Association of America: Deborah and Franklin Tepper Haimo Awards for Distinguished College or University Teaching of Mathematics" (PDF). January 2002 Prizes and Awards. San Diego, CA: Joint Mathematics Meetings. 2002-01-07. pp. 36–40. Retrieved 2008-09-16. 22.
List of Fellows of the American Mathematical Society; Ams.org, retrieved 2013-07-11. 23. Holdener, Judy (October 2009). "Review: Roots to Research: A Vertical Development of Mathematical Problems by Judith Sally and Paul J. Sally, Jr". Amer. Math. Monthly. 116 (8): 754–758. doi:10.4169/193009709X460921. JSTOR 40391219. S2CID 218545393. External links • Paul Sally at IMDb • Paul Sally at the Mathematics Genealogy Project
Sally Elizabeth Carlson Sally Elizabeth Carlson (October 2, 1896 – November 1, 2000) was an American mathematician,[1] the first woman and one of the first two people to obtain a doctorate in mathematics from the University of Minnesota.[1][2] Sally Elizabeth Carlson. Born: October 2, 1896, Minneapolis, Minnesota. Died: November 1, 2000 (aged 104). Nationality: American. Academic background. Alma mater: University of Minnesota. Thesis: The Convergence of Certain Methods of Closest Approximation (1924). Doctoral advisor: Dunham Jackson. Academic work. Discipline: Mathematics. Sub-discipline: Functional analysis. Institutions: University of Minnesota. Notable students: Margaret P. Martin. Early life and education Carlson was born in Minneapolis to a large working-class family of Swedish immigrants. She became her high school valedictorian in 1913, graduated from the University of Minnesota in 1917, and earned a master's degree there in 1918. After teaching mathematics for two years, she returned to graduate study in 1920, and completed her Ph.D. at Minnesota in 1924. She and the other member of Minnesota's first pair of mathematics doctoral graduates were both supervised by Dunham Jackson;[1] Carlson's dissertation, in functional analysis, was On The Convergence of Certain Methods of Closest Approximation.[3] Career and contributions She joined the Minnesota faculty, and remained there until her retirement in 1965 as a full professor.[1] There is no record of her supervising doctoral dissertations,[3] and she published little research after the work of her own dissertation. However, she supervised several master's students, and was described as a mentor by Margaret P. Martin, who completed her Ph.D. at Minnesota in 1944.[4] Recognition Carlson won a Distinguished Teacher Award at Minnesota.[1] After her death in 2000, the library of the University of Minnesota memorialized her in an exhibit titled "Elizabeth Carlson, notable alumna".[1] References 1. Green, Judy; LaDuke, Jeanne (2008), "Carlson, Elizabeth", Pioneering Women in American Mathematics: The Pre-1940 PhD's, History of Mathematics, vol. 34, American Mathematical Society, The London Mathematical Society, pp. 153–154, ISBN 978-0-8218-4376-5 2. Riddle, Larry (June 2, 2016), "The First Ph.D.'s", Biographies of Women Mathematicians, Agnes Scott College, retrieved 2017-11-18 3. Sally Elizabeth Carlson at the Mathematics Genealogy Project 4. Murray, Margaret A. M. (2001), Women Becoming Mathematicians: Creating a Professional Identity in Post-World War II America, MIT Press, p. 100, ISBN 9780262632461
Sally Shlaer Sally Shlaer (December 3, 1938 – November 12, 1998) was an American mathematician, software engineer and methodologist,[1] known as co-developer of the 1980s Shlaer–Mellor method for software development. Sally Shlaer. Born: December 3, 1938, Cleveland, Ohio. Died: November 12, 1998 (aged 59), Berkeley, California. Citizenship: USA. Alma mater: Stanford University. Known for: Shlaer–Mellor method. Scientific career. Fields: Computer Science. Institutions: Project Technology, Inc. Biography Born in Cleveland, Ohio, Shlaer received a BS in Mathematics in 1960 from Stanford University and began graduate study at the Australian National University. At Stanford, Shlaer had started programming in Fortran and assembler. In 1965 she started as a software engineer at Los Alamos National Laboratory. In 1977 she became a project manager in software development at Lawrence Berkeley Laboratory, where she guided the development of a new Integrated Control System for the Bay Area Rapid Transit System.[1] At Lawrence Berkeley Laboratory, Shlaer met Stephen J. Mellor, with whom she developed the Shlaer–Mellor method for software development. In 1985 the two founded the software development firm Project Technology, Inc. Shlaer was also a Fellow of the Association for Computing Machinery. Work Software engineering Shlaer started her software engineering career at Los Alamos National Laboratory as a programmer. She designed and implemented a real-time operating system for operating an electron accelerator, and this project became her masterpiece.[2] At Lawrence Berkeley Laboratory, she led a team of software developers to build a new control system for the subway of the Bay Area Rapid Transit system. The existing control system software was considered impossible to continue using, making replacement necessary. Working with Steve Mellor, they replaced the original Fortran and assembly language code with new code, going from seventy thousand lines to two thousand. This analysis has since been called "legendary".[2] Shlaer–Mellor method While developing the new control system for the Bay Area Rapid Transit, Shlaer and Mellor sought to systematize the mechanics of software development and began to design new methods of project management.[2] This resulted in the development of the Shlaer–Mellor method, which in the new millennium has evolved into Executable UML.[3] Publications • 1988. Object Oriented Systems Analysis: Modeling the World in Data. With Stephen J. Mellor. Prentice Hall, 1988. • 1991. Object Life Cycles: Modeling the World In States. With Stephen J. Mellor. Prentice Hall, 1991. Articles, a selection:[4] • 1992. "A Comparison of OOA and OMT". Project Technology, Inc. White paper • 1996. "The Shlaer-Mellor Method". Project Technology, Inc. White paper • 1997. "Recursive Design of an Application-Independent Architecture". With Stephen J. Mellor. In IEEE Software, January 1997. References 1. Sally Shlaer by J.L. Pimsleur, 1999 2. M. Page-Jones (1999) "Sally Shlaer Obituary" in The C++ Report. Vol 11. p. 82 3. Mellor, S; Balcer, M: "Executable UML: A foundation for model-driven architecture", Preface, Addison Wesley, 2002 4. Sally Shlaer DBLP Bibliography Server External links Wikiquote has quotations related to Sally Shlaer. • Sally Shlaer Obituary by J.L. Pimsleur, 1999 • Sally Shlaer Obituary by M.
Page-Jones, 1999 • Sally Shlaer Up-Close and Personal Conversation on geekchic.com/replique Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • United States • Japan • Netherlands Academics • Association for Computing Machinery • DBLP • zbMATH Other • IdRef
George Salmon George Salmon FBA FRS FRSE (25 September 1819 – 22 January 1904) was a distinguished and influential Irish mathematician and Anglican theologian. After working in algebraic geometry for two decades, Salmon devoted the last forty years of his life to theology. His entire career was spent at Trinity College Dublin. George Salmon Born(1819-09-25)25 September 1819 Dublin Died22 January 1904(1904-01-22) (aged 84) Trinity College Dublin SpouseFrances Anne Salvador AwardsRoyal Medal (1868) Copley Medal (1889) Personal life Salmon was born in Dublin, to Michael Salmon and Helen Weekes (the daughter of the Reverend Edward Weekes), but he spent his boyhood in Cork City, where his father Michael was a linen merchant. He attended Hamblin and Porter's School there before starting at Trinity College in 1833. In 1837 he won a scholarship and graduated from Trinity in 1839 with first-class honours in mathematics. In 1841 at the age of 21, he attained a paid fellowship and teaching position in mathematics at Trinity. In 1845 he was additionally appointed to a position in theology at the university, after having been ordained a deacon in 1844 and a priest in the Church of Ireland in 1845. He remained at Trinity for the rest of his career. He died at the Provost's House on 22 January 1904 and was buried in Mount Jerome Cemetery, Dublin.[1] He was an avid reader throughout his life, and his obituary refers to him as "specially devoted to the novels of Jane Austen."[2] Family In 1844 he married Frances Anne Salvador, daughter of Rev J L Salvador of Staunton-upon-Wye in Herefordshire, with whom he had six children, of which only two survived him. Mathematics In the late 1840s and the 1850s Salmon was in regular and frequent communication with Arthur Cayley and J. J. Sylvester. The three of them, together with a small number of other mathematicians (including Charles Hermite), were developing a system for dealing with n-dimensional algebra and geometry. During this period Salmon published about 36 papers in journals. In these papers for the most part he solved narrowly defined, concrete problems in algebraic geometry, as opposed to more broadly systematic or foundational questions. But he was an early adopter of the foundational innovations of Cayley and the others. In 1859 he published the book Lessons Introductory to the Modern Higher Algebra (where the word "higher" means n-dimensional). This was for a while simultaneously the state-of-the-art and the standard presentation of the subject, and went through updated and expanded editions in 1866, 1876 and 1885, and was translated into German and French. From 1858 to 1867 he was the Donegall Lecturer in Mathematics at Trinity. Meanwhile, back in 1848 Salmon had published an undergraduate textbook entitled A Treatise on Conic Sections. This text remained in print for over fifty years, going through five updated editions in English, and was translated into German, French and Italian. Salmon himself did not participate in the expansions and updates of the more later editions. The German version, which was a "free adaptation" by Wilhelm Fiedler, was popular as an undergraduate text in Germany. Salmon also published two other mathematics texts, A Treatise on Higher Plane Curves (1852) and A Treatise on the Analytic Geometry of Three Dimensions (1862). These too were in print for a long time and went through a number of later editions, with Salmon delegating the work of the later editions to others. 
In 1858 he was presented with the Cunningham Medal of the Royal Irish Academy. In June 1863 he was elected a Fellow of the Royal Society followed in 1868 by the award of their Royal Medal "For his researches in analytical geometry and the theory of surfaces". In 1889 Salmon received the Copley Medal of the society, the highest honorary award in British science, but by then he had long since lost his interest in mathematics and science. Salmon received honorary degrees from several universities, including that of Doctor mathematicae (honoris causa) from the Royal Frederick University on 6 September 1902, when they celebrated the centennial of the birth of mathematician Niels Henrik Abel.[3][4] Salmon's theorem is named in honor of George Salmon. Theology From the early 1860s onward Salmon was primarily occupied with theology. In 1866 he was appointed Regius Professor of Divinity at TCD, at which point he resigned from his position in the mathematics department at TCD. In 1871 he accepted an additional post of chancellor of St. Patrick's Cathedral, Dublin. One of his early publications in theology was in 1853 as a contributor to a book of rebuttals to the Tracts for the Times. Arguments against Roman Catholicism were a recurring theme in Salmon's theology and culminated in his widely read 1888 book Infallibility of the Church in which he argued that certain beliefs of the Roman church were absurd, especially the beliefs in the infallibility of the church and the infallibility of the pope. Salmon also wrote books about eternal punishment, miracles, and interpretation of the New Testament. His book An Historical Introduction to the Study of the Books of the New Testament, which was widely read, is an account of the reception and interpretation of the gospels in the early centuries of Christianity as seen through the writings of leaders such as Irenaeus and Eusebius. Chess Salmon was a keen chess player. He was a patron to the University Chess Club,[5] and was also the President of Dublin Chess Club from 1890–1903.[6] He participated in the second British Chess Congress and had the honour of playing the chess prodigy Paul Morphy in Birmingham, England, on 27 August 1858.[7][8] He beat Daniel Harrwitz in an interesting game.[9] Even in his book Infallibility of the Church, Salmon mentions chess a few times: • He argues that the doctrine of papal infallibility is vitally important for opponents of Catholicism to refute; otherwise all other arguments would be of little importance, as when a chessplayer wins many pieces but his king is checkmated. • In another chess reference Salmon said that if one met someone who says that he has never been beaten, this player could be given rook odds. Thus "the delusion of invincibility can never grow up in the mind of anyone except one who has never met a strong antagonist."[10] • Salmon said that if one played someone who would normally receive queen odds, then one would go easy and not be too strict, e.g. allowing take-backs. Thus he is so convinced that the Popes have erred that he is not threatened by acknowledging when they have been right. Provost of Trinity College Dublin Salmon was Provost of Trinity from 1888 until his death in 1904. The highlight of his career may have been when in 1892 he presided over the great celebrations marking the tercentenary of the College, which had been founded by Queen Elizabeth I. 
Admission of women to Trinity In 1870, Trinity had introduced the Examinations for Women, following a request from Alexandra College.[11] In 1880, while Humphrey Lloyd was provost, Samuel Haughton, Anthony Traill, John Jellett and others proposed that degrees be open to women, on the same terms as men. Lloyd, as provost, was not a supporter, and the motion was defeated. In 1881, Jellett became provost, and a committee was set up in 1882 to investigate the matter, including future provosts Salmon and Traill, respectively opposing and supporting admission. Despite the support of the provost, the committee was not effective. Salmon was provost during the campaign for admission by the Central Association of Irish Schoolmistresses (CAISM), in which Alice Oldham was an important figure. Salmon and the board were not generally receptive to the campaign. While Salmon was a conservative, his strong opposition to the admission of women cannot be explained simply as conservatism; he had been a member of the council of Alexandra College, had supported girls competing on equal terms with boys in Intermediate examinations, and his daughter, from the provost's house, had acted as coordinator for the Examinations for Women and was a member of CAISM.[11][12] In 1896, all eight members of the board were over 70 years of age, but by 1901 retirements and deaths had resulted in the majority of the board being pro-admission.[13] In 1902, John Mahaffy proposed that the time had come to take action on the issue of awarding degrees to women. This was passed by the board, and, though the motion was opposed by Salmon, a committee was set up to report, and by the end of the year the board resolved that the Lord Lieutenant, William Ward, should be petitioned to move the king to issue new Letters Patent for the admission of women.[11] In 1903, Ward replied, indicating that the agreement of the provost was essential before Letters Patent would be issued. Salmon wrote withdrawing his formal objections in July 1903. The Letters Patent were received by the board on 16 January 1904. This was Salmon's last board meeting.[11] He is alleged to have said that women would only be admitted to Trinity as students over his dead body. Coincidentally, immediately after his death on 22 January 1904, Isabel Marion Weir Johnston became the first woman undergraduate to succeed in registering at Trinity, and by the end of the year dozens of other women had done likewise.[14] She recalled, "When I arrived in Dublin 1904, I was informed that he [Salmon] had died that day, and the examination had to be put off until after the funeral."[15] Death Salmon continued to attend board meetings up to his death.[16] At his death, Salmon had been a familiar figure in Trinity for over 62 years, and was held in affection even by those who disagreed with him.[13][16] Both Traill and Mahaffy were eager to succeed Salmon as provost, and were lobbying to secure the position on the day of his death. Just before his death, Salmon is said to have anticipated this in another apocryphal story. He dreamed that he was dead, and his funeral was processing across front square, followed by weeping Fellows and Scholars.
His coffin was laid in the chapel, "and then", he said, "I sat up in my coffin, whereupon Mahaffy and Trail wept louder than ever".[13] Bibliography • 1848: A Treatise on Conic Sections, Third edition, 1855, Fourth edition, 1863 via Internet Archive • 1852: A Treatise on Higher Plane Curves: Intended as a sequel to a Treatise on Conic Sections, Third edition, 1879 • 1859: Lessons Introductory to the Modern Higher Algebra 172 pages. 2nd edition (1866) 326 pages. 3rd edition, 1876 354 pages. 4th edition (1885) 360 pages (with some additions by Cathcart to the chapters on binary quantics). 5th edition (1964) 376 pages ISBN 978-0828401500 (the contents of the 4th edition, together with some sections from the 2nd edition omitted in the 3rd and 4th editions). • 1862: A Treatise on the Analytic Geometry of Three Dimensions; 5th edition, 1915 via Internet Archive, Reviews:[17][18] • 1864: The Eternity of Future Punishment • 1873: The Reign of Law • 1881: Non-miraculous Christianity • 1885: Introduction to the New Testament • 1888: The Infallibility of the Church, Third edition, 1899 • 1897: Some Thoughts on the Textual Criticism of the New Testament via Internet Archive See also • Cubic surface • Glossary of invariant theory • Quaternary cubic • Ternary quartic • Salmon points References 1. Biographical Index of Former Fellows of the Royal Society of Edinburgh 1783–2002 (PDF). The Royal Society of Edinburgh. July 2006. ISBN 0-902-198-84-X. 2. "Men and Women," The Sphere. 6 February 1904, p. 124 3. "Foreign degrees for British men of Science". The Times. No. 36867. London. 8 September 1902. p. 4. 4. "Honorary doctorates from the University of Oslo 1902-1910". (in Norwegian) 5. History of Dublin University Chess Society Archived 29 June 2008 at the Wayback Machine. chesssoc.org 6. Luce, A.A. (1967) A History of Dublin Chess Club, Irish Printers Ltd, Dublin. 7. Paul Morphy vs George Salmon, Birmingham, 27 August 1858. Chessgames.com 8. Sergeant, Philip W. (1937) Morphy's Games of Chess. G. Bell and Sons Ltd. 9. Harding, Tim (2010) Playing the Morphy Number Game, ChessCafe.com. 10. The Infallibility of the Church, London: John Murray, 4th ed. 1914, p. 111. 11. Parkes, Susan M., ed. (2004). "The Campaign for Admission, 1870-1904". A Danger to the Men? A History of Women in Trinity College Dublin 1904-2004. Dublin: Lilliput Press. ISBN 978-1-84351-040-6. 12. Dublin University Calendar for the year 1897. 1897. 13. McDowell, R.B.; Webb, D.A. (2004) [1982]. Trinity College Dublin 1592-1952 An academic history. Dublin: Trinity College Dublin Press and Environmental Publications. ISBN 1-871408-25-3. 14. Royal Irish Academy, Dictionary of Irish Biography – George Salmon by Roderick Gow Archived 15 November 2008 at the Wayback Machine 15. Have Women Made a Difference in Irish Universities? 1850-2010 By Judith Harford 16. "Death of the Provost of Trinity College". Irish Times. 23 January 1904. Retrieved 11 April 2021. 17. Snyder, Virgil (1912). "Review: A Treatise on the Analytic Geometry of Three Dimensions, vol. 1, by George Salmon". Bull. Amer. Math. Soc. 19 (2): 80–83. doi:10.1090/S0002-9904-1912-02287-5. 18. Snyder, Virgil (1915). "Review: ''A Treatise on the Analytic Geometry of Three Dimensions, vol. 2, by George Salmon". Bull. Amer. Math. Soc. 22 (3): 147–149. doi:10.1090/S0002-9904-1915-02744-8. Further reading • C. J. Joly (1905) "George Salmon 1819 — 1904", Proceedings of the Royal Society 75:347–55. External links Wikiquote has quotations related to George Salmon. 
Wikisource has the text of a 1905 New International Encyclopedia article about "George Salmon". • O'Connor, John J.; Robertson, Edmund F., "George Salmon", MacTutor History of Mathematics Archive, University of St Andrews • Sarah Nesbitt (2005) George Salmon: from Mathematics to Theology Archived 3 March 2016 at the Wayback Machine from University of Saint Andrews. • G. Salmon (1879) Treatise on Conic Sections, link from University of Michigan Historical Math Collection. • Salmon's Tracts from Evangelical Tracts • Rod Gow (1997) George Salmon: His Mathematical Work and Influence from Bulletin of the Irish Mathematical Society. • George Salmon player profile and games at Chessgames.com Authority control International • FAST • ISNI • VIAF National • Spain • France • BnF data • Catalonia • Germany • Israel • United States • Japan • Czech Republic • Australia • Greece • Netherlands • Poland • Portugal • Vatican Academics • CiNii • MathSciNet • zbMATH People • Ireland • Deutsche Biographie • Trove Other • SNAC • IdRef
Salomon Eduard Gubler Salomon Eduard Gubler (7 July 1845 – 6 November 1921) was a Swiss mathematician. With Johann Heinrich Graf he published Einleitung in die Theorie der Bessel'schen Funktionen (Introduction to the Theory of Bessel Functions) in two volumes (1898–1900). He was the author of well-regarded mathematics textbooks and of numerous reports on the methodology and organization of mathematics teaching, and he was a member of the Swiss commission for the teaching of mathematics and founder of the Swiss association of teachers of mathematics.[1] His main research interest was Bessel functions.[2] Salomon Eduard Gubler Born(1845-07-07)7 July 1845 Wila, Switzerland Died6 November 1921(1921-11-06) (aged 76) Zürich, Switzerland Alma materUniversity of Bern SpouseElise Margreth Iselin Scientific career FieldsMathematics InstitutionsSecondary schools in Zürich ThesisVerwandlung einer hypergeometrischen Reihe im Anschluss an das Integral $\int _{0}^{\infty }J^{a}(x)\,e^{-bx}x^{c-1}\,dx$ (published 1894) Doctoral advisorLudwig Schläfli Life and work Gubler graduated from the University of Bern in 1870 as Ludwig Schläfli's student. The university has no records concerning his doctoral thesis, which was published in 1894.[3] He spent his academic career in secondary schools,[1] but it seems that he also taught at the University of Zurich.[2] He retired in 1914. References 1. Fehr 1921–1922, p. 83. 2. Eminger 2015, p. 110. 3. Eminger 2015, p. 109. Bibliography • Eminger, Stefanie (2012). "Viribus unitis! shall be our watchword: the first International Congress of Mathematicians, held 9–11 August 1897 in Zurich". BSHM Bulletin. 27 (3): 155–168. doi:10.1080/17498430.2012.687496. ISSN 1749-8430. S2CID 121968603. • Eminger, Stefanie Ursula (2015). Carl Friedrich Geiser and Ferdinand Rudio: The Men Behind the First International Congress of Mathematicians (PDF). St Andrews University. • Fehr, H. (1921–1922). "Nécrologie Ed. Gubler". L'Enseignement Mathématique (in French). 22: 83. ISSN 0013-8584. External links • O'Connor, John J.; Robertson, Edmund F., "Salomon Eduard Gubler", MacTutor History of Mathematics Archive, University of St Andrews Authority control International • ISNI • VIAF National • Germany • Czech Republic • Netherlands Academics • zbMATH Other • IdRef
Salvatore Pincherle Salvatore Pincherle (March 11, 1853 – July 10, 1936) was an Italian mathematician. He contributed significantly to (and arguably helped to found) the field of functional analysis, established the Italian Mathematical Union (Italian: "Unione Matematica Italiana"), and was president of the Third International Congress of Mathematicians. The Pincherle derivative is named after him. Salvatore Pincherle BornMarch 11, 1853 Trieste, Austrian Empire DiedJuly 10, 1936 (1936-07-10) (aged 83) Bologna, Italy NationalityItalian Known forPincherle derivative Pincherle polynomials AwardsFellow of the Royal Society of Edinburgh Scientific career FieldsFunctional analysis Institutions • University of Palermo • University of Bologna • Italian Mathematical Union Doctoral students • Carlo Severini • Ugo Amaldi Pincherle was born into a Jewish family in Trieste (then part of the Austrian Littoral) and spent his childhood in Marseille, France. After completing his basic schooling in Marseille, he left in 1869 to study mathematics at the University of Pisa, where he was a student under both Enrico Betti and Ulisse Dini. After he graduated in 1874, he taught at a school in Pavia until he received a scholarship in 1877. With the scholarship he studied abroad at the University of Berlin, where he met Karl Weierstrass. Pincherle contributed to Weierstrass' theory of analytic functions, and in 1880, influenced by Weierstrass, he wrote an expository paper in the Giornale di Matematiche, which proved to be a significant paper in the field of analysis. Throughout his life, Pincherle's work greatly reflected the influence that Weierstrass had on him. He later collaborated with Vito Volterra and explored Laplace transforms and other parts of functional analysis. From 1880 until 1928, Pincherle was a Professor of Mathematics at the University of Bologna. In 1901, collaborating with Ugo Amaldi, he published his main scientific book, Le Operazioni Distributive e loro Applicazioni all'Analisi. In Bologna in 1922, he established the Italian Mathematical Union and became its first President, holding the position until 1936. In 1924, he attended the Second International Congress of Mathematicians in Toronto, Ontario, Canada. Four years later, he became President of the Third International Congress and played a significant role in re-admitting German mathematicians after a ban imposed because of World War I. At this Congress, Jacques Hadamard declared in his review lecture Le développement et le rôle scientifique du Calcul fonctionnel that Pincherle was one of the most prominent founders of functional analysis. Following the Third Congress, Pincherle retired from the university. In honor of the centenary of his birth, the Italian Mathematical Union edited a selection of 62 of his notes and treatises; they were published in 1954 in Rome. References • O'Connor, John J.; Robertson, Edmund F., "Salvatore Pincherle", MacTutor History of Mathematics Archive, University of St Andrews • Mainardi, Francesco; Gianni Pagnini (April 2003). "Salvatore Pincherle: the pioneer of the Mellin-Barnes integrals". Journal of Computational and Applied Mathematics. 153 (1): 331–342. arXiv:math/0702520. Bibcode:2003JCoAM.153..331M. doi:10.1016/S0377-0427(02)00609-X. S2CID 14117680.
External links • Salvatore Pincherle at the Mathematics Genealogy Project Authority control International • FAST • ISNI • VIAF National • France • BnF data • Catalonia • Germany • Italy • Israel • United States • Czech Republic • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • zbMATH People • Italian People • Deutsche Biographie Other • IdRef
Saly Ruth Ramler Saly Ruth Ramler (1894–1993), also known as Saly Ruth Struik, was the first woman to receive a mathematics PhD from the German University in Prague, now known as Charles University.[1] Her 1919 dissertation, on the axioms of affine geometry, was supervised by Gerhard Kowalewski and Georg Alexander Pick.[2] She married the Dutch mathematician and historian of mathematics Dirk Jan Struik in 1923. Between 1924 and 1926, the pair traveled Europe and met many prominent mathematicians, using Dirk Struik's Rockefeller fellowship. In 1926, they emigrated to the United States, and Dirk Struik accepted a position at MIT.[3] Saly Ruth Ramler BornNovember 10, 1894 Kolomyia Died1993 Alma materCharles University in Prague Scientific career FieldsMathematics Doctoral advisorGerhard Kowalewski Georg Alexander Pick References 1. Bečvářová, Martina (2018). "Saly Ruth Struik, 1894–1993". The Mathematical Intelligencer. 40 (4): 79–85. doi:10.1007/s00283-018-9835-1. S2CID 126187647. 2. "Saly Ruth (Ramler) Struik". The Mathematics Genealogy Project. Retrieved 5 November 2018. 3. "Remembering Dirk Jan Struik, 1894-2000". Mathematical Association of America. Retrieved 6 November 2018. External links • Saly Ruth Ramler at the Mathematics Genealogy Project Authority control International • VIAF National • Germany Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
Samarendra Kumar Mitra Samarendra Kumar Mitra (14 March 1916 – 26 September 1998) was an Indian scientist and mathematician. He designed, developed and constructed, in 1953-54, India's first computer (an electronic analog computer) at the Indian Statistical Institute (ISI), Calcutta (presently Kolkata). He began his career as a research physicist at the Palit Laboratory of Physics, Rajabazar Science College (University of Calcutta). In 1950, he joined the Indian Statistical Institute (ISI), Calcutta, an institute of national importance, where he worked in various capacities such as professor, research professor and director. Samarendra Kumar Mitra সমরেন্দ্র কুমার মিত্র Born(1916-03-14)14 March 1916 Calcutta, British India Died26 September 1998 (aged 82) Kolkata Alma mater • Presidency College (University of Calcutta) • Rajabazar Science College (University of Calcutta) Known forDesigned, developed and constructed India's first indigenous computer (an electronic analog computer) in ISI in 1953 AwardsCunningham Memorial Prize in Chemistry Scientific career FieldsChemistry Computer science Institutions • Indian Statistical Institute • Council of Scientific & Industrial Research (CSIR,India) • Harvard University • Institute for Advanced Study, Princeton, United States • Mathematical Laboratory, University of Cambridge, U.K. • Palit Laboratory of Physics, University of Calcutta • UNTAA Adviser on Computing, Moscow Doctoral advisorMeghnad Saha Other academic advisorsS. N. Bose InfluencesAlbert Einstein, Wolfgang Pauli, John von Neumann, Niels Bohr, Robert Oppenheimer Notes Mitra was recognized as the father of computers in India by the Calcutta Mathematical Society Mitra was the founder and first head of the Computing Machines and Electronics Division at the Indian Statistical Institute (ISI), Calcutta. In 1953-54, India's first indigenous electronic analogue computer for solving linear equations with 10 variables and related problems was designed and developed by Samarendra Kumar Mitra and was built under his direct personal supervision and guidance by Ashish Kumar Maity in the Computing Machines and Electronics Laboratory at the Indian Statistical Institute (ISI), Calcutta.[1] This computer was used to compute numerical solutions of simultaneous linear equations using a modified version of Gauss–Seidel iteration; a basic sketch of such an iteration is shown below. Subsequently, in 1963, the ISI, Calcutta began design and development of the first second-generation indigenous digital computer of India in joint collaboration with Jadavpur University (JU), Calcutta. This collaboration was primarily led by Mitra, as he was the Head of the Computing Machines and Electronics Laboratory, ISI. He designed, developed, and constructed a general-purpose high-speed electronic digital computer, the ISIJU computer (Indian Statistical Institute – Jadavpur University Computer). Under the leadership of Mitra, the first second-generation indigenous digital computer of India was produced, namely the transistor-driven machine ISIJU-1, which became operational in 1964. The Computer and Communication Sciences Division of Indian Statistical Institute (ISI) was started under Samarendra Kumar Mitra and has produced many eminent scientists. The first annual convention of the Computer Society of India (CSI) was hosted by ISI in 1965. Mitra was a self-taught scholar with wide-ranging interests in varied fields such as mathematics, physics, chemistry, biology, poultry science, Sanskrit language, philosophy, religion and literature.
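The "modified" Gauss–Seidel scheme used on the 1953-54 machine is not documented here, but the plain Gauss–Seidel iteration on which it was based is straightforward to sketch. The following minimal Python example (the function name and the small test system are illustrative assumptions, not drawn from Mitra's work) solves a diagonally dominant linear system by repeatedly sweeping through the unknowns:

    # Minimal sketch of Gauss-Seidel iteration for a linear system A x = b.
    # Illustrative only: the 1953-54 machine's "modified" scheme is not
    # reproduced here, and the example system below is made up.
    def gauss_seidel(A, b, iterations=50):
        n = len(b)
        x = [0.0] * n
        for _ in range(iterations):
            for i in range(n):
                # Use the newest available estimates of the other unknowns.
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / A[i][i]
        return x

    # A small diagonally dominant system, for which the iteration converges.
    A = [[4.0, 1.0, 2.0],
         [3.0, 5.0, 1.0],
         [1.0, 1.0, 3.0]]
    b = [4.0, 7.0, 3.0]
    print(gauss_seidel(A, b))  # approximately [0.5, 1.0, 0.5]

Each sweep updates one unknown at a time using the most recent values of the others, which is what made the method attractive for a machine with very limited storage.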
Biography Samarendra Kumar Mitra, known as "the father of the Indian computer revolution", was born on 14 March 1916 in Calcutta, the elder of two children. He was the only son and had a younger sister. His father was Sir Rupendra Coomar Mitter and his mother was Lady Sudhahasinee Mitter. His father held an MSc in mathematics, was a gold medalist in both mathematics and Law from the University of Calcutta, and was an advocate by profession who practiced in the Calcutta High Court from 1913 to 1934. In 1934, Sir Rupendra Coomar Mitter was appointed a Judge of the Calcutta High Court, served as Acting Chief Justice in 1947 at the time of India's independence, and continued as a Judge until 1950. Additionally, he was knighted in 1926. Thereafter, he was the Chairman of the Labour Appellate Tribunal from 1950 to 1955. Education Samarendra Kumar Mitra studied at the Bowbazar High School, Calcutta, and completed his Matriculation in 1st Division in 1931. In 1933, he completed his Intermediate in Science (I.Sc.) in 1st Division from Presidency College (presently Presidency University), Calcutta (now Kolkata). In 1935, he completed his Bachelor of Science with Honours (B.Sc. Hons) in Chemistry, with 2nd rank, from Presidency College and was awarded the Cunningham Memorial Prize in Chemistry. In 1937, he completed his Master of Science (M.Sc.) in Chemistry and in 1940 his M.Sc. in Applied Mathematics from the Rajabazar Science College, University of Calcutta. In later years, he worked towards a PhD in Physics under Professor Meghnad Saha, but did not pursue it after his mentor's death in 1956. Career He worked as a research physicist under the Council of Scientific & Industrial Research (CSIR, India) scheme on the design and development of an air-driven ultracentrifuge, at the Palit Laboratory of Physics, University of Calcutta, from 1944 to 1948. He was awarded a UNESCO Special Fellowship for the study of high-speed computing machines in the United States of America and the United Kingdom during 1949–50 and worked at Harvard University, at the Institute for Advanced Study, Princeton, United States, and at the Mathematical Laboratory, University of Cambridge, U.K. During his time at the Institute for Advanced Study, he became close to numerous eminent physicists and mathematicians, such as Albert Einstein, Wolfgang Pauli and John von Neumann, and attended lectures by Niels Bohr and Robert Oppenheimer. He is known to have had many discussions with Einstein and to have spent much of his time with him while at Princeton. He worked in various capacities from 1950 to 1976 at the Indian Statistical Institute (ISI), Calcutta, serving as professor, research professor and director. The Computing Machines and Electronics Division at the ISI, Calcutta was founded by Mitra.[2] In 1953-54 he designed and constructed the first computer built in India. This was an electronic analogue computer for solving linear equations with ten variables and related problems.[3] He was UNTAA Adviser on Computing, Moscow, and was responsible for bringing massive technical aid to India from the U.S.S.R., amounting to nearly one crore rupees, under UNTAA in 1955. He was an adviser to the Ministry of Defense, Government of India, for the computation of ballistic trajectories. Under his advice, the firing table for the first gun produced in India was prepared in 1962.
He was a member of the Indian National Committee for Space Research from 1962 to 64. In 1963, he was the leader of the team for the design and construction of a general purpose high speed electronic digital computer, the ISI-JU computer (Indian Statistical Institute-Jadavpur University). He was a Technical Adviser during 1969–1976 to the Union Public Service Commission, Government of India. He had several research publications in mathematics, theoretical physics, and computer science. He travelled on work to United States, United Kingdom, Soviet Union, Switzerland, France, Czechoslovakia and Afghanistan. He was a member of the Calcutta Mathematical Society, Indian Association for the Cultivation of Science, Association for Computing Machinery, U.S. and the Indian Statistical Institute, India. He was Professor Emeritus and chairman, Calcutta Mathematical Society and Professor of the N.R. Sen Center for Pedagogical Mathematics. His other interests included translating from Sanskrit books of Scientific interest, such as Vaisheshik Darshan by Maharishi Kanada, a Hindu sage and philosopher. References 1. Indian Statistical Institute 2. Menon, Nikhil (March 2018). "'Fancy Calculating Machine': Computers and planning in independent India". Modern Asian Studies. 52 (2): 421–457. doi:10.1017/S0026749X16000135. ISSN 0026-749X. S2CID 148820998. 3. Menon, Nikhil (March 2018). "'Fancy Calculating Machine': Computers and planning in independent India". Modern Asian Studies. 52 (2): 421–457. doi:10.1017/S0026749X16000135. ISSN 0026-749X. S2CID 148820998. Further reading • Devaprasanna Sinha (August 2012). "Glimpsing through Early Days of Computers in Kolkata". Computer Society of India. pp. 5–6. Retrieved 17 November 2012. • "50 Years of IT: Disrupting Moments: 1956–1965: The Beginning". Dataquest magazine, India. 30 December 2006. Retrieved 18 November 2012. Authority control: Academics • MathSciNet • zbMATH
List of logic symbols In logic, a set of symbols is commonly used to express logical representation. The following table lists many common symbols, together with their name, how they should be read out loud, and the related field of mathematics. Additionally, the subsequent columns contains an informal explanation, a short example, the Unicode location, the name for use in HTML documents,[1] and the LaTeX symbol. Basic logic symbols Symbol Unicode value (hexadecimal) HTML value (decimal) HTML entity (named) LaTeX symbol Logic Name Read as Category Explanation Examples ⇒ → ⊃ U+21D2 U+2192 U+2283 &#8658; &#8594; &#8835; &rArr; &rarr; &sup; $\Rightarrow $\Rightarrow $\implies $\implies $\to $\to or \rightarrow $\supset $\supset material implication implies; if ... then propositional logic, Heyting algebra $A\Rightarrow B$ is false when A is true and B is false but true otherwise. $\rightarrow $ may mean the same as $\Rightarrow $ (the symbol may also indicate the domain and codomain of a function; see table of mathematical symbols). $\supset $ may mean the same as $\Rightarrow $ (the symbol may also mean superset). $x=2\Rightarrow x^{2}=4$ is true, but $x^{2}=4\Rightarrow x=2$ is in general false (since x could be −2). ⇔ ≡ ↔ U+21D4 U+2261 U+2194 &#8660; &#8801; &#8596; &hArr; &equiv; &LeftRightArrow; $\Leftrightarrow $\Leftrightarrow $\equiv $\equiv $\leftrightarrow $\leftrightarrow $\iff $\iff material equivalence if and only if; iff; means the same as propositional logic $A\Leftrightarrow B$ is true only if both A and B are false, or both A and B are true. $x+5=y+2\Leftrightarrow x+3=y$ ¬ ˜ ! U+00AC U+02DC U+0021 &#172; &#732; &#33; &not; &tilde; &excl; $\neg $\lnot or \neg $\sim $\sim negation not propositional logic The statement $\lnot A$ is true if and only if A is false. A slash placed through another operator is the same as $\neg $ placed in front. $\neg (\neg A)\Leftrightarrow A$ $x\neq y\Leftrightarrow \neg (x=y)$ $\mathbb {D} $ U+1D53B &#120123; &Dopf; \mathbb{D} Domain of discourse Domain of predicate Predicate (mathematical logic) $\mathbb {D} \mathbb {:} \mathbb {R} $ ∧ · & U+2227 U+00B7 U+0026 &#8743; &#183; &#38; &and; &middot; &amp; $\wedge $\wedge or \land $\cdot $\cdot $\&$\&[2] logical conjunction and propositional logic, Boolean algebra The statement A ∧ B is true if A and B are both true; otherwise, it is false. n < 4  ∧  n >2  ⇔  n = 3 when n is a natural number. ∨ + ∥ U+2228 U+002B U+2225 &#8744; &#43; &#8741; &or; &plus; &parallel; $\lor $\lor or \vee $\parallel $\parallel logical (inclusive) disjunction or propositional logic, Boolean algebra The statement A ∨ B is true if A or B (or both) are true; if both are false, the statement is false. n ≥ 4  ∨  n ≤ 2  ⇔ n ≠ 3 when n is a natural number. ↮ ⊕ ⊻ ≢ U+21AE U+2295 U+22BB U+2262 &#8622; &#8853; &#8891; &#8802; &oplus; &veebar; &nequiv; $\oplus $\oplus $\veebar $\veebar $\not \equiv $\not\equiv exclusive disjunction xor; either ... or propositional logic, Boolean algebra The statement A ↮ B is true when either A or B, but not both, are true. A ⊻ B means the same. (¬A) ↮ A is always true, and A ↮ A always false, if vacuous truth is excluded. ⊤ T 1 ■ U+22A4 U+25A0 &#8868; &top; $\top $\top Tautology top, truth, full clause propositional logic, Boolean algebra, first-order logic ⊤ is unconditionally true. ⊤(A) ⇒ A is always true. ⊥ F 0 □ U+22A5 U+25A1 &#8869; &perp; $\bot $\bot Contradiction bottom, falsum, falsity, empty clause propositional logic, Boolean algebra, first-order logic ⊥ is unconditionally false. 
(The symbol ⊥ may also refer to perpendicular lines.) ⊥(A) ⇒ A is always false. ∀ () U+2200 &#8704; &forall; $\forall $\forall universal quantification for all; for any; for each first-order logic ∀ x: P(x) or (x) P(x) means P(x) is true for all x. $\forall n\in \mathbb {N} :n^{2}\geq n.$ ∃ U+2203 &#8707; &exist; $\exists $\exists existential quantification there exists first-order logic ∃ x: P(x) means there exists at least one x such that P(x) is true. $\exists n\in \mathbb {N} :$ :} n is even. ∃! U+2203 U+0021 &#8707; &#33; &exist;! $\exists !$ !} \exists ! uniqueness quantification there exists exactly one first-order logic ∃! x: P(x) means there exists exactly one x such that P(x) is true. $\exists !n\in \mathbb {N} :n+5=2n.$ ≔ ≡ :⇔ U+2254 (U+003A U+003D) U+2261 U+003A U+21D4 &#8788; (&#58; &#61;) &#8801; &#8860; &coloneq; &equiv; &hArr; $:=$ :=} := $\equiv $\equiv $:\Leftrightarrow $ :\Leftrightarrow } :\Leftrightarrow definition is defined as everywhere x ≔ y or x ≡ y means x is defined to be another name for y ( The symbol ≡ can also mean other things, such as congruence). P :⇔ Q means P is defined to be logically equivalent to Q. $\cosh x:={\frac {e^{x}+e^{-x}}{2}}$ A ⊕ B :⇔ (A ∨ B) ∧ ¬(A ∧ B) ( ) U+0028 U+0029 &#40; &#41; &lpar; &rpar; $(~)$ ( ) precedence grouping parentheses; brackets everywhere Perform the operations inside the parentheses first. (8 ÷ 4) ÷ 2 = 2 ÷ 2 = 1, but 8 ÷ (4 ÷ 2) = 8 ÷ 2 = 4. ⊢ U+22A2 &#8866; &vdash; $\vdash $\vdash turnstile proves propositional logic, first-order logic x ⊢ y means x proves (syntactically entails) y (A → B) ⊢ (¬B → ¬A) ⊨ U+22A8 &#8872; &vDash; $\vDash $\vDash, \models double turnstile models propositional logic, first-order logic x ⊨ y means x models (semantically entails) y (A → B) ⊨ (¬B → ¬A) Advanced and rarely used logical symbols These symbols are sorted by their Unicode value: Symbol Unicode value (hexadecimal) HTML value (decimal) HTML entity (named) LaTeX symbol Logic Name Read as Category Explanation Examples ̅ U+0305 COMBINING OVERLINE used format for denoting Gödel numbers. denoting negation used primarily in electronics. using HTML style "4̅" is a shorthand for the standard numeral "SSSS0". "A ∨ B" says the Gödel number of "(A ∨ B)". "A ∨ B" is the same as "¬(A ∨ B)". ↑ | U+2191 U+007C UPWARDS ARROW VERTICAL LINE Sheffer stroke, the sign for the NAND operator (negation of conjunction). ↓ U+2193 DOWNWARDS ARROW Peirce Arrow, the sign for the NOR operator (negation of disjunction). ⊙ U+2299 $\odot $\odot CIRCLED DOT OPERATOR the sign for the XNOR operator (negation of exclusive disjunction). ∁ U+2201 COMPLEMENT ∄ U+2204 ∄\nexists THERE DOES NOT EXIST strike out existential quantifier, same as "¬∃" ∴ U+2234 ∴\therefore THEREFORE Therefore ∵ U+2235 ∵\because BECAUSE because ⊧ U+22A7 MODELS is a model of (or "is a valuation satisfying") ⊨ U+22A8 ⊨\vDash TRUE is true of ⊬ U+22AC ⊬\nvdash DOES NOT PROVE negated ⊢, the sign for "does not prove" T ⊬ P says "P is not a theorem of T" ⊭ U+22AD ⊭\nvDash NOT TRUE is not true of † U+2020 DAGGER it is true that ... Affirmation operator ⊼ U+22BC NAND NAND operator ⊽ U+22BD NOR NOR operator ◇ U+25C7 WHITE DIAMOND modal operator for "it is possible that", "it is not necessarily not" or rarely "it is not probably not" (in most modal logics it is defined as "¬◻¬") ⋆ U+22C6 STAR OPERATOR usually used for ad-hoc operators ⊥ ↓ U+22A5 U+2193 UP TACK DOWNWARDS ARROW Webb-operator or Peirce arrow, the sign for NOR. Confusingly, "⊥" is also the sign for contradiction or absurdity. 
⌐ U+2310 REVERSED NOT SIGN ⌜ ⌝ U+231C U+231D \ulcorner \urcorner TOP LEFT CORNER TOP RIGHT CORNER corner quotes, also called "Quine quotes"; for quasi-quotation, i.e. quoting specific context of unspecified ("variable") expressions;[3] also used for denoting Gödel number;[4] for example "⌜G⌝" denotes the Gödel number of G. (Typographical note: although the quotes appears as a "pair" in unicode (231C and 231D), they are not symmetrical in some fonts. In some fonts (for example Arial) they are only symmetrical in certain sizes. Alternatively the quotes can be rendered as ⌈ and ⌉ (U+2308 and U+2309) or by using a negation symbol and a reversed negation symbol ⌐ ¬ in superscript mode. ) ◻ □ U+25FB U+25A1 WHITE MEDIUM SQUARE WHITE SQUARE modal operator for "it is necessary that" (in modal logic), or "it is provable that" (in provability logic), or "it is obligatory that" (in deontic logic), or "it is believed that" (in doxastic logic); also as empty clause (alternatives: $\emptyset $ and ⊥) ⟛ U+27DB LEFT AND RIGHT TACK semantic equivalent ⟡ U+27E1 WHITE CONCAVE-SIDED DIAMOND never modal operator ⟢ U+27E2 WHITE CONCAVE-SIDED DIAMOND WITH LEFTWARDS TICK was never modal operator ⟣ U+27E3 WHITE CONCAVE-SIDED DIAMOND WITH RIGHTWARDS TICK will never be modal operator □ U+25A1 WHITE SQUARE always modal operator ⟤ U+25A4 WHITE SQUARE WITH LEFTWARDS TICK was always modal operator ⟥ U+25A5 WHITE SQUARE WITH RIGHTWARDS TIC will always be modal operator ⥽ U+297D \strictif RIGHT FISH TAIL sometimes used for "relation", also used for denoting various ad hoc relations (for example, for denoting "witnessing" in the context of Rosser's trick) The fish hook is also used as strict implication by C.I.Lewis $p$ ⥽ $q\equiv \Box (p\rightarrow q)$. See here for an image of glyph. Added to Unicode 3.2.0. ⨇ U+2A07 TWO LOGICAL AND OPERATOR Usage in various countries Poland As of 2014 in Poland, the universal quantifier is sometimes written ∧, and the existential quantifier as ∨. The same applies for Germany. Japan The ⇒ symbol is often used in text to mean "result" or "conclusion", as in "We examined whether to sell the product ⇒ We will not sell it". Also, the → symbol is often used to denote "changed to", as in the sentence "The interest rate changed. March 20% → April 21%". See also • Józef Maria Bocheński • List of notation used in Principia Mathematica • List of mathematical symbols • Logic alphabet, a suggested set of logical symbols • Logic gate § Symbols • Logical connective • Mathematical operators and symbols in Unicode • Non-logical symbol • Polish notation • Truth function • Truth table • Wikipedia:WikiProject Logic/Standards for notation References 1. "Named character references". HTML 5.1 Nightly. W3C. Retrieved 9 September 2015. 2. Although this character is available in LaTeX, the MediaWiki TeX system does not support it. 3. Quine, W.V. (1981): Mathematical Logic, §6 4. Hintikka, Jaakko (1998), The Principles of Mathematics Revisited, Cambridge University Press, p. 113, ISBN 9780521624985. Further reading • Józef Maria Bocheński (1959), A Précis of Mathematical Logic, trans., Otto Bird, from the French and German editions, Dordrecht, South Holland: D. Reidel. 
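As a small supplement to the table above, the semantics it describes for the basic connectives (for instance, that A ∧ B is true only when both A and B are true, and that A ⇒ B is false only when A is true and B is false) can be checked mechanically. The following Python sketch prints a truth table; the helper names implies, iff and xor are illustrative, not standard library functions:

    # Truth-table sketch for a few of the connectives defined in the table above.
    from itertools import product

    def implies(a, b):   # A => B: false only when A is true and B is false
        return (not a) or b

    def iff(a, b):       # A <=> B: true when A and B have the same truth value
        return a == b

    def xor(a, b):       # exclusive or: true when exactly one of A, B is true
        return a != b

    print("A      B      and    or     =>     <=>    xor")
    for a, b in product([True, False], repeat=2):
        row = [a, b, a and b, a or b, implies(a, b), iff(a, b), xor(a, b)]
        print("  ".join(f"{str(v):<5}" for v in row))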
External links • Named character entities in HTML 4.0
Madhava of Sangamagrama Mādhava of Sangamagrāma (Mādhavan)[5] (c. 1340 – c. 1425) was an Indian mathematician and astronomer who is considered the founder of the Kerala school of astronomy and mathematics. One of the greatest mathematician-astronomers of the Late Middle Ages, Madhava made pioneering contributions to the study of infinite series, calculus, trigonometry, geometry, and algebra. He was the first to use infinite series approximations for a range of trigonometric functions, which has been called the "decisive step onward from the finite procedures of ancient mathematics to treat their limit-passage to infinity".[1] Madhava of Sangamagrama Bornc. 1340[1][2][3] (or c. 1350[4]) Sangamagrama, Kingdom of Cochin (modern day Irinjalakuda, Kerala, India) Diedc. 1425 (aged 75-85) Cochin, Vijayanagara Empire (modern day Kerala, India) OccupationAstronomer-mathematician Known forDiscovery of power series Expansions of trigonometric Sine, Cosine and Arctangent functions Infinite series summation formulae for π Notable workGolavāda, Madhyāmanayanaprakāra, Veṇvāroha, Sphuṭacandrāpti TitleGolavid (Master of Spherics) Biography Little is known about Mādhava's life with certainty. However, from scattered references to Mādhava found in diverse manuscripts, historians of the Kerala school have pieced together information about the mathematician. In a manuscript preserved in the Oriental Institute, Baroda, Madhava has been referred to as Mādhavan vēṇvārōhādīnām karttā ... Mādhavan Ilaññippaḷḷi Emprān.[5] It has been noted that the epithet 'Emprān' refers to the Emprāntiri community, to which Madhava might have belonged. The term "Ilaññippaḷḷi" has been identified as a reference to the residence of Mādhava. This is corroborated by Mādhava himself. In his short work on the moon's positions titled Veṇvāroha, Mādhava says that he was born in a house named bakuḷādhiṣṭhita . . . vihāra.[6] This is clearly Sanskrit for Ilaññippaḷḷi. Ilaññi is the Malayalam name of the evergreen tree Mimusops elengi and the Sanskrit name for the same is Bakuḷa. Palli is a term for village. The Sanskrit house name bakuḷādhiṣṭhita . . . vihāra has also been interpreted as a reference to the Malayalam house name Iraññi ninna ppaḷḷi, and some historians have tried to identify it with one of two currently existing houses with the names Iriññanavaḷḷi and Iriññārapaḷḷi, both of which are located near Irinjalakuda town in central Kerala.[6] This identification is far-fetched because neither name has phonetic similarity or semantic equivalence to the word "Ilaññippaḷḷi".[7] Most of the writers of astronomical and mathematical works who lived after Madhava's period have referred to Madhava as "Sangamagrama Madhava" and as such it is important that the real import of the word "Sangamagrama" be made clear. The general view among many scholars is that Sangamagrama is the town of Irinjalakuda, some 70 kilometers south of the Nila river and about 70 kilometers north of Cochin.[7] It seems that there is not much concrete ground for this belief except perhaps the fact that the presiding deity of an early medieval temple in the town, the Koodalmanikyam Temple, is worshiped as Sangameswara, meaning the Lord of the Samgama, and so Samgamagrama can be interpreted as the village of Samgameswara. But there are several places in Karnataka with samgama or its equivalent kūḍala in their names and with a temple dedicated to Samgamḗsvara, the lord of the confluence.
(Kudalasangama in Bagalkot district is one such place with a celebrated temple dedicated to the Lord of the Samgama.)[7] There is a small town on the southern banks of the Nila river, around 10 kilometers upstream from Tirunavaya, called Kūḍallūr. The exact literal Sanskrit translation of this place name is Samgamagram: kūṭal in Malayalam means a confluence (which in Sanskrit is samgama) and ūr means a village (which in Sanskrit is grama). Also the place is at the confluence of the Nila river and its most important tributary, namely, the Kunti river. (There is no confluence of rivers near Irinjalakuada.) Incidentally there is still existing a Nambudiri (Malayali Brahmin) family by name Kūtallūr Mana a few kilometers away from the Kudallur village. The family has its origins in Kudallur village itself. For many generations this family hosted a great Gurukulam specialising in Vedanga.[7] That the only available manuscript of Sphuṭacandrāpti, a book authored by Madhava, was obtained from the manuscript collection of Kūtallūr Mana might strengthen the conjecture that Madhava might have had some association with Kūtallūr Mana.[8] Thus the most plausible possibility is that the forefathers of Madhava migrated from the Tulu land or thereabouts to settle in Kudallur village, which is situated on the southern banks of the Nila river not far from Tirunnavaya, a generation or two before his birth and lived in a house known as Ilaññippaḷḷi whose present identity is unknown.[7] Date There are also no definite evidences to pinpoint the period during which Madhava flourished. In his Venvaroha, Madhava gives a date in 1400 CE as the epoch. Madhava's pupil Parameshvara Nambudiri, the only known direct pupil of Madhava, is known to have completed his seminal work Drigganita in 1430 and the Paramesvara's date has been determined as c. 1360-1455. From such circumstantial evidences historians have assigned the date c. 1340 – c. 1425 to Madhava. Historiography Although there is some evidence of mathematical work in Kerala prior to Madhava (e.g., Sadratnamala c. 1300, a set of fragmentary results[9]), it is clear from citations that Madhava provided the creative impulse for the development of a rich mathematical tradition in medieval Kerala. However, except for a couple, most of Madhava's original works have been lost. He is referred to in the work of subsequent Kerala mathematicians, particularly in Nilakantha Somayaji's Tantrasangraha (c. 1500), as the source for several infinite series expansions, including sin θ and arctan θ. The 16th-century text Mahajyānayana prakāra (Method of Computing Great Sines) cites Madhava as the source for several series derivations for π. In Jyeṣṭhadeva's Yuktibhāṣā (c. 1530),[10] written in Malayalam, these series are presented with proofs in terms of the Taylor series expansions for polynomials like 1/(1+x2), with x = tan θ, etc. Thus, what is explicitly Madhava's work is a source of some debate. The Yukti-dipika (also called the Tantrasangraha-vyakhya), possibly composed by Sankara Variar, a student of Jyeṣṭhadeva, presents several versions of the series expansions for sin θ, cos θ, and arctan θ, as well as some products with radius and arclength, most versions of which appear in Yuktibhāṣā. For those that do not, Rajagopal and Rangachari have argued, quoting extensively from the original Sanskrit,[1] that since some of these have been attributed by Nilakantha to Madhava, some of the other forms might also be the work of Madhava. 
Others have speculated that the early text Karanapaddhati (c. 1375–1475), or the Mahajyānayana prakāra was written by Madhava, but this is unlikely.[3] Karanapaddhati, along with the even earlier Keralite mathematics text Sadratnamala, as well as the Tantrasangraha and Yuktibhāṣā, were considered in an 1834 article by C. M. Whish, which was the first to draw attention to their priority over Newton in discovering the Fluxion (Newton's name for differentials).[9] In the mid-20th century, the Russian scholar Jushkevich revisited the legacy of Madhava,[11] and a comprehensive look at the Kerala school was provided by Sarma in 1972.[12] Lineage There are several known astronomers who preceded Madhava, including Kǖṭalur Kizhār (2nd century),[13] Vararuci (4th century), and Śaṅkaranārāyaṇa (866 AD). It is possible that other unknown figures preceded him. However, we have a clearer record of the tradition after Madhava. Parameshvara was a direct disciple. According to a palm leaf manuscript of a Malayalam commentary on the Surya Siddhanta, Parameswara's son Damodara (c. 1400–1500) had Nilakantha Somayaji as one of his disciples. Jyeshtadeva was a disciple of Nilakantha. Achyutha Pisharadi of Trikkantiyur is mentioned as a disciple of Jyeṣṭhadeva, and the grammarian Melpathur Narayana Bhattathiri as his disciple.[10] Contributions If we consider mathematics as a progression from finite processes of algebra to considerations of the infinite, then the first steps towards this transition typically come with infinite series expansions. It is this transition to the infinite series that is attributed to Madhava. In Europe, the first such series were developed by James Gregory in 1667. Madhava's work is notable for the series, but what is truly remarkable is his estimate of an error term (or correction term).[14] This implies that he understood very well the limit nature of the infinite series. Thus, Madhava may have invented the ideas underlying infinite series expansions of functions, power series, trigonometric series, and rational approximations of infinite series.[15] However, as stated above, which results are precisely Madhava's and which are those of his successors is difficult to determine. The following presents a summary of results that have been attributed to Madhava by various scholars. Infinite series Among his many contributions, he discovered infinite series for the trigonometric functions of sine, cosine, arctangent, and many methods for calculating the circumference of a circle. One of Madhava's series is known from the text Yuktibhāṣā, which contains the derivation and proof of the power series for inverse tangent, discovered by Madhava.[16] In the text, Jyeṣṭhadeva describes the series in the following manner: The first term is the product of the given sine and radius of the desired arc divided by the cosine of the arc. The succeeding terms are obtained by a process of iteration when the first term is repeatedly multiplied by the square of the sine and divided by the square of the cosine. All the terms are then divided by the odd numbers 1, 3, 5, .... The arc is obtained by adding and subtracting respectively the terms of odd rank and those of even rank. It is laid down that the sine of the arc or that of its complement whichever is the smaller should be taken here as the given sine. 
Otherwise the terms obtained by this above iteration will not tend to the vanishing magnitude.[17] This yields: $r\theta ={\frac {r\sin \theta }{\cos \theta }}-(1/3)\,r\,{\frac {\left(\sin \theta \right)^{3}}{\left(\cos \theta \right)^{3}}}+(1/5)\,r\,{\frac {\left(\sin \theta \right)^{5}}{\left(\cos \theta \right)^{5}}}-(1/7)\,r\,{\frac {\left(\sin \theta \right)^{7}}{\left(\cos \theta \right)^{7}}}+\cdots $ or equivalently: $\theta =\tan \theta -{\frac {\tan ^{3}\theta }{3}}+{\frac {\tan ^{5}\theta }{5}}-{\frac {\tan ^{7}\theta }{7}}+\cdots $ This series is Gregory's series (named after James Gregory, who rediscovered it three centuries after Madhava). Even if we consider this particular series as the work of Jyeṣṭhadeva, it would pre-date Gregory by a century, and certainly other infinite series of a similar nature had been worked out by Madhava. Today, it is referred to as the Madhava-Gregory-Leibniz series.[17][18] Trigonometry Madhava composed an accurate table of sines, with values accurate to the seventh decimal place. Marking a quarter circle at twenty-four equal intervals, he gave the lengths of the half-chords (sines) corresponding to each of them. It is believed that he may have computed these values based on the series expansions:[4] $\sin \theta =\theta -{\frac {\theta ^{3}}{3!}}+{\frac {\theta ^{5}}{5!}}-{\frac {\theta ^{7}}{7!}}+\cdots $ and $\cos \theta =1-{\frac {\theta ^{2}}{2!}}+{\frac {\theta ^{4}}{4!}}-{\frac {\theta ^{6}}{6!}}+\cdots $ The value of π (pi) Main article: Madhava's correction term Madhava's work on the value of the mathematical constant π is cited in the Mahajyānayana prakāra ("Methods for the great sines"). While some scholars such as Sarma[10] feel that this book may have been composed by Madhava himself, it is more likely the work of a 16th-century successor.[4] This text attributes most of the expansions to Madhava, and gives the following infinite series expansion of π, now known as the Madhava-Leibniz series:[19][20] ${\frac {\pi }{4}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots =\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}}{2n-1}},$ which he obtained from the power-series expansion of the arc-tangent function. However, what is most impressive is that he also gave a correction term $R_{n}$ for the error after computing the sum up to $n$ terms,[4] namely: $R_{n}={\frac {(-1)^{n}}{4n}}$, or $R_{n}={\frac {(-1)^{n}\,n}{4n^{2}+1}}$, or $R_{n}={\frac {(-1)^{n}\left(n^{2}+1\right)}{4n^{3}+5n}}$, where the third correction leads to highly accurate computations of π. It has long been speculated how Madhava found these correction terms.[21] They are the first three convergents of a finite continued fraction, which, when combined with Madhava's original series evaluated to $n$ terms, yields about $3n/2$ correct digits: ${\frac {\pi }{4}}\approx 1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots +{\frac {(-1)^{n-1}}{2n-1}}+{\cfrac {(-1)^{n}}{4n+{\cfrac {1^{2}}{n+{\cfrac {2^{2}}{4n+{\cfrac {3^{2}}{n+{\cfrac {4^{2}}{\dots +{\cfrac {\dots }{\dots +{\cfrac {n^{2}}{n[4-3(n{\bmod {2}})]}}}}}}}}}}}}}}.$ The absolute value of the correction term in the next higher order is $|R_{n}|={\frac {4n^{3}+13n}{16n^{4}+56n^{2}+9}}.$ He also gave a more rapidly converging series by transforming the original infinite series of π, obtaining the infinite series $\pi ={\sqrt {12}}\left(1-{\frac {1}{3\cdot 3}}+{\frac {1}{5\cdot 3^{2}}}-{\frac {1}{7\cdot 3^{3}}}+\cdots \right).$ By using the first 21 terms to compute an approximation of π, he obtains a value correct to 11 decimal places (3.14159265359).[22] The value of 3.1415926535898, correct to 13 decimals, is sometimes attributed to Madhava,[23] but may be due to one of his followers.
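To make these numbers concrete, the following is a minimal Python sketch (an illustration added here, not taken from the cited texts; the function names are invented for this example) that evaluates the Madhava-Leibniz partial sum together with the third correction term quoted above, and the √12-transformed series with 21 terms:

```python
import math

def madhava_leibniz_corrected(n):
    """Partial sum of the Madhava-Leibniz series for pi/4, plus the third
    correction term R_n = (-1)^n * (n^2 + 1) / (4*n^3 + 5*n) quoted above."""
    partial = sum((-1) ** (k - 1) / (2 * k - 1) for k in range(1, n + 1))
    correction = (-1) ** n * (n * n + 1) / (4 * n ** 3 + 5 * n)
    return 4 * (partial + correction)

def madhava_sqrt12(terms=21):
    """Madhava's transformed series: pi = sqrt(12) * sum_k (-1)^k / ((2k+1) * 3^k)."""
    return math.sqrt(12) * sum(
        (-1) ** k / ((2 * k + 1) * 3 ** k) for k in range(terms)
    )

print(madhava_leibniz_corrected(10))  # ~3.1415927, roughly 5e-8 away from math.pi
print(madhava_sqrt12(21))             # ~3.14159265359, matching pi to about 11 decimals
print(math.pi)                        # 3.141592653589793 for comparison
```

With only ten terms, the corrected partial sum is already accurate to roughly seven decimal places, which illustrates the behaviour of the third correction term described above.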
These values were the most accurate approximations of π given since the 5th century (see History of numerical approximations of π). The text Sadratnamala appears to give the astonishingly accurate value of π = 3.14159265358979324 (correct to 17 decimal places). Based on this, R. Gupta has suggested that this text was also composed by Madhava.[3][22] Madhava also carried out investigations into other series for arc lengths and the associated approximations to rational fractions of π, found methods of polynomial expansion, discovered tests of convergence of infinite series, and analysed infinite continued fractions.[3] He also discovered the solutions of transcendental equations by iteration and found approximations of transcendental numbers by continued fractions.[3] Calculus Madhava laid the foundations for the development of calculus, which were further developed by his successors at the Kerala school of astronomy and mathematics.[15][24] (Certain ideas of calculus were known to earlier mathematicians.) Madhava also extended some results found in earlier works, including those of Bhāskara II.[24] However, the Kerala school mathematicians did not combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, or turn calculus into the powerful problem-solving tool we have today.[25] Madhava's works K. V. Sarma has identified Madhava as the author of the following works:[26][27] 1. Golavada 2. Madhyamanayanaprakara 3. Mahajyanayanaprakara (Method of Computing Great Sines) 4. Lagnaprakarana (लग्नप्रकरण) 5. Venvaroha (वेण्वारोह)[28] 6. Sphuṭacandrāpti (स्फुटचन्द्राप्ति) 7. Aganita-grahacara (अगणित-ग्रहचार) 8. Chandravakyani (चन्द्रवाक्यानि) (Table of Moon-mnemonics) Kerala School of Astronomy and Mathematics Main article: Kerala school of astronomy and mathematics The Kerala school of astronomy and mathematics flourished for at least two centuries beyond Madhava. In Jyeṣṭhadeva we find the notion of integration, termed sankalitam (lit. collection), as in the statement: ekadyekothara pada sankalitam samam padavargathinte pakuti,[18] which translates as the integral of a variable (pada) equals half that variable squared (varga); i.e., the integral of x dx is equal to x²/2 (a modern reading of this limit is sketched at the end of this section). This is clearly a start to the process of integral calculus. A related result states that the area under a curve is its integral. Most of these results pre-date similar results in Europe by several centuries. In many senses, Jyeshthadeva's Yuktibhāṣā may be considered the world's first calculus text.[9][15][24] The group also did much other work in astronomy; indeed, many more pages are devoted to astronomical computations than to analysis-related results.[10] The Kerala school also contributed much to linguistics (the relation between language and mathematics is an ancient Indian tradition, see Kātyāyana). The ayurvedic and poetic traditions of Kerala can also be traced back to this school. The famous poem Narayaniyam was composed by Narayana Bhattathiri.
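As a modern gloss on the sankalitam statement quoted above (a reading supplied here for clarity, not a formula drawn from the cited texts), dividing a variable $x$ into $n$ equal parts and summing the pieces gives $\sum _{k=1}^{n}{\frac {kx}{n}}\cdot {\frac {x}{n}}={\frac {x^{2}}{n^{2}}}\cdot {\frac {n(n+1)}{2}}\rightarrow {\frac {x^{2}}{2}}$ as $n\rightarrow \infty $, which is the sense in which the sum (sankalitam) of a variable equals half its square.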
Influence Madhava has been called "the greatest mathematician-astronomer of medieval India",[3] or "the founder of mathematical analysis; some of his discoveries in this field show him to have possessed extraordinary intuition".[29] O'Connor and Robertson state that a fair assessment of Madhava is that he took the decisive step towards modern classical analysis.[4] Possible propagation to Europe The Kerala school was well known in the 15th and 16th centuries, in the period of the first contact with European navigators on the Malabar Coast. At the time, the port of Muziris, near Sangamagrama, was a major center for maritime trade, and a number of Jesuit missionaries and traders were active in this region. Given the fame of the Kerala school, and the interest shown by some of the Jesuit groups during this period in local scholarship, some scholars, including G. Joseph of the University of Manchester, have suggested[30] that the writings of the Kerala school may have also been transmitted to Europe around this time, which was still about a century before Newton.[31] See also • Madhava Observatory • Madhava's sine table • Madhava series • Madhava's correction term • Venvaroha • Yuktibhāṣā • Kerala school of astronomy and mathematics • List of astronomers and mathematicians of the Kerala school • List of Indian mathematicians • Indian mathematics • History of calculus References 1. C. T. Rajagopal & M. S. Rangachari (1978). "On an Untapped Source of Medieval Keralese Mathematics". Archive for History of Exact Sciences. 18 (2): 101. doi:10.1007/BF00348142. S2CID 51861422. 2. Roy, Ranjan (1990). "The Discovery of the Series Formula for π by Leibniz, Gregory and Nilakantha" (PDF). Mathematics Magazine. 63 (5): 291–306. doi:10.2307/2690896. JSTOR 2690896. Archived from the original (PDF) on 24 February 2012. 3. Ian G. Pearce (2002). Madhava of Sangamagramma. MacTutor History of Mathematics archive. University of St Andrews. 4. J. J. O'Connor and E. F. Robertson (2000). "Madhava of Sangamagramma". MacTutor History of Mathematics archive. School of Mathematics and Statistics, University of St Andrews, Scotland. Archived from the original on 14 May 2006. Retrieved 8 September 2007. 5. K. V. Sarma (1972). A History of the Kerala School of Hindu Astronomy (in perspective). Hoshiarpur: Vishveshvaranand Institute of Sanskrit & Indological Studies, Panjab University. p. 51. 6. K. V. Sarma (1973). Computation of the True Moon by Madhava of Sangamagrama. Hoshiarpur: Vishveshvaranand Institute of Sanskrit and Indological Studies, Panjab University. p. 12. Accessed 1 January 2023. 7. P. P. Divakaran (2018). The Mathematics of India: Concepts, Methods, Connections. Cochin: Springer - Hindustan Book Agency. pp. 282–290. ISBN 978-981-13-1773-6. 8. K. V. Sarma (1973). Sputachandrapti: Computation of the True Moon by Madhava of Sangamagrama. Hoshiarpur, Punjab: Vishveshvaranand Institute of Sanskrit and Indological Studies, Panjab University. p. 8. 9. Charles Whish (1834). "On the Hindu Quadrature of the circle and the infinite series of the proportion of the circumference to the diameter exhibited in the four Sastras, the Tantra Sahgraham, Yucti Bhasha, Carana Padhati and Sadratnamala". Transactions of the Royal Asiatic Society of Great Britain and Ireland. Royal Asiatic Society of Great Britain and Ireland. 3 (3): 509–523. doi:10.1017/S0950473700001221. JSTOR 25581775. 10. K. V. Sarma; S. Hariharan (eds.). "A book on rationales in Indian Mathematics and Astronomy—An analytic appraisal" (PDF).
Yuktibhāṣā of Jyeṣṭhadeva. Archived from the original (PDF) on 28 September 2006. Retrieved 9 July 2006. 11. A. P. Jushkevich (1961). Geschichte der Mathematik im Mittelalter (German translation, Leipzig, 1964, of the Russian original, Moscow, 1961). Moscow. 12. K. V. Sarma (1972). A History of the Kerala School of Hindu Astronomy. Hoshiarpur. 13. Purananuru 229 14. Madhava extended Archimedes' work on the geometric Method of Exhaustion to measure areas and numbers such as π, with arbitrary accuracy and error limits, to an algebraic infinite series with a completely separate error term. C. T. Rajagopal and M. S. Rangachari (1986). "On medieval Keralese mathematics". Archive for History of Exact Sciences. 35 (2): 91–99. doi:10.1007/BF00357622. S2CID 121678430. 15. "Neither Newton nor Leibniz – The Pre-History of Calculus and Celestial Mechanics in Medieval Kerala". MAT 314. Canisius College. Archived from the original on 6 August 2006. Retrieved 9 July 2006. 16. "The Kerala School, European Mathematics and Navigation". Indian Mathematics. D.P. Agrawal—Infinity Foundation. Retrieved 9 July 2006. 17. R. C. Gupta (1973). "The Madhava-Gregory series". Math. Education. 7: B67–B70. 18. "Science and technology in free India" (PDF). Government of Kerala—Kerala Call, September 2004. Prof. C. G. Ramachandran Nair. Archived from the original (PDF) on 21 August 2006. Retrieved 9 July 2006. 19. George E. Andrews, Richard Askey, Ranjan Roy (1999). Special Functions. Cambridge University Press. p. 58. ISBN 0-521-78988-5. 20. Gupta, R. C. (1992). "On the remainder term in the Madhava-Leibniz's series". Ganita Bharati. 14 (1–4): 68–71. 21. T. Hayashi, T. Kusuba and M. Yano (1990). "The correction of the Madhava series for the circumference of a circle". Centaurus. 33: 149–174. 22. R. C. Gupta (1975). "Madhava's and other medieval Indian values of pi". Math. Education. 9 (3): B45–B48. 23. The 13-digit accurate value of π, 3.1415926535898, can be reached using the infinite series expansion of π/4 (the first sequence) by going up to n = 76. 24. "An overview of Indian mathematics". Indian Maths. School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 7 July 2006. 25. Katz, Victor J. (1 June 1995). "Ideas of Calculus in Islam and India". Mathematics Magazine. 68 (3): 163–174. doi:10.1080/0025570X.1995.11996307. ISSN 0025-570X. 26. Sarma, K. V. (1977). Contributions to the study of Kerala school of Hindu astronomy and mathematics. Hoshiarpur: V V R I. 27. David Edwin Pingree (1981). Census of the exact sciences in Sanskrit. A. Vol. 4. Philadelphia: American Philosophical Society. pp. 414–415. 28. K. Chandra Hari (2003). "Computation of the true moon by Madhva of Sangamagrama". Indian Journal of History of Science. 38 (3): 231–253. Retrieved 27 January 2010. 29. Joseph, George Gheverghese (October 2010) [1991]. The Crest of the Peacock: Non-European Roots of Mathematics (3rd ed.). Princeton University Press. ISBN 978-0-691-13526-7. 30. "Indians predated Newton 'discovery' by 250 years". Press release, University of Manchester. 13 August 2007. Archived from the original on 21 March 2008. Retrieved 5 September 2007. 31. D. F. Almeida, J. K. John and A. Zadorozhnyy (2001). "Keralese mathematics: its possible transmission to Europe and the consequential educational implications". Journal of Natural Geometry. 20 (1): 77–104.
External links • Biography on MacTutor
Samir Saker Samir H Saker is an Egyptian professor of mathematics at the Department of Mathematics, Faculty of Science, Mansoura University, Egypt. He is the manager of the IT Unit, Faculty of Science, Mansoura University, and an elected member of the American Mathematical Society and the European Mathematical Society.[1][2][3] Samir H Saker. Born: January 11, 1971, Egypt. Nationality: Egyptian. Alma mater: Mansoura University; Adam Mickiewicz University. Occupation(s): Academic, researcher. Early life and education Samir Saker was born on January 11, 1971. He obtained his B.Sc. and M.Sc. in Mathematics from Mansoura University, Egypt, in 1993 and 1997 respectively. He won a scholarship to pursue his PhD at Adam Mickiewicz University, Poznan, Poland, and graduated in 2002.[2][4][1] Career Samir Saker began his career as a demonstrator at the Department of Mathematics, Faculty of Science, Mansoura University, and rose to a professorship at the same university. A year after his B.Sc. (1994), he was appointed a demonstrator of mathematics. He became an assistant lecturer at the same institution a year after his M.Sc. (1998), a full lecturer in 2003, an associate professor of mathematics in 2008, and a professor in 2013.[4][1][2] Awards and memberships Samir Saker is an elected member of the American Mathematical Society, the European Mathematical Society, the Egyptian Mathematical Society, and the International Society of Difference Equations, and a member of the African Mathematical Union (AMU).[2] In 2003, he won the Shoman Award for Young Arab Scientists in Jordan. He won a US Fulbright scholarship to Trinity University in 2004. He won the Egyptian National State Prize twice, in 2005 and 2014, and the Amin Lotfy Award in Mathematics in 2009.[3][2] References 1. Samir, Saker. "curriculum vitae" (PDF). 2. "Samir Saker". Galala University (in Arabic). Retrieved 2022-06-18. 3. "Editorial Team | Open Journal of Engineering Science (ISSN: 2734-2115)". www.openjournalsnigeria.org.ng. Retrieved 2022-06-18. 4. "Samir H. Saker - Mathematician of the African Diaspora". www.math.buffalo.edu. Retrieved 2022-06-18.
Samit Dasgupta Samit Dasgupta is a professor of mathematics at Duke University working in algebraic number theory. Samit Dasgupta. Alma mater: Harvard University; University of California, Berkeley. Awards: Sloan Research Fellowship (2009). Fields: Mathematics. Institutions: Duke University. Thesis: Gross-Stark Units, Stark-Heegner Points, and Class Fields of Real Quadratic Fields (2004). Doctoral advisors: Ken Ribet, Henri Darmon. Biography Dasgupta graduated from Montgomery Blair High School in 1995 and placed fourth in the 1995 Westinghouse Science Talent Search with a project on Schinzel's hypothesis H.[1] He then attended Harvard University, where he received a bachelor's degree in 1999.[1][2] In 2004, Dasgupta received a PhD in mathematics from the University of California, Berkeley, under the supervision of Ken Ribet and Henri Darmon.[3] Dasgupta was previously a faculty member at the University of California, Santa Cruz.[1] As of 2020, he is a professor of mathematics at Duke University.[2][4] Research Dasgupta's research is focused on special values of L-functions, algebraic points on abelian varieties, and units in number fields.[5] In particular, Dasgupta's research has focused on the Stark conjectures and Heegner points.[3][6][7][8] Awards In 2009, Dasgupta received a Sloan Research Fellowship.[5] He was named a Fellow of the American Mathematical Society, in the 2022 class of fellows, "for contributions to number theory, in particular the theory of special values of classical and p-adic L-functions".[9] Selected publications • Darmon, Henri; Dasgupta, Samit (2006). "Elliptic units for real quadratic fields". Annals of Mathematics. 163 (1): 301–346. doi:10.4007/annals.2006.163.301. ISSN 0003-486X. • Dasgupta, Samit; Darmon, Henri; Pollack, Robert (2011). "Hilbert modular forms and the Gross-Stark conjecture". Annals of Mathematics. 174 (1): 439–484. doi:10.4007/annals.2011.174.1.12. ISSN 0003-486X. • Dasgupta, Samit; Kakde, Mahesh; Ventullo, Kevin (2018). "On the Gross–Stark Conjecture". Annals of Mathematics. 188 (3): 833. doi:10.4007/annals.2018.188.3.3. JSTOR 10.4007/annals.2018.188.3.3. S2CID 53554124. • Dasgupta, Samit; Spieß, Michael (2018). "Partial zeta values, Gross's tower of fields conjecture, and Gross–Stark units". Journal of the European Mathematical Society. 20 (11): 2643–2683. doi:10.4171/JEMS/821. ISSN 1435-9855. • Dasgupta, Samit (2016). "Factorization of p-adic Rankin L-series". Inventiones Mathematicae. 205 (1): 221–268. Bibcode:2016InMat.205..221D. doi:10.1007/s00222-015-0634-4. ISSN 0020-9910. S2CID 17144046. References 1. "Samit Dasgupta '95: Algebraic Number Theory". Montgomery Blair High School Magnet Foundation. Fall 2015. Archived from the original on November 3, 2020. Retrieved August 4, 2020. 2. "Samit Dasgupta". Duke University Department of Mathematics. Retrieved August 4, 2020. 3. Samit Dasgupta at the Mathematics Genealogy Project 4. "Samit Dasgupta". Duke University. Retrieved August 4, 2020. 5. "UC Santa Cruz Mathematician Samit Dasgupta Awarded Sloan Research Fellowship". Mathematical Association of America. February 20, 2009. Retrieved August 4, 2020. 6. Dasgupta, Samit; Darmon, Henri; Pollack, Robert (2011). "Hilbert modular forms and the Gross-Stark conjecture". Annals of Mathematics. 174 (1): 439–484. doi:10.4007/annals.2011.174.1.12. ISSN 0003-486X. 7. Dasgupta, Samit; Kakde, Mahesh; Ventullo, Kevin (2018). "On the Gross–Stark Conjecture". Annals of Mathematics. 188 (3): 833. doi:10.4007/annals.2018.188.3.3. JSTOR 10.4007/annals.2018.188.3.3.
S2CID 53554124. 8. Dasgupta, Samit; Spieß, Michael (2018). "Partial zeta values, Gross's tower of fields conjecture, and Gross–Stark units". Journal of the European Mathematical Society. 20 (11): 2643–2683. doi:10.4171/JEMS/821. ISSN 1435-9855. 9. "2022 Class of Fellows of the AMS". American Mathematical Society. Retrieved 2021-11-05. External links • Samit Dasgupta at the Mathematics Genealogy Project
Euclidean minimum spanning tree A Euclidean minimum spanning tree of a finite set of points in the Euclidean plane or higher-dimensional Euclidean space connects the points by a system of line segments with the points as endpoints, minimizing the total length of the segments. In it, any two points can reach each other along a path through the line segments. It can be found as the minimum spanning tree of a complete graph with the points as vertices and the Euclidean distances between points as edge weights. The edges of the minimum spanning tree meet at angles of at least 60°, at most six to a vertex. In higher dimensions, the number of edges per vertex is bounded by the kissing number of tangent unit spheres. The total length of the edges, for points in a unit square, is at most proportional to the square root of the number of points. Each edge lies in an empty region of the plane, and these regions can be used to prove that the Euclidean minimum spanning tree is a subgraph of other geometric graphs including the relative neighborhood graph and Delaunay triangulation. By constructing the Delaunay triangulation and then applying a graph minimum spanning tree algorithm, the minimum spanning tree of $n$ given planar points may be found in time $O(n\log n)$, as expressed in big O notation. This is optimal in some models of computation, although faster randomized algorithms exist for points with integer coordinates. For points in higher dimensions, finding an optimal algorithm remains an open problem. Definition and related problems A Euclidean minimum spanning tree, for a set of $n$ points in the Euclidean plane or Euclidean space, is a system of line segments, having only the given points as their endpoints, whose union includes all of the points in a connected set, and which has the minimum possible total length of any such system. Such a network cannot contain a polygonal ring of segments; if one existed, the network could be shortened by removing an edge of the polygon. Therefore, the minimum-length network forms a tree. This observation leads to the equivalent definition that a Euclidean minimum spanning tree is a tree of line segments between pairs of the given points, of minimum total length.[1] The same tree may also be described as a minimum spanning tree of a weighted complete graph, having the given points as its vertices and the distances between points as edge weights.[2] The same points may have more than one minimum spanning tree. For instance, for the vertices of a regular polygon, removing any edge of the polygon produces a minimum spanning tree.[3] Publications on the Euclidean minimum spanning tree commonly abbreviate it as "EMST".[4][5] They may also be called "geometric minimum spanning trees",[6][7] but that term may be used more generally for geometric spaces with non-Euclidean distances, such as Lp spaces.[8] When the context of Euclidean point sets is clear, they may be called simply "minimum spanning trees".[9][10][11] Several other standard geometric networks are closely related to the Euclidean minimum spanning tree: • The Steiner tree problem again seeks a system of line segments connecting all given points, but without requiring the segments to start and end only at given points. 
In this problem, additional points may be added as segment endpoints, allowing the Steiner tree to be shorter than the minimum spanning tree.[12] • In the Euclidean traveling salesperson path problem, the connecting line segments must start and end at the given points, like the spanning tree and unlike the Steiner tree; additionally, each point can touch at most two line segments, so the result forms a polygonal chain. Because of this restriction, the optimal path may be longer than the Euclidean minimum spanning tree, but is at most twice as long.[2] • Geometric spanners are low-weight networks that, like the minimum spanning tree, connect all of the points. Unlike the minimum spanning tree, all of these connecting paths are required to be short, having length proportional to the distance between the points they connect. To achieve this property, these networks generally have cycles and so are not trees.[13] Properties Angles and vertex degrees Whenever two edges of a Euclidean minimum spanning tree meet at a vertex, they must form an angle of 60° or more, with equality only when they form two sides of an equilateral triangle. This is because, for two edges forming any sharper angle, one of the two edges could be replaced by the third, shorter edge of the triangle they form, forming a tree with smaller total length.[14] In comparison, the Steiner tree problem has a stronger angle bound: an optimal Steiner tree has all angles at least 120°.[12] The same 60° angle bound also occurs in the kissing number problem, of finding the maximum number of unit spheres in Euclidean space that can be tangent to a central unit sphere without any two spheres intersecting (beyond a point of tangency). The center points of these spheres have a minimum spanning tree in the form of a star, with the central point adjacent to all other points. Conversely, for any vertex $v$ of any minimum spanning tree, one can construct non-overlapping unit spheres centered at $v$ and at points two units along each of its edges, with a tangency for each neighbor of $v$. Therefore, in $n$-dimensional space the maximum possible degree of a vertex (the number of spanning tree edges connected to it) equals the kissing number of spheres in $n$ dimensions.[15] Planar minimum spanning trees have degree at most six, and when a tree has degree six there is always another minimum spanning tree with maximum degree five.[7] Three-dimensional minimum spanning trees have degree at most twelve.[15] The only higher dimensions in which the exact value of the kissing number is known are four, eight, and 24 dimensions.[16] For points generated at random from a given continuous distribution, the minimum spanning tree is almost surely unique. The numbers of vertices of any given degree converge, for large numbers of vertices, to a constant times that number of vertices. The values of these constants depend on the degree and the distribution. However, even for simple cases—such as the number of leaves for points uniformly distributed in a unit square—their precise values are not known.[17] Empty regions For any edge $uv$ of any Euclidean minimum spanning tree, the lens (or vesica piscis) formed by intersecting the two circles with $uv$ as their radii cannot have any other given vertex $w$ in its interior. Put another way, if any tree has an edge $uv$ whose lens contains a third point $w$, then it is not of minimum length. For, by the geometry of the two circles, $w$ would be closer to both $u$ and $v$ than they are to each other.
If edge $uv$ were removed from the tree, $w$ would remain connected to one of $u$ and $v$, but not the other. Replacing the removed edge $uv$ by $uw$ or $vw$ (whichever of these two edges reconnects $w$ to the vertex from which it was disconnected) would produce a shorter tree.[12] For any edge $uv$ of any Euclidean minimum spanning tree, the rhombus with angles of 60° and 120°, having $uv$ as its long diagonal, is disjoint from the rhombi formed analogously by all other edges. Two edges sharing an endpoint cannot have overlapping rhombi, because that would imply an edge angle sharper than 60°, and two disjoint edges cannot have overlapping rhombi; if they did, the longer of the two edges could be replaced by a shorter edge among the same four vertices.[12] Supergraphs Certain geometric graphs have definitions involving empty regions in point sets, from which it follows that they contain every edge that can be part of a Euclidean minimum spanning tree. These include: • The relative neighborhood graph, which has an edge between any pair of points whenever the lens they define is empty. • The Gabriel graph, which has an edge between any pair of points whenever the circle having the pair as a diameter is empty. • The Delaunay triangulation, which has an edge between any pair of points whenever there exists an empty circle having the pair as a chord. • The Urquhart graph, formed from the Delaunay triangulation by removing the longest edge of each triangle. For each remaining edge, the vertices of the Delaunay triangles that use that edge cannot lie within the empty lune of the relative neighborhood graph. Because the empty-region criteria for these graphs are progressively weaker, these graphs form an ordered sequence of subgraphs. That is, using "⊆" to denote the subset relationship among their edges, these graphs have the relations: Euclidean minimum spanning tree ⊆ relative neighborhood graph ⊆ Urquhart graph ⊆ Gabriel graph ⊆ Delaunay triangulation.[18][19] Another graph guaranteed to contain the minimum spanning tree is the Yao graph, determined for points in the plane by dividing the plane around each point into six 60° wedges and connecting each point to the nearest neighbor in each wedge. The resulting graph contains the relative neighborhood graph, because two vertices with an empty lens must be the nearest neighbors to each other in their wedges. As with many of the other geometric graphs above, this definition can be generalized to higher dimensions, and (unlike the Delaunay triangulation) its generalizations always include a linear number of edges.[20][21] Total length For $n$ points in the unit square (or any other fixed shape), the total length of the minimum spanning tree edges is $O({\sqrt {n}})$. Some sets of points, such as points evenly spaced in a ${\sqrt {n}}\times {\sqrt {n}}$ grid, attain this bound.[12] For points in a unit hypercube in $d$-dimensional space, the corresponding bound is $O(n^{(d-1)/d})$.[22] The same bound applies to the expected total length of the minimum spanning tree for $n$ points chosen uniformly and independently from a unit square or unit hypercube.[23] Returning to the unit square, the sum of squared edge lengths of the minimum spanning tree is $O(1)$. This bound follows from the observation that the edges have disjoint rhombi, with area proportional to the edge lengths squared. 
The $O({\sqrt {n}})$ bound on total length follows by application of the Cauchy–Schwarz inequality.[12] Another interpretation of these results is that the average edge length for any set of points in a unit square is $O(1/{\sqrt {n}})$, at most proportional to the spacing of points in a regular grid; and that for random points in a unit square the average length is proportional to $1/{\sqrt {n}}$. However, in the random case, with high probability the longest edge has length approximately ${\sqrt {\frac {\log n}{\pi n}}},$ longer than the average by a non-constant factor. With high probability, the longest edge forms a leaf of the spanning tree, and connects a point far from all the other points to its nearest neighbor. For large numbers of points, the distribution of the longest edge length around its expected value converges to a Laplace distribution.[24] Any geometric spanner, a subgraph of a complete geometric graph whose shortest paths approximate the Euclidean distance, must have total edge length at least as large as that of the minimum spanning tree, and one of the standard quality measures for a geometric spanner is the ratio between its total length and that of the minimum spanning tree for the same points. Several methods for constructing spanners, such as the greedy geometric spanner, achieve a constant bound for this ratio.[13] It has been conjectured that the Steiner ratio, the largest possible ratio between the total length of a minimum spanning tree and Steiner tree for the same set of points in the plane, is $2/{\sqrt {3}}\approx 1.1547$, the ratio for three points in an equilateral triangle.[12] Subdivision If every edge of a Euclidean minimum spanning tree is subdivided, by adding a new point at its midpoint, then the resulting tree is still a minimum spanning tree of the augmented point set. Repeating this subdivision process allows a Euclidean minimum spanning tree to be subdivided arbitrarily finely. However, subdividing only some of the edges, or subdividing the edges at points other than the midpoint, may produce a point set for which the subdivided tree is not the minimum spanning tree.[25] Computational complexity For points in any dimension, the minimum spanning tree can be constructed in time $O(n^{2})$ by constructing a complete graph with an edge between every pair of points, weighted by Euclidean distance, and then applying a graph minimum spanning tree algorithm such as the Prim–Dijkstra–Jarník algorithm or Borůvka's algorithm on it. These algorithms can be made to take time $O(n^{2})$ on complete graphs, unlike another common choice, Kruskal's algorithm, which is slower because it involves sorting all distances.[13] For points in low-dimensional spaces, the problem may be solved more quickly, as detailed below. Computing Euclidean distances involves a square root calculation. In any comparison of edge weights, comparing the squares of the Euclidean distances, instead of the distances themselves, yields the same ordering, and so does not change the rest of the tree's computation. This shortcut speeds up calculation and allows a minimum spanning tree for points with integer coordinates to be constructed using only integer arithmetic.[20] Two dimensions A faster approach to finding the minimum spanning tree of planar points uses the property that it is a subgraph of the Delaunay triangulation: 1. Compute the Delaunay triangulation, which can be done in $O(n\log n)$ time. Because the Delaunay triangulation is a planar graph, it has at most $3n-6$ edges. 2.
Label each edge with its (squared) length. 3. Run a graph minimum spanning tree algorithm. Since there are $O(n)$ edges, this requires $O(n\log n)$ time using any of the standard minimum spanning tree algorithms. The result is an algorithm taking $O(n\log n)$ time,[2] optimal in certain models of computation (see below). If the input coordinates are integers and can be used as array indices, faster algorithms are possible: the Delaunay triangulation can be constructed by a randomized algorithm in $O(n\log \log n)$ expected time.[26] Additionally, since the Delaunay triangulation is a planar graph, its minimum spanning tree can be found in linear time by a variant of Borůvka's algorithm that removes all but the cheapest edge between each pair of components after each stage of the algorithm.[13][27] Therefore, the total expected time for this algorithm is $O(n\log \log n)$.[26] In the other direction, the Delaunay triangulation can be constructed from the minimum spanning tree in the near-linear time bound $O(n\log ^{*}n)$, where $\log ^{*}$ denotes the iterated logarithm.[28] Higher dimensions The problem can also be generalized to $n$ points in the $d$-dimensional space $\mathbb {R} ^{d}$. In higher dimensions, the connectivity determined by the Delaunay triangulation (which, likewise, partitions the convex hull into $d$-dimensional simplices) contains the minimum spanning tree; however, the triangulation might contain the complete graph.[4] Therefore, finding the Euclidean minimum spanning tree as a spanning tree of the complete graph or as a spanning tree of the Delaunay triangulation both take $O(dn^{2})$ time. For three dimensions the minimum spanning tree can be found in time $O{\bigl (}(n\log n)^{4/3}{\bigr )}$, and in any greater dimension, in time $O\left(n^{2-{\frac {2}{\lceil d/2\rceil +1}}+\varepsilon }\right)$ for any $\varepsilon >0$—faster than the quadratic time bound for the complete graph and Delaunay triangulation algorithms.[4] The optimal time complexity for higher-dimensional minimum spanning trees remains unknown,[29] but is closely related to the complexity of computing bichromatic closest pairs. In the bichromatic closest pair problem, the input is a set of points, given two different colors (say, red and blue). The output is a pair of a red point and a blue point with the minimum possible distance. This pair always forms one of the edges in the minimum spanning tree. Therefore, the bichromatic closest pair problem can be solved in the amount of time that it takes to construct a minimum spanning tree and scan its edges for the shortest red–blue edge. Conversely, for any red–blue coloring of any subset of a given set of points, the bichromatic closest pair produces one edge of the minimum spanning tree of the subset. 
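As a purely illustrative aside (a brute-force sketch added here, not one of the algorithms cited in this section), the bichromatic closest pair subproblem just described can be written in a few lines of Python with NumPy; the methods discussed next replace this quadratic scan with far more careful machinery:

```python
import numpy as np

def bichromatic_closest_pair(red, blue):
    """Return (i, j, distance) for the red point i and blue point j that lie
    closest to each other; a quadratic-time illustration of the subproblem."""
    red = np.asarray(red, dtype=float)
    blue = np.asarray(blue, dtype=float)
    # Distance from every red point to every blue point, as an |R| x |B| matrix.
    dists = np.linalg.norm(red[:, None, :] - blue[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(dists), dists.shape)
    return i, j, dists[i, j]
```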
By carefully choosing a sequence of colorings of subsets, and finding the bichromatic closest pair of each subproblem, the minimum spanning tree may be found in time proportional to the optimal time for finding bichromatic closest pairs for the same number of points, whatever that optimal time turns out to be.[4][11] For uniformly random point sets in any bounded dimension, the Yao graph[20] or Delaunay triangulation have linear expected numbers of edges, are guaranteed to contain the minimum spanning tree, and can be constructed in linear expected time.[21][6][30] From these graphs, the minimum spanning tree itself may be constructed in linear time, by using a randomized linear time algorithm for graph minimum spanning trees.[31] However, the poor performance of these methods on inputs coming from clustered data has led algorithm engineering researchers to develop methods with a somewhat slower $O(n\log n)$ time bound, for random inputs or inputs whose distances and clustering resemble those of random data, while exhibiting better performance on real-world data.[8][32][5] A well-separated pair decomposition is a family of pairs of subsets of the given points, so that every pair of points belongs to one of these pairs of subsets, and so that all pairs of points coming from the same pair of subsets have approximately the same length. It is possible to find a well-separated pair decomposition with a linear number of subsets, and a representative pair of points for each subset, in time $O(n\log n)$. The minimum spanning tree of the graph formed by these representative pairs is then an approximation to the minimum spanning tree. Using these ideas, a $(1+\varepsilon )$-approximation to the minimum spanning tree may be found in $O(n\log n)$ time, for constant $\varepsilon $. More precisely, by choosing each representative pair to approximate the closest pair in its equivalence class, and carefully varying the quality of this approximation for different pairs, the dependence on $\varepsilon $ in the time bound can be given as $O(n\log n+(\varepsilon ^{-2}\log ^{2}{\tfrac {1}{\varepsilon }})n),$ for any fixed dimension.[33] Dynamic and kinetic The Euclidean minimum spanning tree has been generalized in many different ways to systems of moving or changing points: • If a set of points undergoes a sequence of dynamic insertions or deletions of points, each of these updates induces a bounded amount of change to the minimum spanning tree of the points.
When the update sequence is known in advance, for points in the plane, the change after each insertion or deletion can be found in time $O(\log ^{2}n)$ per insertion or deletion.[34] When the updates must be handled in an online manner, a slower (but still poly-logarithmic) $O(\log ^{10}n)$ time bound is known.[35] For higher-dimensional versions of the problem the time per update is slower, but still sublinear.[36] • For $n$ points moving linearly with constant speed, or with more general algebraic motions, the minimum spanning tree will change by a sequence of swaps, in which one edge is removed and another replaces it at a point in time where both have equal length.[37] For linear motions, the number of changes is at most slightly larger than $n^{25/9}$.[38] For more general algebraic motions, there is a near-cubic upper bound on the number of swaps, based on the theory of Davenport–Schinzel sequences.[39] • The minimum moving spanning tree problem again concerns points moving linearly with constant speed, over an interval of time, and seeks a single tree that minimizes the maximum sum of weights occurring at any instant during this interval. It is NP-hard to compute exactly, but can be approximated to within a factor of two in polynomial time.[40] • The kinetic Euclidean minimum spanning tree problem asks for a kinetic data structure that can maintain the minimum spanning tree as its points undergo both continuous motions and insertions and deletions. Several papers have studied such structures,[41][42][43][44][45] and a kinetic structure for algebraically moving points with near-cubic total time, nearly matching the bound on the number of swaps, is known.[44] Lower bound An asymptotic lower bound of $\Omega (n\log n)$ for the Euclidean minimum spanning tree problem can be established in restricted models of computation. These include the algebraic decision tree and algebraic computation tree models, in which the algorithm has access to the input points only through certain restricted primitives that perform simple algebraic computations on their coordinates. In these models, the closest pair of points problem requires $\Omega (n\log n)$ time, but the closest pair is necessarily an edge of the minimum spanning tree, so the minimum spanning tree also requires this much time. Therefore, algorithms for constructing the planar minimum spanning tree in time $O(n\log n)$ within this model, for instance by using the Delaunay triangulation, are optimal.[46] However, these lower bounds do not apply to models of computation with integer point coordinates, in which bitwise operations and table indexing operations on those coordinates are permitted. In these models, faster algorithms are possible, as described above.[26] Applications An obvious application of Euclidean minimum spanning trees is to find the cheapest network of wires or pipes to connect a set of places, assuming the links cost a fixed amount per unit length.
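For small instances of such network-design problems, the two-dimensional pipeline described above (Delaunay triangulation followed by a graph minimum spanning tree step) can be sketched in Python with SciPy and NumPy; this is a minimal illustration under those assumptions, not a tuned implementation of any of the cited algorithms:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def euclidean_mst_edges(points):
    """Euclidean MST of 2-D points: build the Delaunay edge graph weighted by
    Euclidean length, then run a graph minimum spanning tree algorithm on it."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    # Collect the undirected edges of every Delaunay triangle.
    edges = set()
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))
    rows, cols = zip(*edges)
    weights = np.linalg.norm(points[list(rows)] - points[list(cols)], axis=1)
    n = len(points)
    graph = coo_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph)  # sparse matrix holding the chosen edges
    return list(zip(*mst.nonzero()))

rng = np.random.default_rng(0)
pts = rng.random((100, 2))
print(len(euclidean_mst_edges(pts)))  # a spanning tree on 100 points has 99 edges
```

Sorting the returned edges by length also yields the merge order used by the single-linkage clustering application discussed below.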
The first publications on minimum spanning trees more generally concerned a geographic version of the problem, involving the design of an electrical grid for southern Moravia,[47] and an application to minimizing wire lengths in circuits was described in 1957 by Loberman and Weinberger.[48] Minimum spanning trees are closely related to single-linkage clustering, one of several methods for hierarchical clustering. The edges of a minimum spanning tree, sorted by their length, give the order in which to merge clusters into larger clusters in this clustering method. Once these edges have been found, by any algorithm, they may be used to construct the single-linkage clustering in time $O(n\log n)$.[1] Although the long thin cluster shapes produced by single-linkage clustering can be a bad fit for certain types of data, such as mixtures of Gaussian distributions, it can be a good choice in applications where the clusters themselves are expected to have long thin shapes, such as in modeling the dark matter halos of galaxies.[5] In geographic information science, several researcher groups have used minimum spanning trees of the centroids of buildings to identify meaningful clusters of buildings, for instance by removing edges identified in some other way as inconsistent.[49] Minimum spanning trees have also been used to infer the shape of curves in the plane, given points sampled along the curve. For a smooth curve, sampled more finely than its local feature size, the minimum spanning tree will form a path connecting consecutive points along the curve. More generally, similar methods can recognize curves drawn in a dotted or dashed style rather than as a single connected set. Applications of this curve-finding technique include particle physics, in automatically identifying the tracks left by particles in a bubble chamber.[50] More sophisticated versions of this idea can find curves from a cloud of noisy sample points that roughly follows the curve outline, by using the topology of the spanning tree to guide a moving least squares method.[51] Another application of minimum spanning trees is a constant-factor approximation algorithm for the Euclidean traveling salesman problem, the problem of finding the shortest polygonalization of a point set. Walking around the boundary of the minimum spanning tree can approximate the optimal traveling salesman tour within a factor of two of the optimal length.[2] However, more accurate polynomial-time approximation schemes are known for this problem.[52] In wireless ad hoc networks, broadcasting messages along paths in a minimum spanning tree can be an accurate approximation to the minimum-energy broadcast routing, which is, again, hard to compute exactly.[53][54][55][56] Realization The realization problem for Euclidean minimum spanning trees takes an abstract tree as input and seeks a geometric location for each vertex of the tree (in a space of some fixed dimension), such that the given tree equals the minimum spanning tree of those points. Not every abstract tree has such a realization; for instance, the tree must obey the kissing number bound on the degree of each vertex. Additional restrictions exist; for instance, it is not possible for a planar minimum spanning tree to have a degree-six vertex adjacent to a vertex of degree five or six.[7] Determining whether a two-dimensional realization exists is NP-hard. 
However, the proof of hardness depends on the fact that degree-six vertices in a tree have a very restricted set of realizations: the neighbors of such a vertex must be placed on the vertices of a regular hexagon centered at that vertex.[57] Indeed, for trees of maximum degree five, a planar realization always exists.[7] Similarly, for trees of maximum degree ten, a three-dimensional realization always exists.[10] For these realizations, some trees may require edges of exponential length and bounding boxes of exponential area relative to the length of their shortest edge.[58] Trees of maximum degree four have smaller planar realizations, with polynomially bounded edge lengths and bounding boxes.[9] See also • Rectilinear minimum spanning tree, a minimum spanning tree with distances measured using taxicab geometry References 1. Gower, J. C.; Ross, G. J. S. (1969), "Minimum spanning trees and single linkage cluster analysis", Applied Statistics, 18 (1): 54–61, doi:10.2307/2346439, JSTOR 2346439, MR 0242315 2. Shamos, Michael Ian; Hoey, Dan (1975), "Closest-point problems", 16th Annual Symposium on Foundations of Computer Science, Berkeley, California, USA, October 13-15, 1975, IEEE Computer Society, pp. 151–162, doi:10.1109/SFCS.1975.8, MR 0426498, S2CID 40615455 3. Bose, Prosenjit; Devroye, Luc; Evans, William; Kirkpatrick, David (2006), "On the spanning ratio of Gabriel graphs and β-skeletons", SIAM Journal on Discrete Mathematics, 20 (2): 412–427, doi:10.1137/S0895480197318088, MR 2257270 4. Agarwal, P. K.; Edelsbrunner, H.; Schwarzkopf, O.; Welzl, E. (1991), "Euclidean minimum spanning trees and bichromatic closest pairs", Discrete & Computational Geometry, Springer, 6 (1): 407–422, doi:10.1007/BF02574698, MR 1115099 5. March, William B.; Ram, Parikshit; Gray, Alexander G. (2010), "Fast Euclidean minimum spanning tree: algorithm, analysis, and applications", in Rao, Bharat; Krishnapuram, Balaji; Tomkins, Andrew; Yang, Qiang (eds.), Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, July 25-28, 2010, pp. 603–612, doi:10.1145/1835804.1835882, S2CID 186025 6. Clarkson, Kenneth L. (1989), "An algorithm for geometric minimum spanning trees requiring nearly linear expected time", Algorithmica, 4 (1–4): 461–469, doi:10.1007/BF01553902, MR 1019387, S2CID 22176641 7. Monma, Clyde; Suri, Subhash (1992), "Transitions in geometric minimum spanning trees", Discrete & Computational Geometry, 8 (3): 265–293, doi:10.1007/BF02293049, MR 1174358, S2CID 30101649 8. Narasimhan, Giri; Zachariasen, Martin; Zhu, Jianlin (2000), "Experiments with computing geometric minimum spanning trees", Proceedings of the 2nd Workshop on Algorithm Engineering and Experiments, pp. 183–196 9. Frati, Fabrizio; Kaufmann, Michael (2011), "Polynomial area bounds for MST embeddings of trees", Computational Geometry: Theory and Applications, 44 (9): 529–543, doi:10.1016/j.comgeo.2011.05.005, MR 2819643, S2CID 5634139 10. King, James A. (2006), "Realization of degree 10 minimum spanning trees in 3-space" (PDF), Proceedings of the 18th Annual Canadian Conference on Computational Geometry, CCCG 2006, August 14-16, 2006, Queen's University, Ontario, Canada, pp. 39–42 11. Krznaric, Drago; Levcopoulos, Christos; Nilsson, Bengt J. (1999), "Minimum spanning trees in $d$ dimensions", Nordic Journal of Computing, 6 (4): 446–461, MR 1736451. 
Sample-continuous process In mathematics, a sample-continuous process is a stochastic process whose sample paths are almost surely continuous functions. Definition Let (Ω, Σ, P) be a probability space. Let X : I × Ω → S be a stochastic process, where the index set I and state space S are both topological spaces. Then the process X is called sample-continuous (or almost surely continuous, or simply continuous) if the map X(ω) : I → S is continuous as a function of topological spaces for P-almost all ω in Ω. In many examples, the index set I is an interval of time, [0, T] or [0, +∞), and the state space S is the real line or n-dimensional Euclidean space Rn. Examples • Brownian motion (the Wiener process) on Euclidean space is sample-continuous. • For "nice" parameters of the equations, solutions to stochastic differential equations are sample-continuous. See the existence and uniqueness theorem in the stochastic differential equations article for some sufficient conditions to ensure sample continuity. • The process X : [0, +∞) × Ω → R that makes equiprobable jumps up or down every unit time according to ${\begin{cases}X_{t}\sim \mathrm {Unif} (\{X_{t-1}-1,X_{t-1}+1\}),&t{\mbox{ an integer;}}\\X_{t}=X_{\lfloor t\rfloor },&t{\mbox{ not an integer;}}\end{cases}}$ is not sample-continuous. In fact, it is surely discontinuous. Properties • For sample-continuous processes, the finite-dimensional distributions determine the law, and vice versa. See also • Continuous stochastic process References • Kloeden, Peter E.; Platen, Eckhard (1992). Numerical solution of stochastic differential equations. Applications of Mathematics (New York) 23. Berlin: Springer-Verlag. pp. 38–39. ISBN 3-540-54062-8.
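The discontinuous example above (a simple random walk held constant between integer times) is easy to simulate numerically. A minimal sketch in Python with NumPy; the path length and random seed are arbitrary choices for illustration:

```python
import numpy as np

def jump_process_path(n_steps=10, seed=0):
    """One sample path of the example process: X_0 = 0, an equiprobable +/-1 jump
    at each integer time, and X_t = X_floor(t) in between (piecewise constant)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps + 1)
    x[1:] = np.cumsum(rng.choice([-1, 1], size=n_steps))
    return x  # x[k] is the value of the path on the interval [k, k+1)

# Every path has unit jumps at integer times, so no path is continuous.
print(jump_process_path())
```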
Sample mean and covariance The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables. The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases. The term "sample mean" can also be used to refer to a vector of average values when the statistician is looking at the values of several variables in the sample, e.g. the sales, profits, and employees of a sample of Fortune 500 companies. In this case, there is not just a sample variance for each variable but a sample variance-covariance matrix (or simply covariance matrix) showing also the relationship between each pair of variables. This would be a 3×3 matrix when 3 variables are being considered. The sample covariance is useful in judging the reliability of the sample means as estimators and is also useful as an estimate of the population covariance matrix. Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics to represent the location and dispersion of the distribution of values in the sample, and to estimate the values for the population. Definition of the sample mean Further information: Arithmetic mean The sample mean is the average of the values of a variable in a sample, which is the sum of those values divided by the number of values. Using mathematical notation, if a sample of N observations on variable X is taken from the population, the sample mean is: ${\bar {X}}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}.$ Under this definition, if the sample (1, 4, 1) is taken from the population (1,1,3,4,0,2,1,0), then the sample mean is ${\bar {x}}=(1+4+1)/3=2$, as compared to the population mean of $\mu =(1+1+3+4+0+2+1+0)/8=12/8=1.5$. Even if a sample is random, it is rarely perfectly representative, and other samples would have other sample means even if the samples were all from the same population. The sample (2, 1, 0), for example, would have a sample mean of 1. If the statistician is interested in K variables rather than one, each observation having a value for each of those K variables, the overall sample mean consists of K sample means for individual variables. Let $x_{ij}$ be the ith independently drawn observation (i=1,...,N) on the jth random variable (j=1,...,K). These observations can be arranged into N column vectors, each with K entries, with the K×1 column vector giving the i-th observations of all variables being denoted $\mathbf {x} _{i}$ (i=1,...,N). 
The sample mean vector $\mathbf {\bar {x}} $ is a column vector whose j-th element ${\bar {x}}_{j}$ is the average value of the N observations of the jth variable: ${\bar {x}}_{j}={\frac {1}{N}}\sum _{i=1}^{N}x_{ij},\quad j=1,\ldots ,K.$ Thus, the sample mean vector contains the average of the observations for each variable, and is written $\mathbf {\bar {x}} ={\frac {1}{N}}\sum _{i=1}^{N}\mathbf {x} _{i}={\begin{bmatrix}{\bar {x}}_{1}\\\vdots \\{\bar {x}}_{j}\\\vdots \\{\bar {x}}_{K}\end{bmatrix}}$ Definition of sample covariance See also: Sample variance The sample covariance matrix is a K-by-K matrix $\textstyle \mathbf {Q} =\left[q_{jk}\right]$ with entries $q_{jk}={\frac {1}{N-1}}\sum _{i=1}^{N}\left(x_{ij}-{\bar {x}}_{j}\right)\left(x_{ik}-{\bar {x}}_{k}\right),$ where $q_{jk}$ is an estimate of the covariance between the jth variable and the kth variable of the population underlying the data. In terms of the observation vectors, the sample covariance is $\mathbf {Q} ={1 \over {N-1}}\sum _{i=1}^{N}(\mathbf {x} _{i}-\mathbf {\bar {x}} )(\mathbf {x} _{i}-\mathbf {\bar {x}} )^{\mathrm {T} }.$ Alternatively, the observation vectors can be arranged as the columns of a matrix $\mathbf {F} ={\begin{bmatrix}\mathbf {x} _{1}&\mathbf {x} _{2}&\dots &\mathbf {x} _{N}\end{bmatrix}}$, which is a matrix of K rows and N columns; the sample covariance matrix can then be computed as $\mathbf {Q} ={\frac {1}{N-1}}(\mathbf {F} -\mathbf {\bar {x}} \,\mathbf {1} _{N}^{\mathrm {T} })(\mathbf {F} -\mathbf {\bar {x}} \,\mathbf {1} _{N}^{\mathrm {T} })^{\mathrm {T} }$, where $\mathbf {1} _{N}$ is an N by 1 vector of ones. If the observations are arranged as rows instead of columns, so $\mathbf {\bar {x}} $ is now a 1×K row vector and $\mathbf {M} =\mathbf {F} ^{\mathrm {T} }$ is an N×K matrix whose column j is the vector of N observations on variable j, then applying transposes in the appropriate places yields $\mathbf {Q} ={\frac {1}{N-1}}(\mathbf {M} -\mathbf {1} _{N}\mathbf {\bar {x}} )^{\mathrm {T} }(\mathbf {M} -\mathbf {1} _{N}\mathbf {\bar {x}} ).$ Like covariance matrices of random vectors, sample covariance matrices are positive semi-definite. To prove this, note that for any matrix $\mathbf {A} $ the matrix $\mathbf {A} ^{T}\mathbf {A} $ is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of the $\mathbf {x} _{i}-\mathbf {\bar {x}} $ vectors is K. Unbiasedness The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\textstyle \mathbf {X} $, a row vector whose jth element (j = 1, ..., K) is one of the random variables.[1] The sample covariance matrix has $\textstyle N-1$ in the denominator rather than $\textstyle N$ due to a variant of Bessel's correction: in short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations. If the population mean $\operatorname {E} (\mathbf {X} )$ is known, the analogous unbiased estimate $q_{jk}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{ij}-\operatorname {E} (X_{j})\right)\left(x_{ik}-\operatorname {E} (X_{k})\right),$ using the population mean, has $\textstyle N$ in the denominator. This is an example of why in probability and statistics it is essential to distinguish between random variables (upper case letters) and realizations of the random variables (lower case letters).
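As a concrete illustration of the definitions above, the following sketch (Python with NumPy, using a small made-up data matrix) computes the sample mean vector and the sample covariance matrix with the N − 1 denominator, and checks the result against numpy.cov; the data values are arbitrary.

```python
import numpy as np

# Hypothetical data: N = 5 observations (rows) on K = 3 variables (columns).
M = np.array([[1.0, 2.0, 0.5],
              [4.0, 0.0, 1.5],
              [1.0, 3.0, 2.0],
              [2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])
N, K = M.shape

x_bar = M.mean(axis=0)                 # sample mean vector (length K)
centered = M - x_bar                   # each row is x_i - x_bar
Q = centered.T @ centered / (N - 1)    # K-by-K sample covariance matrix

# numpy.cov treats rows as variables by default; rowvar=False matches our layout
# and also uses the N - 1 (Bessel-corrected) denominator.
assert np.allclose(Q, np.cov(M, rowvar=False))
print(x_bar)
print(Q)
```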
The maximum likelihood estimate of the covariance $q_{jk}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{ij}-{\bar {x}}_{j}\right)\left(x_{ik}-{\bar {x}}_{k}\right)$ for the Gaussian distribution case has N in the denominator as well. The ratio of 1/N to 1/(N − 1) approaches 1 for large N, so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large. Distribution of the sample mean Main article: Standard error of the mean For each random variable, the sample mean is a good estimator of the population mean, where a "good" estimator is defined as being efficient and unbiased. Of course, the estimate will likely not equal the true value of the population mean, since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution. For a random sample of N observations on the jth random variable, the sample mean's distribution itself has mean equal to the population mean $E(X_{j})$ and variance equal to $\sigma _{j}^{2}/N$, where $\sigma _{j}^{2}$ is the population variance. The arithmetic mean of a population, or population mean, is often denoted μ.[2] The sample mean ${\bar {x}}$ (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. For a random sample of n independent observations, the expected value of the sample mean is $\operatorname {E} ({\bar {x}})=\mu $ and the variance of the sample mean is $\operatorname {var} ({\bar {x}})={\frac {\sigma ^{2}}{n}}.$ If the samples are not independent, but correlated, then special care has to be taken in order to avoid the problem of pseudoreplication. If the population is normally distributed, then the sample mean is normally distributed as follows: ${\bar {x}}\thicksim N\left(\mu ,{\frac {\sigma ^{2}}{n}}\right).$ If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and σ²/n < +∞. This is a consequence of the central limit theorem. Weighted samples In a weighted sample, each vector $\textstyle {\textbf {x}}_{i}$ (each set of single observations on each of the K random variables) is assigned a weight $\textstyle w_{i}\geq 0$. Without loss of generality, assume that the weights are normalized: $\sum _{i=1}^{N}w_{i}=1.$ (If they are not, divide the weights by their sum.) Then the weighted mean vector $\textstyle \mathbf {\bar {x}} $ is given by $\mathbf {\bar {x}} =\sum _{i=1}^{N}w_{i}\mathbf {x} _{i},$ and the elements $q_{jk}$ of the weighted covariance matrix $\textstyle \mathbf {Q} $ are[3] $q_{jk}={\frac {1}{1-\sum _{i=1}^{N}w_{i}^{2}}}\sum _{i=1}^{N}w_{i}\left(x_{ij}-{\bar {x}}_{j}\right)\left(x_{ik}-{\bar {x}}_{k}\right).$ If all weights are the same, $\textstyle w_{i}=1/N$, the weighted mean and covariance reduce to the (unbiased) sample mean and covariance defined above. Criticism The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers.
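A tiny numerical illustration of this sensitivity (Python with NumPy, made-up data), contrasting the sample mean with the sample median discussed next:

```python
import numpy as np

clean = np.array([2.1, 1.9, 2.0, 2.2, 1.8])
with_outlier = np.append(clean, 100.0)   # one gross outlier

print(np.mean(clean), np.mean(with_outlier))      # 2.0 versus roughly 18.3: the mean is dragged away
print(np.median(clean), np.median(with_outlier))  # 2.0 versus 2.05: the median barely moves
```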
As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location,[4] and interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean. See also • Estimation of covariance matrices • Scatter matrix • Unbiased estimation of standard deviation References 1. Richard Arnold Johnson; Dean W. Wichern (2007). Applied Multivariate Statistical Analysis. Pearson Prentice Hall. ISBN 978-0-13-187715-3. Retrieved 10 August 2012. 2. Underhill, L. G.; Bradfield, D. (1998). Introstat. Juta and Company Ltd. ISBN 0-7021-3838-X. p. 181. 3. Mark Galassi, Jim Davies, James Theiler, Brian Gough, Gerard Jungman, Michael Booth, and Fabrice Rossi. GNU Scientific Library – Reference Manual, Version 2.6, 2021. Section "Statistics: Weighted Samples". 4. The World Question Center 2006: The Sample Mean, Bart Kosko.
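The weighted mean vector and weighted covariance matrix from the Weighted samples section above can also be computed directly; the following is a minimal Python/NumPy sketch with made-up data and weights, not a substitute for a library routine.

```python
import numpy as np

def weighted_mean_cov(X, w):
    """Weighted sample mean vector and covariance matrix, following the formulas above.
    X has one observation per row; the weights are normalized to sum to one, as the text assumes."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    x_bar = w @ X                                            # weighted mean vector
    centered = X - x_bar
    Q = (centered.T * w) @ centered / (1.0 - np.sum(w ** 2)) # weighted covariance matrix
    return x_bar, Q

X = np.array([[1.0, 2.0], [4.0, 0.0], [1.0, 3.0], [2.0, 1.0]])
w = [0.4, 0.3, 0.2, 0.1]
x_bar, Q = weighted_mean_cov(X, w)
print(x_bar)
print(Q)

# With equal weights w_i = 1/N this reduces to the unweighted estimators defined earlier.
assert np.allclose(weighted_mean_cov(X, np.ones(4) / 4)[1], np.cov(X, rowvar=False))
```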
Sample ratio mismatch In the design of experiments, a sample ratio mismatch (SRM) is a statistically significant difference between the expected and actual ratios of the sizes of treatment and control groups in an experiment. Sample ratio mismatches, also known as unbalanced sampling,[1] often occur in online controlled experiments due to failures in randomization and instrumentation.[2] Sample ratio mismatches can be detected using a chi-squared test.[3] Using methods to detect SRM can help non-experts avoid making decisions using biased data.[4] If the sample size is large enough, even a small discrepancy between the observed and expected group sizes can invalidate the results of an experiment.[5][6] Example Suppose we run an A/B test in which we randomly assign 1000 users to equally sized treatment and control groups (a 50–50 split). The expected size of each group is 500. However, the actual sizes of the treatment and control groups are 600 and 400. Using Pearson's chi-squared goodness of fit test, we find a sample ratio mismatch with a p-value of 2.54 × 10⁻¹⁰. In other words, if the assignment of users were truly random, the probability that these treatment and control group sizes would occur by chance is 2.54 × 10⁻¹⁰.[7] References 1. Esteller-Cucala, Maria; Fernandez, Vicenc; Villuendas, Diego (2019-06-06). "Experimentation Pitfalls to Avoid in A/B Testing for Online Personalization". Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization. ACM. pp. 153–159. doi:10.1145/3314183.3323853. ISBN 978-1-4503-6711-0. S2CID 190007129. 2. Fabijan, Aleksander; Gupchup, Jayant; Gupta, Somit; Omhover, Jeff; Qin, Wen; Vermeer, Lukas; Dmitriev, Pavel (2019-07-25). "Diagnosing Sample Ratio Mismatch in Online Controlled Experiments". Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM. pp. 2156–2164. doi:10.1145/3292500.3330722. ISBN 978-1-4503-6201-6. S2CID 196199621. 3. Nie, Keyu; Zhang, Zezhong; Xu, Bingquan; Yuan, Tao (2022-10-17). "Ensure A/B Test Quality at Scale with Automated Randomization Validation and Sample Ratio Mismatch Detection". Proceedings of the 31st ACM International Conference on Information & Knowledge Management. ACM. pp. 3391–3399. arXiv:2208.07766. doi:10.1145/3511808.3557087. ISBN 978-1-4503-9236-5. S2CID 251594683. 4. Vermeer, Lukas; Anderson, Kevin; Acebal, Mauricio (2022-06-13). "Automated Sample Ratio Mismatch (SRM) detection and analysis". The International Conference on Evaluation and Assessment in Software Engineering 2022. ACM. pp. 268–269. doi:10.1145/3530019.3534982. ISBN 978-1-4503-9613-4. S2CID 249579055. 5. Fabijan, Aleksander; Gupchup, Jayant; Gupta, Somit; Omhover, Jeff; Qin, Wen; Vermeer, Lukas; Dmitriev, Pavel. "Diagnosing Sample Ratio Mismatch in Online Controlled Experiments: A Taxonomy and Rules of Thumb for Practitioners" (PDF). Proceedings of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '19). doi:10.1145/3292500.3330722. S2CID 196199621. 6. Kohavi, Ron; Thomke, Stefan (2017-09-01). "The Surprising Power of Online Experiments". Harvard Business Review. ISSN 0017-8012. Retrieved 2023-05-19. 7. Vermeer, Lukas. "Frequently Asked Questions". SRM Checker. Retrieved 2022-09-15.
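The worked example above can be reproduced with an off-the-shelf chi-squared goodness-of-fit test. A minimal Python/SciPy sketch; the 600/400 split and the intended 50–50 design are taken from the example:

```python
from scipy.stats import chisquare

observed = [600, 400]   # actual treatment and control group sizes
expected = [500, 500]   # expected sizes under the intended 50-50 split of 1000 users

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(stat)     # 40.0
print(p_value)  # about 2.54e-10, far below any usual threshold, so an SRM is flagged
```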
Statistic A statistic (singular) or sample statistic is any quantity computed from values in a sample which is considered for a statistical purpose. Statistical purposes include estimating a population parameter, describing a sample, or evaluating a hypothesis. The average (or mean) of sample values is a statistic. The term statistic is used both for the function and for the value of the function on a given sample. When a statistic is being used for a specific purpose, it may be referred to by a name indicating its purpose. When a statistic is used for estimating a population parameter, the statistic is called an estimator. A population parameter is any characteristic of a population under study, but when it is not feasible to directly measure the value of a population parameter, statistical methods are used to infer the likely value of the parameter on the basis of a statistic computed from a sample taken from the population. For example, the sample mean is an unbiased estimator of the population mean. This means that the expected value of the sample mean equals the true population mean.[1] A descriptive statistic is used to summarize the sample data. A test statistic is used in statistical hypothesis testing. A single statistic can be used for multiple purposes – for example, the sample mean can be used to estimate the population mean, to describe a sample data set, or to test a hypothesis. Examples Some examples of statistics are: • "In a recent survey of Americans, 52% of Republicans say global warming is happening." In this case, "52%" is a statistic, namely the percentage of Republicans in the survey sample who believe in global warming. The population is the set of all Republicans in the United States, and the population parameter being estimated is the percentage of all Republicans in the United States, not just those surveyed, who believe in global warming. • "The manager of a large hotel located near Disney World indicated that 20 selected guests had a mean length of stay equal to 5.6 days." In this example, "5.6 days" is a statistic, namely the mean length of stay for our sample of 20 hotel guests. The population is the set of all guests of this hotel, and the population parameter being estimated is the mean length of stay for all guests.[2] Whether the estimator is unbiased in this case depends upon the sample selection process; see the inspection paradox. There are a variety of functions that are used to calculate statistics. Some include: • Sample mean, sample median, and sample mode • Sample variance and sample standard deviation • Sample quantiles besides the median, e.g., quartiles and percentiles • Test statistics, such as the t-statistic, chi-squared statistic, and F-statistic • Order statistics, including sample maximum and minimum • Sample moments and functions thereof, including kurtosis and skewness • Various functionals of the empirical distribution function Properties Observability Statisticians often contemplate a parameterized family of probability distributions, any member of which could be the distribution of some measurable aspect of each member of a population, from which a sample is drawn randomly. For example, the parameter may be the average height of 25-year-old men in North America. The heights of the members of a sample of 100 such men are measured; the average of those 100 numbers is a statistic.
The average of the heights of all members of the population is not a statistic unless that has somehow also been ascertained (such as by measuring every member of the population). The average height that would be calculated using all of the individual heights of all 25-year-old North American men is a parameter, and not a statistic. Statistical properties Important potential properties of statistics include completeness, consistency, sufficiency, unbiasedness, minimum mean square error, low variance, robustness, and computational convenience. Information of a statistic Information of a statistic on model parameters can be defined in several ways. The most common is the Fisher information, which is defined on the statistic model induced by the statistic. Kullback information measure can also be used. See also Look up statistic in Wiktionary, the free dictionary. • Statistics • Statistical theory • Descriptive statistics • Statistical hypothesis testing • Summary statistic • Well-behaved statistic References 1. Kokoska 2015, p. 296-308. 2. Kokoska 2015, p. 296-297. • Kokoska, Stephen (2015). Introductory Statistics: A Problem-Solving Approach (2nd ed.). New York: W. H. Freeman and Company. ISBN 978-1-4641-1169-3. • Parker, Sybil P (editor in chief). "Statistic". McGraw-Hill Dictionary of Scientific and Technical Terms. Fifth Edition. McGraw-Hill, Inc. 1994. ISBN 0-07-042333-4. Page 1912. • DeGroot and Schervish. "Definition of a Statistic". Probability and Statistics. International Edition. Third Edition. Addison Wesley. 2002. ISBN 0-321-20473-5. Pages 370 to 371.
Sampling design In the theory of finite population sampling, a sampling design specifies for every possible sample its probability of being drawn. Mathematical formulation Mathematically, a sampling design is denoted by the function $P(S)$ which gives the probability of drawing a sample $S.$ An example of a sampling design During Bernoulli sampling, $P(S)$ is given by $P(S)=q^{N_{\text{sample}}(S)}\times (1-q)^{(N_{\text{pop}}-N_{\text{sample}}(S))}$ where $q$ is the probability of each element being included in the sample, $N_{\text{sample}}(S)$ is the total number of elements in the sample $S$, and $N_{\text{pop}}$ is the total number of elements in the population (before sampling commenced). Sample design for managerial research In business research, companies must often generate samples of customers, clients, employees, and so forth to gather their opinions. Sample design is also a critical component of marketing research and employee research for many organizations. During sample design, firms must answer questions such as: • What is the relevant population, sampling frame, and sampling unit? • What is the appropriate margin of error that should be achieved? • How should sampling error and non-sampling error be assessed and balanced? These issues require very careful consideration, and good commentaries are provided in several sources.[1][2] References 1. Salant, Priscilla; Dillman, Don A. (1994). How to Conduct Your Own Survey. Wiley. 2. Hansen, Morris H.; Hurwitz, William N.; Madow, William G. (1953). Sample Survey Methods and Theory. Further reading • Särndal, Swensson, and Wretman (1992), Model Assisted Survey Sampling, Springer-Verlag, ISBN 0-387-40620-4
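As a small illustration of the Bernoulli design above, the following Python sketch draws a Bernoulli sample and evaluates P(S) for the realized sample; the population, the inclusion probability q, and the seed are arbitrary choices for the example.

```python
import random

def bernoulli_sample(population, q, seed=None):
    """Include each element of the population independently with probability q."""
    rng = random.Random(seed)
    return [x for x in population if rng.random() < q]

def design_probability(n_sample, n_pop, q):
    """P(S) = q^n_sample * (1 - q)^(n_pop - n_sample) for a Bernoulli sampling design."""
    return q ** n_sample * (1 - q) ** (n_pop - n_sample)

population = list(range(100))
sample = bernoulli_sample(population, q=0.1, seed=42)
print(len(sample), design_probability(len(sample), len(population), q=0.1))
```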
Sampling fraction In sampling theory, the sampling fraction is the ratio of sample size to population size or, in the context of stratified sampling, the ratio of the sample size to the size of the stratum.[1] The formula for the sampling fraction is $f={\frac {n}{N}},$ where n is the sample size and N is the population size. A sampling fraction value close to 1 will occur if the sample size is relatively close to the population size. When sampling from a finite population without replacement, this may cause dependence between individual samples. To correct for this dependence when calculating the sample variance, a finite population correction (or finite population multiplier) of (N-n)/(N-1) may be used. If the sampling fraction is small, less than 0.05, then the sample variance is not appreciably affected by dependence, and the finite population correction may be ignored. [2][3] References 1. Dodge, Yadolah (2003). The Oxford Dictionary of Statistical Terms. Oxford: Oxford University Press. ISBN 0-19-920613-9. 2. Bain, Lee J.; Engelhardt, Max (1992). Introduction to probability and mathematical statistics (2nd ed.). Boston: PWS-KENT Pub. ISBN 0534929303. OCLC 24142279. 3. Scheaffer, Richard L.; Mendenhall, William; Ott, Lyman (2006). Elementary survey sampling (6th ed.). Southbank, Vic.: Thomson Brooks/Cole. ISBN 0495018627. OCLC 58425200.
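A short numerical sketch of the sampling fraction and the finite population correction described above; the sample and population sizes are made up for the example.

```python
def sampling_fraction(n, N):
    """f = n / N."""
    return n / N

def finite_population_correction(n, N):
    """(N - n) / (N - 1), used when sampling without replacement from a finite population."""
    return (N - n) / (N - 1)

n, N = 50, 400
print(sampling_fraction(n, N))             # 0.125, which exceeds 0.05, so the correction is not negligible
print(finite_population_correction(n, N))  # about 0.877
```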
Sampling frame In statistics, a sampling frame is the source material or device from which a sample is drawn.[1] It is a list of all those within a population who can be sampled, and may include individuals, households or institutions.[1] Importance of the sampling frame is stressed by Jessen[2] and Salant and Dillman.[3] In many practical situations the frame is a matter of choice to the survey planner, and sometimes a critical one. [...] Some very worthwhile investigations are not undertaken at all because of the lack of an apparent frame; others, because of faulty frames, have ended in a disaster or in cloud of doubt. — Raymond James Jessen Obtaining and organizing a sampling frame In the most straightforward cases, such as when dealing with a batch of material from a production run, or using a census, it is possible to identify and measure every single item in the population and to include any one of them in our sample; this is known as direct element sampling.[1] However, in many other cases this is not possible; either because it is cost-prohibitive (reaching every citizen of a country) or impossible (reaching all humans alive). Having established the frame, there are a number of ways for organizing it to improve efficiency and effectiveness. It's at this stage that the researcher should decide whether the sample is in fact to be the whole population and would therefore be a census. This list should also facilitate access to the selected sampling units. A frame may also provide additional 'auxiliary information' about its elements; when this information is related to variables or groups of interest, it may be used to improve survey design. While not necessary for simple sampling, a sampling frame used for more advanced sample techniques, such as stratified sampling, may contain additional information (such as demographic information).[1] For instance, an electoral register might include name and sex; this information can be used to ensure that a sample taken from that frame covers all demographic categories of interest. (Sometimes the auxiliary information is less explicit; for instance, a telephone number may provide some information about location.) Sampling frame qualities An ideal sampling frame will have the following qualities:[1] • all units have a logical, numerical identifier • all units can be found – their contact information, map location or other relevant information is present • the frame is organized in a logical, systematic fashion • the frame has additional information about the units that allows the use of more advanced sampling frames • every element of the population of interest is present in the frame • every element of the population is present only once in the frame • no elements from outside the population of interest are present in the frame • the data is 'up-to-date'[4] Types of sampling frames The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register or a telephone directory. Other sampling frames can include employment records, school class lists, patient files in a hospital, organizations listed in a thematic database, and so on.[1][5] On a more practical level, sampling frames have the form of computer files.[1] Not all frames explicitly list population elements; some list only 'clusters'. 
For example, a street map can be used as a frame for a door-to-door survey; although it doesn't show individual houses, we can select streets from the map and then select houses on those streets. This offers some advantages: such a frame would include people who have recently moved and are not yet on the list frames discussed above, and it may be easier to use because it doesn't require storing data for every unit in the population, only for a smaller number of clusters. Sampling frame problems The sampling frame must be representative of the population, and this is a question outside the scope of statistical theory, demanding the judgment of experts in the particular subject matter being studied. All the above frames omit some people who will vote at the next election and contain some people who will not; some frames will contain multiple records for the same person. People not in the frame have no prospect of being sampled. Because a cluster-based frame contains less information about the population, it may place constraints on the sample design, possibly requiring the use of less efficient sampling methods and/or making it harder to interpret the resulting data. Statistical theory tells us about the uncertainties in extrapolating from a sample to the frame. It should be expected that sampling frames will always contain some mistakes.[5] In some cases, this may lead to sampling bias.[1] Such bias should be minimized and identified, although avoiding it completely in the real world is nearly impossible.[1] One should also not assume that sources which claim to be unbiased and representative are such.[1] In defining the frame, practical, economic, ethical, and technical issues need to be addressed. The need to obtain timely results may prevent extending the frame far into the future. The difficulties can be extreme when the population and frame are disjoint. This is a particular problem in forecasting where inferences about the future are made from historical data. In fact, in 1703, when Jacob Bernoulli proposed to Gottfried Leibniz the possibility of using historical mortality data to predict the probability of early death of a living man, Gottfried Leibniz recognized the problem in replying:[6] Nature has established patterns originating in the return of events but only for the most part. New illnesses flood the human race, so that no matter how many experiments you have done on corpses, you have not thereby imposed a limit on the nature of events so that in the future they could not vary. — Gottfried Leibniz Leslie Kish posited four basic problems of sampling frames:[7] 1. Missing elements: Some members of the population are not included in the frame. 2. Foreign elements: Non-members of the population are included in the frame. 3. Duplicate entries: A member of the population is surveyed more than once. 4. Groups or clusters: The frame lists clusters instead of individuals. Problems like those listed can be identified by the use of pre-survey tests and pilot studies.
Retrieved December 11, 2012. 5. Roger Sapsford; Victor Jupp (29 March 2006). Data collection and analysis. SAGE. pp. 28–. ISBN 978-0-7619-4363-1. Retrieved 2 January 2011. 6. Peter L. Bernstein (1998). Against the gods: the remarkable story of risk. John Wiley and Sons. pp. 118–. ISBN 978-0-471-29563-1. Retrieved 2 January 2011. 7. Leslie Kish (1995). Survey sampling. Wiley. ISBN 978-0-471-10949-5. Retrieved 11 January 2011.
Dirac comb In mathematics, a Dirac comb (also known as sha function, impulse train or sampling function) is a periodic function with the formula $\operatorname {\text{Ш}} _{\ T}(t)\ :=\sum _{k=-\infty }^{\infty }\delta (t-kT)$ for some given period $T$.[1] Here t is a real variable and the sum extends over all integers k. The Dirac delta function $\delta $ and the Dirac comb are tempered distributions.[2][3] The graph of the function resembles a comb (with the $\delta $s as the comb's teeth), hence its name and the use of the comb-like Cyrillic letter sha (Ш) to denote the function. The symbol $\operatorname {\text{Ш}} \,\,(t)$, where the period is omitted, represents a Dirac comb of unit period. This implies[1] $\operatorname {\text{Ш}} _{\ T}(t)\ ={\frac {1}{T}}\operatorname {\text{Ш}} \ \!\!\!\left({\frac {t}{T}}\right).$ Because the Dirac comb function is periodic, it can be represented as a Fourier series based on the Dirichlet kernel:[1] $\operatorname {\text{Ш}} _{\ T}(t)={\frac {1}{T}}\sum _{n=-\infty }^{\infty }e^{i2\pi n{\frac {t}{T}}}.$ The Dirac comb function allows one to represent both continuous and discrete phenomena, such as sampling and aliasing, in a single framework of continuous Fourier analysis on tempered distributions, without any reference to Fourier series. The Fourier transform of a Dirac comb is another Dirac comb. Owing to the Convolution Theorem on tempered distributions which turns out to be the Poisson summation formula, in signal processing, the Dirac comb allows modelling sampling by multiplication with it, but it also allows modelling periodization by convolution with it.[4] Dirac-comb identity The Dirac comb can be constructed in two ways, either by using the comb operator (performing sampling) applied to the function that is constantly $1$, or, alternatively, by using the rep operator (performing periodization) applied to the Dirac delta $\delta $. Formally, this yields (Woodward 1953; Brandwood 2003) $\operatorname {comb} _{T}\{1\}=\operatorname {\text{Ш}} _{T}=\operatorname {rep} _{T}\{\delta \},$ where $\operatorname {comb} _{T}\{f(t)\}\triangleq \sum _{k=-\infty }^{\infty }\,f(kT)\,\delta (t-kT)$ and $\operatorname {rep} _{T}\{g(t)\}\triangleq \sum _{k=-\infty }^{\infty }\,g(t-kT).$ In signal processing, this property on one hand allows sampling a function $f(t)$ by multiplication with $\operatorname {\text{Ш}} _{\ T}$, and on the other hand it also allows the periodization of $f(t)$ by convolution with $\operatorname {\text{Ш}} _{T}$ (Bracewell 1986). The Dirac comb identity is a particular case of the Convolution Theorem for tempered distributions. Scaling The scaling property of the Dirac comb follows from the properties of the Dirac delta function. Since $\delta (t)={\frac {1}{a}}\ \delta \!\left({\frac {t}{a}}\right)$[5] for positive real numbers $a$, it follows that: $\operatorname {\text{Ш}} _{\ T}\left(t\right)={\frac {1}{T}}\operatorname {\text{Ш}} \,\!\left({\frac {t}{T}}\right),$ $\operatorname {\text{Ш}} _{\ aT}\left(t\right)={\frac {1}{aT}}\operatorname {\text{Ш}} \,\!\left({\frac {t}{aT}}\right)={\frac {1}{a}}\operatorname {\text{Ш}} _{\ T}\!\!\left({\frac {t}{a}}\right).$ Note that requiring positive scaling numbers $a$ instead of negative ones is not a restriction because the negative sign would only reverse the order of the summation within $\operatorname {\text{Ш}} _{\ T}$, which does not affect the result.
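Because the Dirac comb is a distribution rather than an ordinary function, any numerical illustration has to work with impulse positions and weights and with truncated sums. With that caveat, the comb (sampling) and rep (periodization) operators appearing in the identity above can be sketched in Python with NumPy; the test functions, the period, and the truncation range below are arbitrary choices for the example.

```python
import numpy as np

def comb_T(f, T, k_max):
    """comb_T{f}: impulse positions kT and weights f(kT), truncated to |k| <= k_max."""
    k = np.arange(-k_max, k_max + 1)
    return k * T, f(k * T)

def rep_T(g, T, t, k_max=50):
    """rep_T{g}(t) = sum_k g(t - kT), truncated to |k| <= k_max."""
    return sum(g(t - k * T) for k in range(-k_max, k_max + 1))

f = lambda t: np.exp(-t ** 2)   # function to be sampled
g = lambda t: np.exp(-t ** 2)   # pulse to be periodized

positions, weights = comb_T(f, T=0.5, k_max=4)
t = np.linspace(-2.0, 2.0, 9)
print(positions)
print(weights)                   # the sampled values f(kT)
print(rep_T(g, T=1.0, t=t))      # an approximately 1-periodic function of t
```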
Fourier series See also: Dirichlet kernel It is clear that $\operatorname {\text{Ш}} _{\ T}(t)$ is periodic with period $T$. That is, $\operatorname {\text{Ш}} _{\ T}(t+T)=\operatorname {\text{Ш}} _{\ T}(t)$ for all t. The complex Fourier series for such a periodic function is $\operatorname {\text{Ш}} _{\ T}(t)=\sum _{n=-\infty }^{+\infty }c_{n}e^{i2\pi n{\frac {t}{T}}},$ where the Fourier coefficients are (symbolically) ${\begin{aligned}c_{n}&={\frac {1}{T}}\int _{t_{0}}^{t_{0}+T}\operatorname {\text{Ш}} _{\ T}(t)e^{-i2\pi n{\frac {t}{T}}}\,dt\quad (-\infty <t_{0}<+\infty )\\&={\frac {1}{T}}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}\operatorname {\text{Ш}} _{\ T}(t)e^{-i2\pi n{\frac {t}{T}}}\,dt\\&={\frac {1}{T}}\int _{-{\frac {T}{2}}}^{\frac {T}{2}}\delta (t)e^{-i2\pi n{\frac {t}{T}}}\,dt\\&={\frac {1}{T}}e^{-i2\pi n{\frac {0}{T}}}\\&={\frac {1}{T}}.\end{aligned}}$ All Fourier coefficients are 1/T resulting in $\operatorname {\text{Ш}} _{\ T}(t)={\frac {1}{T}}\sum _{n=-\infty }^{\infty }\!\!e^{i2\pi n{\frac {t}{T}}}.$ When the period is one unit, this simplifies to $\operatorname {\text{Ш}} \ \!(x)=\sum _{n=-\infty }^{\infty }\!\!e^{i2\pi nx}.$ Remark: Most rigorously, Riemann or Lebesgue integration over any products including a Dirac delta function yields zero. For this reason, the integration above (Fourier series coefficients determination) must be understood "in the generalized functions sense". It means that, instead of using the characteristic function of an interval applied to the Dirac comb, one uses a so-called Lighthill unitary function as cutout function, see Lighthill 1958, p.62, Theorem 22 for details. Fourier transform The Fourier transform of a Dirac comb is also a Dirac comb. For the Fourier transform ${\mathcal {F}}$ expressed in frequency domain (Hz) the Dirac comb $\operatorname {\text{Ш}} _{T}$ of period $T$ transforms into a rescaled Dirac comb of period $1/T,$ i.e. for ${\mathcal {F}}\left[f\right](\xi )=\int _{-\infty }^{\infty }dtf(t)e^{-2\pi i\xi t},$ ${\mathcal {F}}\left[\operatorname {\text{Ш}} _{T}\right](\xi )={\frac {1}{T}}\sum _{k=-\infty }^{\infty }\delta (\xi -k{\frac {1}{T}})={\frac {1}{T}}\operatorname {\text{Ш}} _{\ {\frac {1}{T}}}(\xi )~$ is proportional to another Dirac comb, but with period $1/T$ in the ordinary frequency domain (Hz). The Dirac comb $\operatorname {\text{Ш}} $ of unit period $T=1$ is thus an eigenfunction of ${\mathcal {F}}$ to the eigenvalue $1.$ This result can be established (Bracewell 1986) by considering the respective Fourier transforms $S_{\tau }(\xi )={\mathcal {F}}[s_{\tau }](\xi )$ of the family of functions $s_{\tau }(x)$ defined by $s_{\tau }(x)=\tau ^{-1}e^{-\pi \tau ^{2}x^{2}}\sum _{n=-\infty }^{\infty }e^{-\pi \tau ^{-2}(x-n)^{2}}.$ Since $s_{\tau }(x)$ is a convergent series of Gaussian functions, and Gaussians transform into Gaussians, each of their respective Fourier transforms $S_{\tau }(\xi )$ also results in a series of Gaussians, and explicit calculation establishes that $S_{\tau }(\xi )=\tau ^{-1}\sum _{m=-\infty }^{\infty }e^{-\pi \tau ^{2}m^{2}}e^{-\pi \tau ^{-2}(\xi -m)^{2}}.$ The functions $s_{\tau }(x)$ and $S_{\tau }(\xi )$ thus each resemble a periodic function consisting of a series of equidistant Gaussian spikes $\tau ^{-1}e^{-\pi \tau ^{-2}(x-n)^{2}}$ and $\tau ^{-1}e^{-\pi \tau ^{-2}(\xi -m)^{2}}$ whose respective "heights" (pre-factors) are determined by slowly decreasing Gaussian envelope functions which drop to zero at infinity.
Note that in the limit $\tau \rightarrow 0$ each Gaussian spike becomes an infinitely sharp Dirac impulse centered respectively at $x=n$ and $\xi =m$ for each respective $n$ and $m$, and hence also all pre-factors $e^{-\pi \tau ^{2}m^{2}}$ in $S_{\tau }(\xi )$ eventually become indistinguishable from $e^{-\pi \tau ^{2}\xi ^{2}}$. Therefore the functions $s_{\tau }(x)$ and their respective Fourier transforms $S_{\tau }(\xi )$ converge to the same function and this limit function is a series of infinite equidistant Gaussian spikes, each spike being multiplied by the same pre-factor of one, i.e. the Dirac comb for unit period: $\lim _{\tau \rightarrow 0}s_{\tau }(x)=\operatorname {\text{Ш}} ({x}),$ and $\lim _{\tau \rightarrow 0}S_{\tau }(\xi )=\operatorname {\text{Ш}} ({\xi }).$ Since $S_{\tau }={\mathcal {F}}[s_{\tau }]$, we obtain in this limit the result to be demonstrated: ${\mathcal {F}}[\operatorname {\text{Ш}} ]=\operatorname {\text{Ш}} .$ The corresponding result for period $T$ can be found by exploiting the scaling property of the Fourier transform, ${\mathcal {F}}[\operatorname {\text{Ш}} _{T}]={\frac {1}{T}}\operatorname {\text{Ш}} _{\frac {1}{T}}.$ Another manner to establish that the Dirac comb transforms into another Dirac comb starts by examining continuous Fourier transforms of periodic functions in general, and then specialises to the case of the Dirac comb. In order to also show that the specific rule depends on the convention for the Fourier transform, this will be shown using angular frequency with $\omega =2\pi \xi $: for any periodic function $f(t)=f(t+T)$ its Fourier transform ${\mathcal {F}}\left[f\right](\omega )=F(\omega )=\int _{-\infty }^{\infty }dtf(t)e^{-i\omega t}$ obeys: $F(\omega )(1-e^{i\omega T})=0$ because Fourier transforming $f(t)$ and $f(t+T)$ leads to $F(\omega )$ and $F(\omega )e^{i\omega T}.$ This equation implies that $F(\omega )=0$ nearly everywhere with the only possible exceptions lying at $\omega =k\omega _{0},$ with $\omega _{0}=2\pi /T$ and $k\in \mathbb {Z} .$ When evaluating the Fourier transform at $F(k\omega _{0})$ the corresponding Fourier series expression times a corresponding delta function results. For the special case of the Fourier transform of the Dirac comb, the Fourier series integral over a single period covers only the Dirac function at the origin and thus gives $1/T$ for each $k.$ This can be summarised by interpreting the Dirac comb as a limit of the Dirichlet kernel such that, at the positions $\omega =k\omega _{0},$ all exponentials in the sum $\sum \nolimits _{m=-\infty }^{\infty }e^{\pm i\omega mT}$ point into the same direction and add constructively. In other words, the continuous Fourier transform of periodic functions leads to $F(\omega )=2\pi \sum _{k=-\infty }^{\infty }c_{k}\delta (\omega -k\omega _{0})$ with $\omega _{0}=2\pi /T,$ and $c_{k}={\frac {1}{T}}\int _{-T/2}^{+T/2}dtf(t)e^{-i2\pi kt/T}.$ The Fourier series coefficients $c_{k}=1/T$ for all $k$ when $f\rightarrow \operatorname {\text{Ш}} _{T}$, i.e. ${\mathcal {F}}\left[\operatorname {\text{Ш}} _{T}\right](\omega )={\frac {2\pi }{T}}\sum _{k=-\infty }^{\infty }\delta (\omega -k{\frac {2\pi }{T}})$ is another Dirac comb, but with period $2\pi /T$ in angular frequency domain (radian/s). As mentioned, the specific rule depends on the convention for the used Fourier transform.
Indeed, when using the scaling property of the Dirac delta function, the above may be re-expressed in ordinary frequency domain (Hz) and one obtains again: $\operatorname {\text{Ш}} _{\ T}(t){\stackrel {\mathcal {F}}{\longleftrightarrow }}{\frac {1}{T}}\operatorname {\text{Ш}} _{\ {\frac {1}{T}}}(\xi )=\sum _{n=-\infty }^{\infty }\!\!e^{-i2\pi \xi nT},$ such that the unit period Dirac comb transforms to itself: $\operatorname {\text{Ш}} \ \!(t){\stackrel {\mathcal {F}}{\longleftrightarrow }}\operatorname {\text{Ш}} \ \!(\xi ).$ Finally, the Dirac comb is also an eigenfunction of the unitary continuous Fourier transform in angular frequency space to the eigenvalue 1 when $T={\sqrt {2\pi }}$ because for the unitary Fourier transform ${\mathcal {F}}\left[f\right](\omega )=F(\omega )={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }dtf(t)e^{-i\omega t},$ the above may be re-expressed as $\operatorname {\text{Ш}} _{\ T}(t){\stackrel {\mathcal {F}}{\longleftrightarrow }}{\frac {\sqrt {2\pi }}{T}}\operatorname {\text{Ш}} _{\ {\frac {2\pi }{T}}}(\omega )={\frac {1}{\sqrt {2\pi }}}\sum _{n=-\infty }^{\infty }\!\!e^{-i\omega nT}.$ Sampling and aliasing Multiplying any function by a Dirac comb transforms it into a train of impulses with integrals equal to the value of the function at the nodes of the comb. This operation is frequently used to represent sampling. $(\operatorname {\text{Ш}} _{\ T}x)(t)=\sum _{k=-\infty }^{\infty }\!\!x(t)\delta (t-kT)=\sum _{k=-\infty }^{\infty }\!\!x(kT)\delta (t-kT).$ Due to the self-transforming property of the Dirac comb and the convolution theorem, this corresponds to convolution with the Dirac comb in the frequency domain. $\operatorname {\text{Ш}} _{\ T}x\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ {\frac {1}{T}}\operatorname {\text{Ш}} _{\frac {1}{T}}*X$ Since convolution with a delta function $\delta (t-kT)$ is equivalent to shifting the function by $kT$, convolution with the Dirac comb corresponds to replication or periodic summation: $(\operatorname {\text{Ш}} _{\ {\frac {1}{T}}}\!*X)(f)=\!\sum _{k=-\infty }^{\infty }\!\!X\!\left(f-{\frac {k}{T}}\right)$ This leads to a natural formulation of the Nyquist–Shannon sampling theorem. If the spectrum of the function $x$ contains no frequencies higher than B (i.e., its spectrum is nonzero only in the interval $(-B,B)$) then samples of the original function at intervals ${\tfrac {1}{2B}}$ are sufficient to reconstruct the original signal. It suffices to multiply the spectrum of the sampled function by a suitable rectangle function, which is equivalent to applying a brick-wall lowpass filter. $\operatorname {\text{Ш}} _{\ \!{\frac {1}{2B}}}x\ \ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ \ 2B\,\operatorname {\text{Ш}} _{\ 2B}*X$ ${\frac {1}{2B}}\Pi \left({\frac {f}{2B}}\right)(2B\,\operatorname {\text{Ш}} _{\ 2B}*X)=X$ In time domain, this "multiplication with the rect function" is equivalent to "convolution with the sinc function" (Woodward 1953, p.33-34). Hence, it restores the original function from its samples. This is known as the Whittaker–Shannon interpolation formula. Remark: Most rigorously, multiplication of the rect function with a generalized function, such as the Dirac comb, fails. This is due to undetermined outcomes of the multiplication product at the interval boundaries. As a workaround, one uses a Lighthill unitary function instead of the rect function. 
It is smooth at the interval boundaries, hence it yields determined multiplication products everywhere, see Lighthill 1958, p.62, Theorem 22 for details. Use in directional statistics In directional statistics, the Dirac comb of period $2\pi $ is equivalent to a wrapped Dirac delta function and is the analog of the Dirac delta function in linear statistics. In linear statistics, the random variable $(x)$ is usually distributed over the real-number line, or some subset thereof, and the probability density of $x$ is a function whose domain is the set of real numbers, and whose integral from $-\infty $ to $+\infty $ is unity. In directional statistics, the random variable $(\theta )$ is distributed over the unit circle, and the probability density of $\theta $ is a function whose domain is some interval of the real numbers of length $2\pi $ and whose integral over that interval is unity. Just as the integral of the product of a Dirac delta function with an arbitrary function over the real-number line yields the value of that function at zero, so the integral of the product of a Dirac comb of period $2\pi $ with an arbitrary function of period $2\pi $ over the unit circle yields the value of that function at zero. See also • Comb filter • Frequency comb • Poisson summation formula References 1. "The Dirac Comb and its Fourier Transform - DSPIllustrations.com". dspillustrations.com. Retrieved 2022-06-28. 2. Schwartz, L. (1951), Théorie des distributions, vol. Tome I, Tome II, Hermann, Paris 3. Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0-8493-8273-4 4. Bracewell, R. N. (1986), The Fourier Transform and Its Applications (revised ed.), McGraw-Hill; 1st ed. 1965, 2nd ed. 1978. 5. Rahman, M. (2011), Applications of Fourier Transforms to Generalized Functions, WIT Press Southampton, Boston, ISBN 978-1-84564-564-9. Further reading • Brandwood, D. (2003), Fourier Transforms in Radar and Signal Processing, Artech House, Boston, London. • Córdoba, A (1989), "Dirac combs", Letters in Mathematical Physics, 17 (3): 191–196, Bibcode:1989LMaPh..17..191C, doi:10.1007/BF00401584, S2CID 189883287 • Woodward, P. M. (1953), Probability and Information Theory, with Applications to Radar, Pergamon Press, Oxford, London, Edinburgh, New York, Paris, Frankfurt. • Lighthill, M.J. (1958), An Introduction to Fourier Analysis and Generalized Functions, Cambridge University Press, Cambridge, U.K.. 
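As a numerical complement to the sampling-and-reconstruction discussion earlier in this article, the sketch below samples a band-limited test signal at the Nyquist rate and rebuilds it with the Whittaker–Shannon interpolation formula. It is an illustration only: the test signal, the bandwidth, and the truncation of the interpolation sum are arbitrary choices, not taken from the cited references.

```python
import numpy as np

B = 4.0                     # assumed bandwidth (Hz) of the toy signal below
T = 1.0 / (2.0 * B)         # Nyquist sampling interval 1/(2B)

def x(t):
    # A band-limited toy signal: two sinusoids with frequencies below B = 4 Hz.
    return np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(-200, 201)    # finitely many samples approximate the ideal (infinite) comb
samples = x(n * T)

def reconstruct(t):
    # Whittaker–Shannon interpolation: sinc kernels weighted by the samples.
    return np.sum(samples * np.sinc((t - n * T) / T))

t_test = np.linspace(-5, 5, 11)
error = max(abs(reconstruct(t) - x(t)) for t in t_test)
print(error)   # small, but not exactly zero, since the interpolation sum is truncated
```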
Sampling in order In statistics, some Monte Carlo methods require independent observations in a sample to be drawn from a one-dimensional distribution in sorted order. In other words, all n order statistics are needed from the n observations in a sample. The naive method performs a sort and takes O(n log n) time. There are also O(n) algorithms which are better suited for large n. The special case of drawing n sorted observations from the uniform distribution on [0,1] is equivalent to drawing from the uniform distribution on an n-dimensional simplex; this task is a part of sequential importance resampling. Further reading • Bentley, Jon Louis; Saxe, James B. (1979), "Generating sorted lists of random numbers", Computer Science Department, Paper 2450, retrieved January 4, 2014 • Gerontidis, I.; Smith, R. L. (1982), "Monte Carlo Generation of Order Statistics from General Distributions", Journal of the Royal Statistical Society. Series C (Applied Statistics), 31 (3): 238–243, JSTOR 2347997 • Lurie, D.; Hartley, H. O. (1972), "Machine-Generation of Order Statistics for Monte Carlo Computations", The American Statistician, 26 (1): 26–27, doi:10.1080/00031305.1972.10477319 • Ripley, Brian D. (1987), Stochastic Simulation, Wiley, pp. 96–98, ISBN 0-471-81884-4
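One standard O(n) approach to the problem described above relies on the fact that normalized partial sums of independent exponential variables have the same joint distribution as the order statistics of a uniform sample; sorted draws from other distributions then follow by applying the (monotone) quantile function. The sketch below is illustrative, with a hypothetical function name, and is not taken verbatim from the works listed above.

```python
import numpy as np

def sorted_uniforms(n, rng=np.random.default_rng()):
    """Draw n Uniform(0,1) observations already in ascending order, in O(n) time.

    Uses the exponential-spacings representation: if E_1, ..., E_{n+1} are i.i.d.
    Exp(1) with partial sums S_k, then S_k / S_{n+1}, for k = 1..n, has the same
    joint distribution as the order statistics of n independent Uniform(0,1) draws.
    """
    e = rng.exponential(scale=1.0, size=n + 1)
    s = np.cumsum(e)
    return s[:-1] / s[-1]

u = sorted_uniforms(10)
print(u)                          # already ascending, no sort needed
print(np.all(np.diff(u) >= 0))    # True
```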
Sampling probability In statistics, in the theory relating to sampling from finite populations, the sampling probability (also known as inclusion probability) of an element or member of the population, is its probability of becoming part of the sample during the drawing of a single sample.[1] For example, in simple random sampling the probability of a particular unit $i$ to be selected into the sample is $p_{i}={\frac {\binom {N-1}{n-1}}{\binom {N}{n}}}={\frac {n}{N}}$ where $n$ is the sample size and $N$ is the population size.[2] Each element of the population may have a different probability of being included in the sample. The inclusion probability is also termed the "first-order inclusion probability" to distinguish it from the "second-order inclusion probability", i.e. the probability of including a pair of elements. Generally, the first-order inclusion probability of the ith element of the population is denoted by the symbol πi and the second-order inclusion probability that a pair consisting of the ith and jth element of the population that is sampled is included in a sample during the drawing of a single sample is denoted by πij.[3] See also • Sampling bias • Sampling design • Sampling frame References 1. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-850994-4. 2. Baddeley, Adrian; Vedel Jensen, Eva B. (2004). Stereology for Statisticians. p. 334. 3. Sarndal; Swenson; Wretman (1992). Model Assisted Survey Sampling. Springer-Verlag. ISBN 0-387-40620-4.
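For simple random sampling without replacement, the first-order inclusion probability n/N quoted above can be checked by enumerating all equally likely samples, and the standard second-order value is πij = n(n − 1)/(N(N − 1)). The following short sketch is illustrative only; the population and sample sizes are arbitrary.

```python
from itertools import combinations
from fractions import Fraction

N, n = 6, 2   # small, arbitrary population and sample sizes for exhaustive checking

samples = list(combinations(range(N), n))      # all equally likely SRSWOR samples
total = len(samples)                           # = C(N, n)

# First-order inclusion probability of unit 0: fraction of samples containing it.
pi_i = Fraction(sum(1 for s in samples if 0 in s), total)
print(pi_i, pi_i == Fraction(n, N))            # 1/3 True

# Second-order inclusion probability of units 0 and 1 jointly.
pi_ij = Fraction(sum(1 for s in samples if 0 in s and 1 in s), total)
print(pi_ij, pi_ij == Fraction(n * (n - 1), N * (N - 1)))   # 1/15 True
```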
Sampling error In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. This can produce biased results. Since the sample does not include all members of the population, statistics of the sample (often known as estimators), such as means and quartiles, generally differ from the statistics of the entire population (known as parameters). The difference between the sample statistic and population parameter is considered the sampling error.[1] For example, if one measures the height of a thousand individuals from a population of one million, the average height of the thousand is typically not the same as the average height of all one million people in the country. Since sampling is almost always done to estimate population parameters that are unknown, by definition exact measurement of the sampling errors will not be possible; however they can often be estimated, either by general methods such as bootstrapping, or by specific methods incorporating some assumptions (or guesses) regarding the true population distribution and parameters thereof. Description Sampling Error The sampling error is the error caused by observing a sample instead of the whole population.[1] The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter.[2] Effective Sampling In statistics, a truly random sample means selecting individuals from a population with equal probability; in other words, picking individuals from a group without bias. Failing to do this correctly will result in a sampling bias, which can dramatically increase the sampling error in a systematic way. For example, attempting to measure the average height of the entire human population of the Earth, but measuring a sample only from one country, could result in a large over- or under-estimation. In reality, obtaining an unbiased sample can be difficult as many parameters (in this example, country, age, gender, and so on) may strongly bias the estimator, and it must be ensured that none of these factors play a part in the selection process. Even in a perfectly non-biased sample, sampling error will still exist due to the remaining statistical component; consider that measuring only two or three individuals and taking the average would produce a wildly varying result each time. The likely size of the sampling error can generally be reduced by taking a larger sample.[3] Sample Size Determination The cost of increasing a sample size may be prohibitive in reality. Since the sampling error can often be estimated beforehand as a function of the sample size, various methods of sample size determination are used to weigh the predicted accuracy of an estimator against the predicted cost of taking a larger sample. Bootstrapping and Standard Error Main article: Bootstrapping (statistics) As discussed, a sample statistic, such as an average or percentage, will generally be subject to sample-to-sample variation.[1] By comparing many samples, or splitting a larger sample up into smaller ones (potentially with overlap), the spread of the resulting sample statistics can be used to estimate the standard error of the sample statistic; a minimal numerical sketch of this idea follows below.
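A minimal sketch of the bootstrap idea just described: resample the observed data with replacement many times, recompute the statistic each time, and take the standard deviation of the recomputed values as the standard error estimate. The data and the number of resamples below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=170.0, scale=10.0, size=50)   # e.g. 50 observed heights (cm)

n_boot = 2000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

se_bootstrap = boot_means.std(ddof=1)                     # bootstrap standard error of the mean
se_formula = sample.std(ddof=1) / np.sqrt(sample.size)    # classical s/sqrt(n), for comparison
print(se_bootstrap, se_formula)                           # the two values should be close
```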
In Genetics The term "sampling error" has also been used in a related but fundamentally different sense in the field of genetics; for example in the bottleneck effect or founder effect, when natural disasters or migrations dramatically reduce the size of a population, resulting in a smaller population that may or may not fairly represent the original one. This is a source of genetic drift, as certain alleles become more or less common, and has been referred to as "sampling error",[4] despite not being an "error" in the statistical sense. See also • Margin of error • Propagation of uncertainty • Ratio estimator • Sampling (statistics) References 1. Sarndal, Swenson, and Wretman (1992), Model Assisted Survey Sampling, Springer-Verlag, ISBN 0-387-40620-4 2. Burns, N.; Grove, S. K. (2009). The Practice of Nursing Research: Appraisal, Synthesis, and Generation of Evidence (6th ed.). St. Louis, MO: Saunders Elsevier. ISBN 978-1-4557-0736-2. 3. Scheuren, Fritz (2005). "What is a Margin of Error?". What is a Survey? (PDF). Washington, D.C.: American Statistical Association. Archived from the original (PDF) on 2013-03-12. Retrieved 2008-01-08. 4. Campbell, Neil A.; Reece, Jane B. (2002). Biology. Benjamin Cummings. pp. 450–451. ISBN 0-536-68045-0.
Samuel C. Bradford Samuel Clement Bradford (10 January 1878 in London – 13 November 1948) was a British mathematician, librarian and documentalist at the Science Museum in London. He developed "Bradford's law" (or the "law of scattering") regarding differences in demand for scientific journals. This work influences bibliometrics and citation analysis of scientific publications. Bradford founded the British Society for International Bibliography (BSIB) (est. 1927) and he was elected president of the International Federation for Information and Documentation (FID) in 1945. Bradford was a strong proponent of the UDC and of establishing abstracts of the scientific literature. Samuel Clement Bradford Born 10 January 1878, London, England Died 13 November 1948 (aged 70), United Kingdom Known for Bradford's law Scientific career Fields Mathematics Bibliography • Bradford, S. C. (1934). Sources of information on specific subjects. Engineering, 26, p. 85–86. • Bradford, S. C. (1946). Romance of Roses. London: F. Muller. • Bradford, S. C. (1948). Documentation. London: Crosby Lockwood. • Bradford, S. C. (1953). Documentation. 2nd ed. With an introd. by Jesse H. Shera and Margaret E. Egan. London: Crosby Lockwood. References • Gosset, M. & Urquhart, D. J. (1977). S. C. Bradford, Keeper of the Science Museum Library 1925–1937. Journal of Documentation, 33, 173–179. Authority control International • FAST • ISNI • VIAF • 2 National • Germany • Israel • United States • Czech Republic • Netherlands • Poland People • Trove Other • IdRef
Samuel Dunn (mathematician) Samuel Dunn (1723 - 1794) was a British mathematician,[1][2] teacher, cartographer and amateur astronomer. Samuel Dunn Born1723 Crediton Died1794 London Burial placeLondon NationalityBritish Occupations • Mathematician • teacher • cartographer • astronomer SpouseElizabeth Harrison (married 1763) Biography Early life He was born to John and Alice Dunn in Crediton, Devonshire, and baptised there on 7 February 1723. His father died at Crediton in 1744. Samuel Dunn wrote in his will: In 1743, when the first great fire broke out and destroyed the west town, I had been some time keeping a school and teaching writing, accounts, navigation, and other mathematical science, although not above twenty years of age; then I moved to the schoolhouse at the foot of Bowdown [now Bowden] Hill, and taught there till Christmas 1751, when I came to London.[1] The schoolhouse was the place where the "English school" was kept previously to its union with the blue school in 1821. Life and Career in London Dunn moved to London in December 1751, where he taught in different schools, and gave private lessons.[1] In 1757, he came before the public as the inventor of the "universal planispheres, or terrestrial and celestial globes in plano", four large stereographical maps, with a transparent index placed over each map, whereby the circles of the sphere are instantaneously projected on the plane of the meridian for any latitude, and the problems of geography, astronomy, and navigation wrought with the same certainty and ease as by the globes themselves, without the help of scale and compasses, pen and ink.[1] He published an account of their Description and Use, 2nd edition, octavo, London, 1759. From the preface, it appears that in 1758 Dunn had become master of an academy "for boarding and qualifying young gentlemen in arts, sciences, and languages, and for business", at Chelsea. It was the Maritime Academy, at Ormond House, Paradise Row[3] where there was a good observatory.[1] On 1 January 1760, he made the observation of a remarkable comet.[4] Other discoveries he communicated to the Royal Society; between 1761 and 1771, Dunn contributed nine papers to the Philosophical Transactions of the Royal Society, of which body, however, he was not a fellow.[5][6][7][8][9][10][11][12][13] On the title-page of his Atlas he appears as a member of the Philosophical Society at Philadelphia, America. A few of his letters to the historian Thomas Birch are preserved,[14] and one to the botanist Emanuel Mendes da Costa.[15] Dunn married Elizabeth Harrison in 1763. Towards the close of 1763, he gave up the school at Chelsea, and fixing himself at Brompton Park, near Kensington, resumed once more his private teaching. In 1764 he made a short tour through France.[16] In 1774, when residing at 6 Clement's Inn, near Temple Bar, he published his excellent New Atlas of the Mundane System, or of Geography and Cosmography, describing the Heavens and the Earth. … The whole elegantly engraved on sixty-two copper plates. With a general introduction, folio, London (second and third editions of which appeared in 1788 and 1789, respectively). About this time his reputation led to his being appointed mathematical examiner of the candidates for the East India Company's service.[1] Under the company's auspices he was enabled to publish in a handsome form several of his more important works. Such were:[1] 1. 
A New and General Introduction to Practical Astronomy, with its application to Geography … Topography, octavo, London, 1774. 2. The Navigators Guide to the Oriental or Indian Seas, or the Description and Use of a Variation Chart of the Magnetic Needle, designed for shewing the Longitude throughout the principal parts of the Atlantic, Ethiopic, and Southern Oceans, octavo, London (1775). 3. A New Epitome of Practical Navigation, or Guide to the Indian Seas, containing (1) the Elements of Mathematical Learning, used … in the Theory and Practice of Nautical affairs; (2) the Theory of Navigation. ..; (3) the Method of Correcting and Determining the Longitude at Sea …; (4) the Practice of Navigation in all kinds of Sailing (with copper plates), octavo, London, 1777, and 4. The Theory and Practice of the Longitude at Sea … with copper plates, octavo, London, 1778; second edition, enlarged, quarto, London, 1786.[17] He also "methodised, corrected, and further enlarged" a goodly quarto, entitled A New Directory for the East Indies … being a work originally begun upon the plan of the Oriental Neptune, augmented and improved by Mr. Willm. Herbert, Mr. Willm. Nichelson, and others, fifth edition, London, 1780, with a sixth edition following in 1791. Dunn was living at 8 Maiden Lane, Covent Garden, in July 1777, but by September 1780 had taken up his abode at 1 Boar's Head Court, Fleet Street, where he continued for the remainder of his life.[18] Death and legacy He died in January 1794. His will, dated 5 January 1794, was proved at London, on 20 January by his kinsman, William Dunn, officer of excise of London (registered in P.C.C., 16, Holman).[19] Therein he describes himself as "teacher of the mathematics and master for the longitude at sea", and desires to be buried "in the parish church belonging to the place where I shall happen to inhabit a little time before my decease". He names seven relations to whom he left £20 each; but to his wife, Elizabeth Dunn, "who hath withdrawn herself from me near thirty years, the sum only of ten pounds". No children are mentioned.[18] His library and instruments were sold at auction.[20] He also requested the corporation of Crediton to provide always and have a master of the school at the foot of Bowden Hill residing therein, of the church of England, but not in holy orders, an able teacher of writing, navigation, the lunar method of taking the longitude at sea, planning, drawing, and surveying, with all mathematical science. For this purpose he left £30 a year. Six boys were to be taught, with a preference to his own descendants. The stock thus bequeathed produced in 1823 dividends amounting to £25 4/- per annum, the school being known by the name of Dunn's School.[21] Published works Besides the seven works mentioned above and his many maps and charts, he also published the following (based on Goodwin (1888), with corrections and additions from modern library catalogues):[22] • A Popular Lecture on the Astronomy and Philosophy of Comets, octavo, London, 1759. • Improvements in the Doctrine of the Sphere, Astronomy, Geography, Navigation, &c. Deduced from the Figure and Motion of the Earth; and Absolutely Necessary to be Applied in Finding the True Longitude at Sea and Land, quarto, London, 1765. • A Determination of the exact Moments of Time when the Planet Venus was at external and internal contact with the Sun's Limb, in the Transits of 6 June 1761 and 3 June 1769, quarto, London, 1770. 
• An Introduction to the Theory and Use of the Pantographer; As Made and Improved by Thomas Newman, (Successor to Mess. Heath and Wing,) Mathematical Instrument Maker in Exeter Change, London, 1774. • A New Atlas of Variations of the Magnetic Needle for the Atlantic, Ethiopic, Southern and Indian Oceans; drawn from a theory of the magnetic system, London, 1776. • The Description and Use of a New and Easy Formula, for determining the time of the day, the azimuth of the sun, and the latitude, London, 1777. • A New and Easy Method of finding the Latitude on Sea or Land, octavo, London, 1778. • Nautical Propositions and Institutes; or Directions for the Practice of Navigation, octavo, London, 1781. • An Introduction to Latitude, without Meridian Altitudes; and Longitude, at Sea; having Contemporary Observations: with Astronomical Delineations and Nautical Formulas, engraved on copper plates, octavo, London, 1782. • The Linear Tables described, and their utility verified, octavo, London, 1783. • Lunar Tables, Nos. 1–5, folio, London, 1783. • A new Formula for Latitude, s. sh. quarto, London, 1784. Engraved. • Formulas for all parts of Navigation, having the Tables of Logarithms, s. sh. quarto, London, 1784. Engraved. • General Magnetic and True Journal at Sea, s. sh. quarto, London, 1784. Engraved. • Magnetic and true Journal at Sea, s. sh. quarto, London, 1784. Engraved. (Another edition, s. sh. quarto, London, 22 September 1784. Engraved.). • Rules for a Ship's Journal at Sea, s. sh. folio, London, 1784. Engraved. • Ship's Journal at Sea, s. sh. quarto, London, 1784. Engraved. • A Table for Transverses and Currents, s. sh. quarto, London, 1784. • Tables of Correct and Concise Logarithms … with a compendious Introduction to Logarithmetic, octavo, London, 1784. • Precepts, Formulas, Tables, Charts, and Improvements, London, 1784. • Nautic Tables, octavo, London, 1785. • Tables of Time and Degrees, and hourly change of the Suns right Ascension', s. sh. quarto, London, 1786. • A Description of peculiar Charts and Tables for facilitating a Discovery of both the Latitude and Longitude in a Ship at Sea, folio, London, 1787. • Linear Tables, one, two, three, four, and five, abridged, &c. (Linear Tables viii. ix. of Proper Logarithms. Linear Tables x. xi.) 3 plates, folio, London, 1788. • Linear Table xvi. for showing the Suns Declination. (Errata in the reductions.)' folio, London, 1788. • The Lunar Method Shortend in Calculation & Improv'd. (Short Rules for practical navigation.)' octavo, London, 1788. • A Navigation Table for shortening days works, s. sh. folio, London, 1788. • The Longitude Journal; its description and application, folio, London, 1789. • The Sea-Journal improved, with its description, &c., folio, London, 1789. • The Daily Uses of Nautical Sciences in a Ship at Sea, particularly in finding and keeping the Latitude and Longitude during a voyage, octavo, London, 1790. • An Introduction to the Lunar Method of Finding the Longitude in a Ship at Sea, &c., octavo, London, 1790. • A New Directory for the East Indies, 6th edition, London, 1791. • The Astronomy of Fixed Stars, concisely deduced from original principles, and prepared for application to Geography and Navigation, Part I., quarto, London, 1792. • Improvements in the Methods now in use for taking the Longitude of a Ship at Sea. Invented and described by S. Dunn, octavo, London, 1793. 
• The Longitude Logarithms; in their Regular and Shortest Order, made easy for use in taking the Latitude and Longitude, at Sea and Land, octavo, London, 1793 (British Museum Cat.; Watt, '"Bibl. Brit"'. i. 324 f.). Citations 1. Goodwin 1888, p. 210. 2. "The Oxford Dictionary of National Biography". Oxford Dictionary of National Biography (online ed.). Oxford University Press. 2004. doi:10.1093/ref:odnb/8281. (Subscription or UK public library membership required.) 3. Goodwin 1888, p. 210 cites: Faulkner, '"Chelsea"', ed. 1829, ii. 211. 4. Goodwin 1888, p. 210 cites: Ann. Reg. iii. 65. 5. "XXXV. Some observations of the planet Venus, on the disk of' the Sun, June 6th, 1761; with a preceding account of the method taken for verifying the time of that phœnomenon; and certain reasons for an atmosphere about Venus". Philosophical Transactions of the Royal Society of London. 52: 184–195. 1761. doi:10.1098/rstl.1761.0036. S2CID 186215035. 6. "LXXII. An attempt to assign the cause, why the sun and moon appear to the naked eye larger when they are near the horizon. With an account of several natural phœnomena, relative to this subject". Philosophical Transactions of the Royal Society of London. 52: 462–473. 1761. doi:10.1098/rstl.1761.0074. S2CID 186213015. 7. "XCIV. Certain reasons for a lunar atmosphere". Philosophical Transactions of the Royal Society of London. 52: 578–580. 1761. doi:10.1098/rstl.1761.0096. S2CID 186211644. 8. "CIV. An account of the eclipse of the Sun, October 16, 1762, in a letter from Mr. Samuel Dunn, to Mr. James Short, M. A. And F. R. S". Philosophical Transactions of the Royal Society of London. 52: 644–646. 1761. doi:10.1098/rstl.1761.0106. S2CID 186208654. 9. "IX. An account of an appulse of the Moon to the planet Jupiter, observed at Chelsea". Philosophical Transactions of the Royal Society of London. 53: 31. 1763. doi:10.1098/rstl.1763.0010. S2CID 186209150. 10. "XVIII. Remarks on the censure of mercator's chart, in a posthumous work of Mr. West, of Exeter: In a letter to Thomas Birch, D. D. Secretary to the Royal Society, from Mr. Samuel Dunn". Philosophical Transactions of the Royal Society of London. 53: 66–68. 1763. doi:10.1098/rstl.1763.0019. S2CID 186208456. 11. "XLIX. An account of a remarkable meteor: In a letter to the Reverend Thomas Birch, D. D. Secret. Of R. S. From Mr. Samuel Dunn". Philosophical Transactions of the Royal Society of London. 53: 351–352. 1763. doi:10.1098/rstl.1763.0050. S2CID 186210146. 12. "XX. Observations on the eclipse of the sun, April 1, 1764, at Brompton-Park". Philosophical Transactions of the Royal Society of London. 54: 114–117. 1764. doi:10.1098/rstl.1764.0022. S2CID 186214985. 13. "IX. A determination of the exact moments of time when the planet Venus was at external and internal contact with the Sun's limb, in the transits of June 6th, 1761, and June 3d, 1769, by Samuel Dunn". Philosophical Transactions of the Royal Society of London. 60: 65–73. 1771. doi:10.1098/rstl.1770.0009. S2CID 186210827. 14. Goodwin 1888, p. 211 notes it is in Addit. manuscript 4305, following 85–90. 15. Goodwin 1888, p. 211 notes it is in Addit. manuscript 28536, f. 241. 16. Goodwin 1888, p. 210 cites: Addit. MS. 28536, f. 241. 17. Goodwin 1888, p. 210-211. 18. Goodwin 1888, p. 211. 19. "Will of Samuel Dunn". GENUKI. 20 January 1794. Retrieved 27 March 2021. 20. Auction catalogue of Samuel Dunn's library and instruments "Samuel Dunn, mathematician, 1794". History of Science Museum. Leigh and Sotheby. 10 April 1794. Retrieved 27 March 2021. 21. 
Goodwin 1888, p. 211 cites: Tenth Report of Charities Commissioners, 28 June 1823, pages 78–9; Lysons, Magna Britannia, volume vi. (Devonshire) part ii. page 150. 22. Goodwin 1888, p. 212. References Attribution •  This article incorporates text from a publication now in the public domain: Goodwin, Gordon (1888). "Dunn, Samuel (d.1794)". In Stephen, Leslie (ed.). Dictionary of National Biography. Vol. 16. London: Smith, Elder & Co. pp. 211–213. External links • Heard, Nick (30 December 2016). "Samuel Dunn 1723-1794". The Heard Family of Mid-Devon. Retrieved 27 March 2021. Authority control International • ISNI • VIAF National • Spain • Germany • Israel • Belgium • United States • Sweden • Netherlands • Poland Other • SNAC • IdRef
Samuel Earnshaw Samuel Earnshaw (1 February 1805, Sheffield, Yorkshire – 6 December 1888, Sheffield, Yorkshire[1]) was an English clergyman, mathematician, and physicist, noted for his contributions to theoretical physics, especially "Earnshaw's theorem". Samuel Earnshaw Born 1 February 1805, Sheffield, Yorkshire, England Died 6 December 1888 (aged 83), Sheffield, Yorkshire, England Known for Earnshaw's theorem Earnshaw was born in Sheffield and entered St John's College, Cambridge, graduating Senior Wrangler and Smith's Prizeman in 1831.[2] From 1831 to 1847 Earnshaw worked in Cambridge as a tripos coach, and in 1846 was appointed to the parish church St. Michael, Cambridge. For a time he acted as curate to the Revd Charles Simeon. In 1847 his health broke down and he returned to Sheffield, working as a chaplain and teacher. Earnshaw published several mathematical and physical articles and books. His most famous contribution, "Earnshaw's theorem", shows that stable levitation of permanent magnets by static forces alone is impossible; other topics included optics, waves, dynamics and acoustics in physics, and calculus, trigonometry and partial differential equations in mathematics. As a clergyman, he published several sermons and treatises. See also • Cotes's spiral References 1. GRO Register of Deaths: DEC 1888 9c 246 ECCLESALL B. (aged 83) 2. "Samuel Earnshaw (ENSW827S)". A Cambridge Alumni Database. University of Cambridge. External links • Samuel Earnshaw Authority control International • VIAF Other • IdRef
Samuel Hawksley Burbury Samuel Hawksley Burbury, FRS (18 May 1831 – 18 August 1911) was a British mathematician. Life He was born on 18 May 1831 at Kenilworth, the only son of Samuel Burbury of Clarendon Square, Leamington, by Helen his wife.[1] He was educated at Shrewsbury School (1848–1850), where he was head boy, and at St. John's College, Cambridge. At the university he won exceptional distinction in both classics and mathematics. He was twice Porson prizeman (1852 and 1853), Craven university scholar (1853), and chancellor's classical medallist (1854). He graduated B.A. as fifteenth wrangler and second classic in 1854, becoming fellow of his college in the same year; he proceeded M.A. in 1857.[1] On 6 October 1855 he entered as a student at Lincoln's Inn, and was called to the bar on 7 June 1858. From 1860 he practised at the parliamentary bar; but increasing deafness compelled him to take chamber practice only, from which he retired in 1908. He was elected F.R.S. on 5 June 1890. He died on 18 August 1911 at his residence, 15 Melbury Road, London, W., and was buried at Kensal Green.[1] Contributions While engaged in legal work Burbury pursued with much success advanced mathematical study, chiefly in collaboration with his Cambridge friend, Henry William Watson. Together they wrote the treatises The Application of Generalised Co-ordinates to the Kinetics of a Material System (Oxford, 1879) and The Mathematical Theory of Electricity and Magnetism (2 vols., Oxford, 1885–9), in which the endeavour was made to carry on the researches of Clerk Maxwell and to place electrostatics and electromagnetism on a more formal mathematical basis.[1] Among many papers which Burbury contributed independently to the Philosophical Magazine were those 'On the Second Law of Thermodynamics, in Connection with the Kinetic Theory of Gases' (1876) and 'On a Theorem in the Dissipation of Energy' (1882). Family Burbury married on 12 April 1860 Alice Ann, eldest daughter of Thomas Edward Taylor, J.P., of Dodworth Hall, Barnsley, Yorkshire, and had issue four sons and two daughters. A portrait of Burbury by William E. Miller (1884) is in the possession of his widow.[1] References 1. Owen 1912. Attribution This article incorporates text from a publication now in the public domain: Owen, D. J. (1912). "Burbury, Samuel Hawksley". In Lee, Sidney (ed.). Dictionary of National Biography (2nd supplement). Vol. 1. London: Smith, Elder & Co. External links • Works by or about Samuel Hawksley Burbury at Internet Archive Authority control International • ISNI • VIAF • WorldCat National • Israel • United States • Australia • Netherlands Academics • CiNii • Scopus • zbMATH Other • IdRef
Samuel Lattès Samuel Lattès (21 February 1873, Nice – 5 July 1918) was a French mathematician.[1] From 1892 to 1895 he studied at the École Normale Supérieure. After this he was a teacher in Algiers, Dijon and Nice. After receiving his doctorate in Paris in 1906, he moved first to Montpellier in 1908 and then to Besançon, before taking up a professorship at the University of Toulouse in 1911. He died of typhoid fever in 1918. Today Lattès is best known for his work in complex dynamics, particularly for examples of rational functions whose Julia set is the entire Riemann sphere. Today these are described as Lattès maps or Lattès examples.[2] See also • Pierre Fatou • Gaston Julia • Lattès map Bibliography • Adolphe Buhl: Éloge de Samuel Lattès. Mémoires de l'Académie des Sciences, Inscriptions et Belles-Lettres de Toulouse, vol. 9, 1921, pp. 1–13. • Michèle Audin (2009), Fatou, Julia, Montel, le grand prix des sciences mathématiques de 1918, et après … (in French), Springer, ISBN 978-3-642-00445-2; English translation: Michèle Audin (2011), Fatou, Julia, Montel, The Great Prize of Mathematical Sciences of 1918, and Beyond, Springer, ISBN 978-3-642-17853-5 References 1. Daniel S. Alexander (29 June 2013). A History of Complex Dynamics: From Schröder to Fatou and Julia. Springer Science & Business Media. p. 54. ISBN 978-3-663-09197-4. 2. For a modern account of the Lattès examples and more recent results, see: John Milnor, On Lattès maps. In Dynamics on the Riemann sphere, European Mathematical Society, Zürich, 2006, pp. 9–43. Authority control International • ISNI • VIAF National • Netherlands Academics • zbMATH Other • IdRef
Samuel Marolois Samuel Marolois (c. 1572 – before 1627) was a Dutch mathematician and military engineer who is best known for his work on perspective. Life and work Marolois (or Marlois) was born c. 1572 in the Dutch Republic (possibly in The Hague) as son of Nicolas Marolois, a Protestant native of Valenciennes who had been exiled from France and served the Prince of Orange.[1][2][3] Marolois became a mathematician and earthworks engineer in the employ of Maurice, Prince of Orange[2][3] He was married to Hester le Maire, which made him a brother-in-law of the Amsterdam merchants Thomas le Maire and Pieter le Fevre. In March 1611, he bought a house in The Hague.[1] After the death of Ludolph van Ceulen, Marolois attempted unsuccessfully to succeed him as Chair of Mathematics in Leiden.[1][2] Marolois wrote a book on perspective, La perspective contenant la theorie et la practique d'icelle, which was published in 1614 and printed many times in other languages including Dutch, German and Latin.[3] The book had both theoretical and practical elements.[4] The theoretical parts were mostly taken from the works of Guidobaldo del Monte, while the practical parts included many examples. In total, 275 figures are printed in the book.[5] While Marolois' work contributed little to the mathematical theory of perspective, his book was influential in spreading awareness of the ideas. The artist Joshua Kirby later claimed it was one of the most important early books on perspective.[6] Marolois was a military adviser to the Dutch Republic 1612–1619. His Fortification ou architecture militaire described the cheapest way to build fortifications.[7] It was the first systematic treatment of the Dutch system of fortifications, using geometric operations to draw polygonal plans, and is famous for the drawing of the citadel of Coevorden.[2] Marolois died in The Hague before 1627.[1][2] His books were edited by Albert Girard.[8] References 1. de Waard 1912. 2. Goudeau 2015. 3. Andersen 2008, p. 297. 4. Andersen 2008, p. 298. 5. Andersen 2008, p. 299. 6. Andersen 2008, p. 309. 7. Parker 1976, p. 60. 8. Desargues 1987, p. 25. Bibliography • Andersen, Kirsti (23 November 2008). The Geometry of an Art: The History of the Mathematical Theory of Perspective from Alberti to Monge. Springer Science & Business Media. doi:10.1007/978-0-387-48946-9. ISBN 978-0-387-48946-9. • Desargues, Gérard (1987). The geometrical work of Girard Desargues. New York : Springer-Verlag. ISBN 978-0-387-96403-4. • Goudeau, Jeroen (2015). "Architectura – Les livres d'Architecture". architectura.cesr.univ-tours.fr (in French). Retrieved 10 March 2022. • de Waard, C. (1912). "Marolois (Samuel)". Nieuw Nederlandsch Biografisch Woordenboek (NNBW) (in Dutch). Vol. 2. col. 873–875. • Parker, Geoffrey (1976). "Why Did the Dutch Revolt Last Eighty Years?". Transactions of the Royal Historical Society. 26: 53–72. doi:10.2307/3679072. ISSN 0080-4401. JSTOR 3679072. S2CID 161522209. Authority control International • ISNI • VIAF National • Norway • Spain • France • BnF data • Catalonia • Germany • Italy • Belgium • United States • Sweden • Czech Republic • Netherlands • Portugal People • Netherlands Other • IdRef
Samuel Merrill III Samuel Merrill III (born 1939) is an American mathematician and political scientist best known for his work on alternative voting systems, voter behavior, party competition, and arbitration.[1][2][3] Merrill was raised in Bogalusa, Louisiana. He received his bachelor's degree from Tulane University and his Ph.D. in mathematics in 1965 from Yale University under C. E. Rickart with thesis Banach Spaces of Analytic Functions.[4] Merrill was a professor of mathematics and statistics at Wilkes University until he retired in 2004. Merrill's son, Andrew Merrill, is a computer science teacher at Catlin Gabel School, in Portland, Oregon. Samuel Merrill is the author of three books on political science: • Making Multicandidate Elections More Democratic (1988, Princeton University Press) • A Unified Theory of Voting with Bernard Grofman (1999, Cambridge University Press) • A Unified Theory of Party Competition with James Adams and Bernard Grofman (2005, Cambridge University Press) References 1. Sawyer, Kathy (October 9, 1995). "A Paradox Of Majority Politics". p. A3. Retrieved 28 April 2011. 2. Haskell, John (May 1996). Fundamentally flawed: understanding and reforming presidential primaries. Rowman & Littlefield. pp. 92–. ISBN 978-0-8476-8241-6. Retrieved 28 April 2011. 3. Farber, Daniel A.; O'Connell, Anne Joseph (May 2010). Research handbook on public choice and public law. Edward Elgar Publishing. pp. 130–. ISBN 978-1-84720-674-9. Retrieved 28 April 2011. 4. Samuel Merrill, III at the Mathematics Genealogy Project External links • Wilkes University page Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • United States • Czech Republic • Netherlands Academics • MathSciNet • Mathematics Genealogy Project Other • SNAC • IdRef
Samuel Segun Okoya Samuel Segun Okoya (born 20 October 1958, in Zaria, Kaduna State, Nigeria) is an academic in applied mathematics at Obafemi Awolowo University. He is the editor-in-chief of the Journal and Notices of the Nigerian Mathematical Society[4] and the First Occupier of Pastor E.A Adeboye Outstanding Professor of Mathematics (Endowed Professorial Chair) University of Lagos.[5] He is the first fully bred alumnus to attain the position of professor and head of the Mathematics Department at Obafemi Awolowo University.[1] Samuel Segun Okoya Born (1958-10-20) 20 October 1958, Zaria, Kaduna State[1] NationalityNigerian[1] EducationMathematics[1] Alma materObafemi Awolowo University[1] OccupationProfessor[1] Years active1989–present Known forEditor-in-chief, Journal and Notices of the Nigerian Mathematical Society;[2] firstPastor E.A Adeboye Outstanding Professor of Mathematics, (Endowed Professorial Chair) University of Lagos.[3] SpouseAderonke A. Okoya (Nee Olubakin)[1] Children3[1] RelativesRazaq Okoya Education Okoya had his entire tertiary education at Obafemi Awolowo University. He graduated with his bachelor's degree in 1983, earned his master's degree in 1986 and Ph.D. in 1989. His PhD dissertation, supervised by Reuben O. Ayeni, was titled "A Mathematical Model for Explosions with Chain Branching and Chain Breaking Kinetics".[6] He is a Fellow of the Mathematical Association of Nigeria and Nigerian Mathematical Society. He regularly visits the Abdus Salam International Centre for Theoretical Physics, The World Academy of Sciences and International Mathematical Union.[1] References 1. "Samuel Segun Okoya an Inspirational Pedagogue and Achiever at 60". Vanguard Nigeria. 23 December 2018. Retrieved 18 February 2019. 2. "Nigerian Mathematial Society - Governing Council". Nigerian Mathematical Society. Retrieved 18 February 2019. 3. Segun, Otokiti. "First Adeboye's educational endowment boss tips Mathematics as propeller of tech revolution". World Stage. World Stage Group. Retrieved 18 February 2019. 4. "Nigerian Mathematical Society elects leaders". The Nation. 8 July 2015. Retrieved 18 February 2019. 5. Ujunwa Atueyi (25 February 2015). "Don calls for intervention in teaching, learning of mathematics". Guardian Nigeria. Retrieved 18 February 2019. 6. "Samuel Segun Okoya". Mathematics Genealogy Project. Department of Mathematics, North Dakota State University. Retrieved 18 February 2019. Authority control: Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
Ramanujam–Samuel theorem In algebraic geometry, the Ramanujam–Samuel theorem gives conditions for a divisor of a local ring to be principal. It was introduced independently by Samuel (1962) in answer to a question of Grothendieck and by C. P. Ramanujam in an appendix to a paper by Seshadri (1963), and was generalized by Grothendieck (1967, Theorem 21.14.1). Statement Grothendieck's version of the Ramanujam–Samuel theorem (Grothendieck & Dieudonné 1967, theorem 21.14.1) is as follows. Suppose that A is a local Noetherian ring with maximal ideal m, whose completion is integral and integrally closed, and ρ is a local homomorphism from A to a local Noetherian ring B of larger dimension such that B is formally smooth over A and the residue field of B is finite over that of A. Then a cycle of codimension 1 in Spec(B) that is principal at the point mB is principal. References • Grothendieck, Alexandre; Dieudonné, Jean (1967). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie". Publications Mathématiques de l'IHÉS. 32: 5–361. doi:10.1007/bf02732123. MR 0238860. • Samuel, Pierre (1962), "Sur une conjecture de Grothendieck", Les Comptes rendus de l'Académie des sciences, 255: 3101–3103, MR 0154887 • Seshadri, C. S. (1963), "Quotient space by an abelian variety", Mathematische Annalen, 152: 185–194, doi:10.1007/BF01470879, ISSN 0025-5831, MR 0164973
Sangaku Sangaku or san gaku (Japanese: 算額, lit. 'calculation tablet') are Japanese geometrical problems or theorems on wooden tablets which were placed as offerings at Shinto shrines or Buddhist temples during the Edo period by members of all social classes. History The sangaku were painted in color on wooden tablets (ema) and hung in the precincts of Buddhist temples and Shinto shrines as offerings to the kami and buddhas, as challenges to the congregants, or as displays of the solutions to questions. Many of these tablets were lost during the period of modernization that followed the Edo period, but around nine hundred are known to remain. Fujita Kagen (1765–1821), a Japanese mathematician of prominence, published the first collection of sangaku problems, his Shimpeki Sampo (Mathematical Problems Suspended from the Temple) in 1790, and in 1806 a sequel, the Zoku Shimpeki Sampo. During this period Japan applied strict regulations to commerce and foreign relations with western countries, so the tablets were created using Japanese mathematics, which developed in parallel with western mathematics. For example, the connection between an integral and its derivative (the fundamental theorem of calculus) was unknown, so sangaku problems on areas and volumes were solved by expansions in infinite series and term-by-term calculation. Select examples • A typical problem, which is presented on an 1824 tablet in Gunma Prefecture, covers the relationship of three touching circles with a common tangent, a special case of Descartes' theorem. Given the size of the two outer large circles, what is the size of the small circle between them? The answer is: ${\frac {1}{\sqrt {r_{\text{middle}}}}}={\frac {1}{\sqrt {r_{\text{left}}}}}+{\frac {1}{\sqrt {r_{\text{right}}}}}.$ The six primitive triplets of integer radii up to 1000 satisfying this relation, written as (r_middle, r_left, r_right), are (1, 4, 4), (4, 9, 36), (9, 16, 144), (16, 25, 400), (72, 200, 450) and (144, 441, 784). (See also Ford circle.) A short computational check of this relation is given after the external links below. • Soddy's hexlet, thought previously to have been discovered in the west in 1937, had been discovered on a sangaku dating from 1822. • One sangaku problem from Sawa Masayoshi and another from Jihei Morikawa were solved only recently.[1][2] See also • Equal incircles theorem • Japanese theorem for concyclic polygons • Japanese theorem for concyclic quadrilaterals • Problem of Apollonius • Recreational mathematics • Seki Takakazu Notes 1. Holly, Jan E.; Krumm, David (2020-07-25). "Morikawa's Unsolved Problem". arXiv:2008.00922 [math.HO]. 2. Kinoshita, Hiroshi (2018). "An Unsolved Problem in the Yamaguchi's Travell Diary" (PDF). Sangaku Journal of Mathematics. 2: 43–53. References • Fukagawa, Hidetoshi, and Dan Pedoe. (1989). Japanese temple geometry problems = Sangaku. Winnipeg: Charles Babbage. ISBN 9780919611214; OCLC 474564475 • __________ and Dan Pedoe. (1991) How to resolve Japanese temple geometry problems? (日本の幾何ー何題解けますか?, Nihon no kika nan dai tokemasu ka) Tōkyō: Mori Kitashuppan. ISBN 9784627015302; OCLC 47500620 • __________ and Tony Rothman. (2008). Sacred Mathematics: Japanese Temple Geometry. Princeton: Princeton University Press. ISBN 069112745X; OCLC 181142099 • Huvent, Géry. (2008). Sangaku. Le mystère des énigmes géométriques japonaises. Paris: Dunod. ISBN 9782100520305; OCLC 470626755 • Rehmeyer, Julie, "Sacred Geometry", Science News, March 21, 2008. • Rothman, Tony; Fugakawa, Hidetoshi (May 1998). "Japanese Temple Geometry". Scientific American. pp. 84–91. External links Wikimedia Commons has media related to Sangaku.
• Sangaku (Japanese votive tablets featuring mathematical puzzles) • Japanese Temple Geometry Problem • Sangaku: Reflections on the Phenomenon • Sangaku Journal of Mathematics Authority control: National • Japan
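The three-circle relation quoted in the examples above determines the middle radius from the two outer radii. The sketch below (an illustrative computation with a hypothetical helper name, not part of the original article) solves for it and confirms the six integer triplets listed there.

```python
from math import isclose, sqrt

def middle_radius(r_left, r_right):
    """Radius of the small circle tangent to both outer circles and their common
    tangent line, from 1/sqrt(r_middle) = 1/sqrt(r_left) + 1/sqrt(r_right)."""
    return 1.0 / (1.0 / sqrt(r_left) + 1.0 / sqrt(r_right)) ** 2

print(middle_radius(9, 36))     # 4.0
print(middle_radius(200, 450))  # approximately 72.0

# Check the six triplets (r_middle, r_left, r_right) quoted in the article.
triplets = [(1, 4, 4), (4, 9, 36), (9, 16, 144), (16, 25, 400), (72, 200, 450), (144, 441, 784)]
print(all(isclose(middle_radius(l, r), m) for m, l, r in triplets))   # True
```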
The Sand Reckoner The Sand Reckoner (Greek: Ψαμμίτης, Psammites) is a work by Archimedes, an Ancient Greek mathematician of the 3rd century BC, in which he set out to determine an upper bound for the number of grains of sand that fit into the universe. In order to do this, Archimedes had to estimate the size of the universe according to the contemporary model, and invent a way to talk about extremely large numbers. The Sand Reckoner (Arenarius) AuthorArchimedes LanguageLatin GenreGoogology, Astronomy The work, also known in Latin as Arenarius, is about eight pages long in translation and is addressed to the Syracusan king Gelo II (son of Hiero II). It is considered the most accessible work of Archimedes.[1] Naming large numbers See also: Exponentiation § History of the notation Periods and orders with their intervals in modern notation[2] PeriodOrderIntervallog10 of interval 11(1, Ơ], where the unit of the second order, Ơ = 108 (0, 8] 2(Ơ, Ơ2](8, 16] ··· k(Ơk − 1, Ơk](8k − 8, 8k] ··· Ơ(ƠƠ − 1, Ƥ], where the unit of the second period, Ƥ = ƠƠ = 108×108 (8×108 − 8, 8×108] = (799,999,992, 800,000,000] 21(Ƥ, ƤƠ](8×108, 8 × (108 + 1)] = (800,000,000, 800,000,008] 2(ƤƠ, ƤƠ2](8 × (108 + 1), 8 × (108 + 2)] ··· k(ƤƠk − 1, ƤƠk](8 × (108 + k − 1), 8 × (108 + k)] ··· Ơ(ƤƠƠ − 1, ƤƠƠ] = (Ƥ2Ơ−1, Ƥ2] (8 × (2×108 − 1), 8 × (2×108)] = (1.6×109 − 8, 1.6×109] = (1,599,999,992, 1,600,000,000] ··· Ơ1(ƤƠ − 1, ƤƠ − 1Ơ] (8×108 × (108 − 1),  8 × (108 × (108 − 1) + 1)] = (79,999,999,200,000,000,     79,999,999,200,000,008] 2(ƤƠ − 1Ơ, ƤƠ − 1Ơ2](8 × (108 × (108 − 1) + 1),   8 × (108 × (108 − 1) + 2)] ··· k(ƤƠ − 1Ơk − 1, ƤƠ − 1Ơk](8 × (108 × (108 − 1) + k − 1), 8 × (108 × (108 − 1) + k)] ··· Ơ(ƤƠ − 1ƠƠ − 1, ƤƠ − 1ƠƠ] = (ƤƠƠ−1, ƤƠ] (8 × (2×108 − 1), 8 × (2×108)] = (8×1016 − 8, 8×1016] = (79,999,999,999,999,992,     80,000,000,000,000,000] First, Archimedes had to invent a system of naming large numbers. The number system in use at that time could express numbers up to a myriad (μυριάς — 10,000), and by utilizing the word myriad itself, one can immediately extend this to naming all numbers up to a myriad myriads (108).[3] Archimedes called the numbers up to 108 "first order" and called 108 itself the "unit of the second order". Multiples of this unit then became the second order, up to this unit taken a myriad-myriad times, 108·108=1016. This became the "unit of the third order", whose multiples were the third order, and so on. Archimedes continued naming numbers in this way up to a myriad-myriad times the unit of the 108-th order, i.e., (108)^(108) After having done this, Archimedes called the orders he had defined the "orders of the first period", and called the last one, $(10^{8})^{(10^{8})}$, the "unit of the second period". He then constructed the orders of the second period by taking multiples of this unit in a way analogous to the way in which the orders of the first period were constructed. Continuing in this manner, he eventually arrived at the orders of the myriad-myriadth period. The largest number named by Archimedes was the last number in this period, which is $\left(\left(10^{8}\right)^{(10^{8})}\right)^{(10^{8})}=10^{8\cdot 10^{16}}.$ Another way of describing this number is a one followed by (short scale) eighty quadrillion (80·1015) zeroes. 
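Archimedes' classification of a number by its order (within the first period) can be mirrored directly in integer arithmetic. The sketch below is illustrative only, using modern exponent notation and a hypothetical function name; it also restates the largest number Archimedes names.

```python
# Which "order" (of the first period) a number belongs to in Archimedes' scheme:
# the k-th order covers the interval (10**(8*(k-1)), 10**(8*k)].
def archimedean_order(n: int) -> int:
    assert n >= 1
    k = 1
    while n > 10 ** (8 * k):
        k += 1
    return k

print(archimedean_order(10_000))        # 1  (a myriad is still of the first order)
print(archimedean_order(10 ** 8))       # 1  (the unit of the second order closes the first)
print(archimedean_order(10 ** 8 + 1))   # 2
print(archimedean_order(10 ** 63))      # 8  (the sand-grain bound derived later in the article)

# The largest number Archimedes names, ((10**8)**(10**8))**(10**8) = 10**(8 * 10**16);
# its exponent alone is:
print(8 * 10 ** 16)                     # 80000000000000000
```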
Archimedes' system is reminiscent of a positional numeral system with base 108, which is remarkable because the ancient Greeks used a very simple system for writing numbers, which employs 27 different letters of the alphabet for the units 1 through 9, the tens 10 through 90 and the hundreds 100 through 900. Law of exponents Archimedes also discovered and proved the law of exponents, $10^{a}10^{b}=10^{a+b}$, necessary to manipulate powers of 10. Estimation of the size of the universe Archimedes then estimated an upper bound for the number of grains of sand required to fill the Universe. To do this, he used the heliocentric model of Aristarchus of Samos. The original work by Aristarchus has been lost. This work by Archimedes however is one of the few surviving references to his theory,[4] whereby the Sun remains unmoved while the Earth orbits the Sun. In Archimedes's own words: His [Aristarchus'] hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.[5] The reason for the large size of this model is that the Greeks were unable to observe stellar parallax with available techniques, which implies that any parallax is extremely small and so the stars must be placed at great distances from the Earth (assuming heliocentrism to be true). According to Archimedes, Aristarchus did not state how far the stars were from the Earth. Archimedes therefore had to make the following assumptions: • The Universe was spherical • The ratio of the diameter of the Universe to the diameter of the orbit of the Earth around the Sun equalled the ratio of the diameter of the orbit of the Earth around the Sun to the diameter of the Earth. This assumption can also be expressed by saying that the stellar parallax caused by the motion of the Earth around its orbit equals the solar parallax caused by motion around the Earth. Put in a ratio: ${\frac {\text{Diameter of Universe}}{\text{Diameter of Earth orbit around the Sun}}}={\frac {\text{Diameter of Earth orbit around the Sun}}{\text{ Diameter of Earth}}}$ In order to obtain an upper bound, Archimedes made the following assumptions of their dimensions: • that the perimeter of the Earth was no bigger than 300 myriad stadia (5.55·105 km). • that the Moon was no larger than the Earth, and that the Sun was no more than thirty times larger than the Moon. • that the angular diameter of the Sun, as seen from the Earth, was greater than 1/200 of a right angle (π/400 radians = 0.45° degrees). Archimedes then concluded that the diameter of the Universe was no more than 1014 stadia (in modern units, about 2 light years), and that it would require no more than 1063 grains of sand to fill it. With these measurements, each grain of sand in Archimedes's thought-experiment would have been approximately 19 μm (0.019 mm) in diameter. Calculation of the number of grains of sand in the Aristarchian Universe Archimedes claims that forty poppy-seeds laid side by side would equal one Greek dactyl (finger-width) which was approximately 19 mm (3/4 inch) in length. 
Since volume proceeds as the cube of a linear dimension ("For it has been proved that spheres have the triplicate ratio to one another of their diameters") then a sphere one dactyl in diameter would contain (using our current number system) 40^3, or 64,000 poppy seeds. He then claimed (without evidence) that each poppy seed could contain a myriad (10,000) grains of sand. Multiplying the two figures together he proposed 640,000,000 as the number of hypothetical grains of sand in a sphere one dactyl in diameter. To make further calculations easier, he rounded up 640 million to one billion, noting only that the first number is smaller than the second, and that therefore the number of grains of sand calculated subsequently will exceed the actual number of grains. Recall that Archimedes's meta-goal with this essay was to show how to calculate with what were previously considered impossibly large numbers, not simply to accurately calculate the number of grains of sand in the universe. A Greek stadium had a length of 600 Greek feet, and each foot was 16 dactyls long, so there were 9,600 dactyls in a stadium. Archimedes rounded this number up to 10,000 (a myriad) to make calculations easier, again, noting that the resulting number will exceed the actual number of grains of sand. The cube of 10,000 is a trillion (10^12); and multiplying a billion (the number of grains of sand in a dactyl-sphere) by a trillion (number of dactyl-spheres in a stadium-sphere) yields 10^21, the number of grains of sand in a stadium-sphere. Archimedes had estimated that the Aristarchian Universe was 10^14 stadia in diameter, so there would accordingly be (10^14)^3 stadium-spheres in the universe, or 10^42. Multiplying 10^21 by 10^42 yields 10^63, the number of grains of sand in the Aristarchian Universe.[6] Following Archimedes's estimate of a myriad (10,000) grains of sand in a poppy seed; 64,000 poppy seeds in a dactyl-sphere; the length of a stadium as 10,000 dactyls; and accepting 19 mm as the width of a dactyl, the diameter of Archimedes's typical sand grain would be 18.3 μm, which today we would call a grain of silt. Currently, the smallest grain of sand would be defined as 50 μm in diameter. Additional calculations Archimedes made some interesting experiments and computations along the way. One experiment was to estimate the angular size of the Sun, as seen from the Earth. Archimedes's method is especially interesting as it takes into account the finite size of the eye's pupil,[7] and therefore may be the first known example of experimentation in psychophysics, the branch of psychology dealing with the mechanics of human perception, whose development is generally attributed to Hermann von Helmholtz. Another interesting computation accounts for solar parallax and the different distances between the viewer and the Sun, whether viewed from the center of the Earth or from the surface of the Earth at sunrise. This may be the first known computation dealing with solar parallax.[1] Quote There are some, king Gelon, who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited. Again there are some who, without regarding it as infinite, yet think that no number has been named which is great enough to exceed its magnitude.
And it is clear that they who hold this view, if they imagined a mass made up of sand in other respects as large as the mass of the Earth, including in it all the seas and the hollows of the Earth filled up to a height equal to that of the highest of the mountains, would be many times further still from recognizing that any number could be expressed which exceeded the multitude of the sand so taken. But I will try to show you by means of geometrical proofs, which you will be able to follow, that, of the numbers named by me and given in the work which I sent to Zeuxippus, some exceed not only the number of the mass of sand equal in magnitude to the Earth filled up in the way described, but also that of the mass equal in magnitude to the universe.[8] — Archimedis Syracusani Arenarius & Dimensio Circuli References 1. Archimedes, The Sand Reckoner 511 R U, by Ilan Vardi, accessed 28-II-2007. 2. Alan Hirshfeld (8 September 2009). Eureka Man: The Life and Legacy of Archimedes. ISBN 9780802719799. Retrieved 17 February 2016. 3. A history of analysis. H. N. Jahnke. Providence, RI: American Mathematical Society. 2003. p. 22. ISBN 0-8218-2623-9. OCLC 51607350.{{cite book}}: CS1 maint: others (link) 4. Aristarchus biography at MacTutor, accessed 26-II-2007. 5. Arenarius, I., 4–7 6. Annotated translation of The Sand Reckoner Cal State University, Los Angeles 7. Smith, William — A Dictionary of Greek and Roman Biography and Mythology (1880), p. 272 8. Newman, James R. — The World of Mathematics (2000), p. 420 Further reading • The Sand-Reckoner, by Gillian Bradshaw. Forge (2000), 348pp, ISBN 0-312-87581-9. This is a historical novel about the life and work of Archimedes. External links • Original Greek text • The Sand Reckoner (annotated) • The Sand Reckoner (Arenario) Italian annotated translation, with notes about Archimedes and Greek mathematical notation and unit of measure. Source file of the Arenarius Greek text (for LaTeX). 
• Archimedes, The Sand Reckoner, by Ilan Vardi; includes a literal English version of the original Greek text Ancient Greek mathematics Mathematicians (timeline) • Anaxagoras • Anthemius • Archytas • Aristaeus the Elder • Aristarchus • Aristotle • Apollonius • Archimedes • Autolycus • Bion • Bryson • Callippus • Carpus • Chrysippus • Cleomedes • Conon • Ctesibius • Democritus • Dicaearchus • Diocles • Diophantus • Dinostratus • Dionysodorus • Domninus • Eratosthenes • Eudemus • Euclid • Eudoxus • Eutocius • Geminus • Heliodorus • Heron • Hipparchus • Hippasus • Hippias • Hippocrates • Hypatia • Hypsicles • Isidore of Miletus • Leon • Marinus • Menaechmus • Menelaus • Metrodorus • Nicomachus • Nicomedes • Nicoteles • Oenopides • Pappus • Perseus • Philolaus • Philon • Philonides • Plato • Porphyry • Posidonius • Proclus • Ptolemy • Pythagoras • Serenus • Simplicius • Sosigenes • Sporus • Thales • Theaetetus • Theano • Theodorus • Theodosius • Theon of Alexandria • Theon of Smyrna • Thymaridas • Xenocrates • Zeno of Elea • Zeno of Sidon • Zenodorus Treatises • Almagest • Archimedes Palimpsest • Arithmetica • Conics (Apollonius) • Catoptrics • Data (Euclid) • Elements (Euclid) • Measurement of a Circle • On Conoids and Spheroids • On the Sizes and Distances (Aristarchus) • On Sizes and Distances (Hipparchus) • On the Moving Sphere (Autolycus) • Optics (Euclid) • On Spirals • On the Sphere and Cylinder • Ostomachion • Planisphaerium • Sphaerics • The Quadrature of the Parabola • The Sand Reckoner Problems • Constructible numbers • Angle trisection • Doubling the cube • Squaring the circle • Problem of Apollonius Concepts and definitions • Angle • Central • Inscribed • Axiomatic system • Axiom • Chord • Circles of Apollonius • Apollonian circles • Apollonian gasket • Circumscribed circle • Commensurability • Diophantine equation • Doctrine of proportionality • Euclidean geometry • Golden ratio • Greek numerals • Incircle and excircles of a triangle • Method of exhaustion • Parallel postulate • Platonic solid • Lune of Hippocrates • Quadratrix of Hippias • Regular polygon • Straightedge and compass construction • Triangle center Results In Elements • Angle bisector theorem • Exterior angle theorem • Euclidean algorithm • Euclid's theorem • Geometric mean theorem • Greek geometric algebra • Hinge theorem • Inscribed angle theorem • Intercept theorem • Intersecting chords theorem • Intersecting secants theorem • Law of cosines • Pons asinorum • Pythagorean theorem • Tangent-secant theorem • Thales's theorem • Theorem of the gnomon Apollonius • Apollonius's theorem Other • Aristarchus's inequality • Crossbar theorem • Heron's formula • Irrational numbers • Law of sines • Menelaus's theorem • Pappus's area theorem • Problem II.8 of Arithmetica • Ptolemy's inequality • Ptolemy's table of chords • Ptolemy's theorem • Spiral of Theodorus Centers • Cyrene • Mouseion of Alexandria • Platonic Academy Related • Ancient Greek astronomy • Attic numerals • Greek numerals • Latin translations of the 12th century • Non-Euclidean geometry • Philosophy of mathematics • Neusis construction History of • A History of Greek Mathematics • by Thomas Heath • algebra • timeline • arithmetic • timeline • calculus • timeline • geometry • timeline • logic • timeline • mathematics • timeline • numbers • prehistoric counting • numeral systems • list Other cultures • Arabian/Islamic • Babylonian • Chinese • Egyptian • Incan • Indian • Japanese  Ancient Greece portal •  Mathematics portal Archimedes Written works • Measurement 
of a Circle • The Sand Reckoner • On the Equilibrium of Planes • Quadrature of the Parabola • On the Sphere and Cylinder • On Spirals • On Conoids and Spheroids • On Floating Bodies • Ostomachion • The Method of Mechanical Theorems • Book of Lemmas (apocryphal) Discoveries and inventions • Archimedean solid • Archimedes's cattle problem • Archimedes' principle • Archimedes's screw • Claw of Archimedes Miscellaneous • Archimedes' heat ray • Archimedes Palimpsest • List of things named after Archimedes • Pseudo-Archimedes Related people • Euclid • Eudoxus of Cnidus • Apollonius of Perga • Hero of Alexandria • Eutocius of Ascalon • Category
Sandi Klavžar Sandi Klavžar (born 5 February 1962) is a Slovenian mathematician working in the area of graph theory and its applications. He is a professor of mathematics at the University of Ljubljana. Education Klavžar received his Ph.D. from the University of Ljubljana in 1990, under the supervision of Wilfried Imrich and Tomaž Pisanski.[1] Research Klavžar's research concerns graph products, metric graph theory, chemical graph theory, graph domination, and the Tower of Hanoi. Together with Wilfried Imrich and Richard Hammack, he is the author of the book Handbook of Product Graphs (CRC Press, 2011). Together with Andreas M. Hinz, Uroš Milutinović, and Ciril Petr, he is the author of the book The Tower of Hanoi – Myths and Maths (Springer, Basel, 2013). Awards and honors In 2007, Klavžar received the Zois award for exceptional contributions to science and mathematics. References 1. Sandi Klavžar at the Mathematics Genealogy Project External links • Home page at the University of Ljubljana Authority control International • ISNI • VIAF National • France • BnF data • Germany • Israel • United States • Netherlands • Poland Academics • CiNii • DBLP • MathSciNet • Mathematics Genealogy Project • ORCID • Scopus • zbMATH Other • IdRef
Sandra Di Rocco Sandra Di Rocco (born 1967)[1] is an Italian mathematician specializing in algebraic geometry. She works in Sweden as a professor of mathematics and dean of the faculty of engineering science at KTH Royal Institute of Technology,[2] and chairs the Activity Group on Algebraic Geometry of the Society for Industrial and Applied Mathematics.[3] Education Di Rocco earned a laurea from the University of L'Aquila in 1992,[4] and completed her Ph.D. in mathematics in 1996 at University of Notre Dame in the US, supervised by Andrew J. Sommese.[5] Career After postdoctoral research at the Mittag-Leffler Institute in Sweden and the Max Planck Institute for Mathematics in Germany, and short stints as an assistant professor at Yale University and the University of Minnesota, Di Rocco became an associate professor at KTH in 2003. She was named full professor in 2010, served as department chair from 2012 to 2019, and became dean in 2020.[4] Service Di Rocco was elected as chair of the Activity Group on Algebraic Geometry (SIAG-AG) of the Society for Industrial and Applied Mathematics (SIAM) in 2020.[3] References 1. Birth year from Library of Congress catalog entry, retrieved 2021-01-11 2. "Sandra Di Rocco", Profiles, KTH Royal Institute of Technology, retrieved 2021-01-11 3. "SIAM Activity Groups Election Results", SIAM News, Society for Industrial and Applied Mathematics, 6 January 2020, retrieved 2021-01-11 4. Curriculum vitae (PDF), 2020, retrieved 2021-01-11 5. Sandra Di Rocco at the Mathematics Genealogy Project External links • Home page • Sandra Di Rocco publications indexed by Google Scholar Authority control International • ISNI • VIAF National • Germany • Israel • United States Academics • DBLP • MathSciNet • Mathematics Genealogy Project • ORCID Other • IdRef
Sandra Mitchell Hedetniemi Sandra (Sandee) Mitchell Hedetniemi (born July 5, 1949, née Sandra Lee Mitchell) is an American mathematician and computer scientist, known for her research in graph theory and algorithms on graphs. She is a professor of computer science at Clemson University.[1] Education and career Hedetniemi majored in applied mathematics at Centre College in Kentucky, graduating in 1971.[1] She completed a Ph.D. in computer science in 1977 at the University of Virginia under the supervision of Stephen T. Hedetniemi. Her dissertation was Algorithms on Trees and Maximal Outerplanar Graphs: Design, Complexity Analysis, and Data Structures Study.[2] She joined the University of Louisville faculty as an instructor in applied mathematics and computer science in 1973, and became an assistant professor there in 1975. She moved to the department of computer and information science at the University of Oregon in 1978, and was given tenure there in 1981. In 1982 she moved again to Clemson University, taking a half-time position as an associate professor of computer science, and she was promoted to full professor in 1994.[1] Personal life Hedetniemi is originally from Louisville, Kentucky;[1] her father, Wilber A. Mitchell, was a US Navy veteran, psychiatrist, and hospital administrator.[3] She married Stephen T. Hedetniemi, her former advisor, in 1979, when both were faculty members at the University of Oregon.[4][5] References 1. Curriculum vitae: Sandee Hedetniemi, retrieved 2020-08-15 2. Sandra Mitchell Hedetniemi at the Mathematics Genealogy Project 3. "Wilber A. Mitchell", Louisville Courier-Journal, p. 14, February 6, 2001 – via Newspapers.com 4. "Marriage licenses", Louisville Courier-Journal, p. 125, September 16, 1979 – via Newspapers.com 5. Curriculum vitae: Stephen Hedetniemi, retrieved 2020-08-15 External links • Home page • Sandra Mitchell Hedetniemi publications indexed by Google Scholar Authority control: Academics • Google Scholar • MathSciNet • Mathematics Genealogy Project
Sandrine Péché Sandrine Péché (born 1977)[1] is a French mathematician who works as a professor in the Laboratoire de Probabilités, Statistique et Modélisation of Paris Diderot University.[2] Her research concerns probability theory, mathematical physics, and the theory and applications of random matrices. After studying at the École normale supérieure de Cachan,[1] Péché earned a Ph.D. from the École Polytechnique Fédérale de Lausanne in Switzerland, in 2002, under the supervision of Gérard Ben Arous.[3] She taught at the University of Grenoble before moving to Paris Diderot in 2011.[1] She served as the editor-in-chief of Electronic Communications in Probability from 2015 to 2017.[4] She was an invited speaker at the International Congress of Mathematicians in 2014.[5] References 1. Speaker biography, 38th Conference on Stochastic Processes and their Applications, University of Oxford, retrieved 2016-07-02. 2. Faculty profile, LPSM, retrieved 2021-04-22. 3. Sandrine Péché at the Mathematics Genealogy Project 4. Electronic Communications in Probability home page, retrieved 2021-04-21. 5. ICM Plenary and Invited Speakers since 1897, International Mathematical Union, retrieved 2016-07-02. Authority control International • VIAF Academics • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Squeeze theorem In calculus, the squeeze theorem (also known as the sandwich theorem, among other names[lower-alpha 1]) is a theorem regarding the limit of a function that is trapped between two other functions. "Sandwich theorem" redirects here. For the result in measure theory, see Ham sandwich theorem. The squeeze theorem is used in calculus and mathematical analysis, typically to confirm the limit of a function via comparison with two other functions whose limits are known. It was first used geometrically by the mathematicians Archimedes and Eudoxus in an effort to compute π, and was formulated in modern terms by Carl Friedrich Gauss. Statement The squeeze theorem is formally stated as follows.[1] Theorem — Let I be an interval containing the point a. Let g, f, and h be functions defined on I, except possibly at a itself. Suppose that for every x in I not equal to a, we have $g(x)\leq f(x)\leq h(x)$ and also suppose that $\lim _{x\to a}g(x)=\lim _{x\to a}h(x)=L.$ Then $\lim _{x\to a}f(x)=L.$ • The functions g and h are said to be lower and upper bounds (respectively) of f. • Here, a is not required to lie in the interior of I. Indeed, if a is an endpoint of I, then the above limits are left- or right-hand limits. • A similar statement holds for infinite intervals: for example, if I = (0, ∞), then the conclusion holds, taking the limits as x → ∞. This theorem is also valid for sequences. Let $(a_{n})$, $(c_{n})$ be two sequences converging to ℓ, and $(b_{n})$ a sequence. If there is some $N\in \mathbb {N}$ such that $a_{n}\leq b_{n}\leq c_{n}$ for all $n\geq N$, then $(b_{n})$ also converges to ℓ. Proof According to the above hypotheses we have, taking the limit inferior and superior: $L=\lim _{x\to a}g(x)\leq \liminf _{x\to a}f(x)\leq \limsup _{x\to a}f(x)\leq \lim _{x\to a}h(x)=L,$ so all the inequalities are indeed equalities, and the conclusion immediately follows. A direct proof, using the (ε, δ)-definition of limit, would be to prove that for all real ε > 0 there exists a real δ > 0 such that for all x with $|x-a|<\delta ,$ we have $|f(x)-L|<\varepsilon .$ Symbolically, $\forall \varepsilon >0,\exists \delta >0:\forall x,(|x-a|<\delta \ \Rightarrow |f(x)-L|<\varepsilon ).$ As $\lim _{x\to a}g(x)=L$ means that $\forall \varepsilon >0,\exists \ \delta _{1}>0:\forall x\ (|x-a|<\delta _{1}\ \Rightarrow \ |g(x)-L|<\varepsilon ).$ (1) and $\lim _{x\to a}h(x)=L$ means that $\forall \varepsilon >0,\exists \ \delta _{2}>0:\forall x\ (|x-a|<\delta _{2}\ \Rightarrow \ |h(x)-L|<\varepsilon ),$ (2) then we have $g(x)\leq f(x)\leq h(x)$ $g(x)-L\leq f(x)-L\leq h(x)-L$ We can choose $\delta :=\min \left\{\delta _{1},\delta _{2}\right\}$. Then, if $|x-a|<\delta $, combining (1) and (2), we have $-\varepsilon <g(x)-L\leq f(x)-L\leq h(x)-L\ <\varepsilon ,$ $-\varepsilon <f(x)-L<\varepsilon ,$ which completes the proof. Q.E.D. The proof for sequences is very similar, using the $\varepsilon $-definition of the limit of a sequence. Examples First example The limit $\lim _{x\to 0}x^{2}\sin \left({\tfrac {1}{x}}\right)$ cannot be determined through the limit law $\lim _{x\to a}(f(x)\cdot g(x))=\lim _{x\to a}f(x)\cdot \lim _{x\to a}g(x),$ because $\lim _{x\to 0}\sin \left({\tfrac {1}{x}}\right)$ does not exist.
However, by the definition of the sine function, $-1\leq \sin \left({\tfrac {1}{x}}\right)\leq 1.$ It follows that $-x^{2}\leq x^{2}\sin \left({\tfrac {1}{x}}\right)\leq x^{2}$ Since $\lim _{x\to 0}-x^{2}=\lim _{x\to 0}x^{2}=0$, by the squeeze theorem, $\lim _{x\to 0}x^{2}\sin \left({\tfrac {1}{x}}\right)$ must also be 0. Second example Probably the best-known examples of finding a limit by squeezing are the proofs of the equalities ${\begin{aligned}&\lim _{x\to 0}{\frac {\sin x}{x}}=1,\\[10pt]&\lim _{x\to 0}{\frac {1-\cos x}{x}}=0.\end{aligned}}$ The first limit follows by means of the squeeze theorem from the fact that[2] $\cos x\leq {\frac {\sin x}{x}}\leq 1$ for x close enough to 0, the correctness of which for positive x can be seen by simple geometric reasoning (see drawing) and can be extended to negative x as well. The second limit follows from the squeeze theorem and the fact that $0\leq {\frac {1-\cos x}{x}}\leq x$ for x close enough to 0. This can be derived by replacing sin x in the earlier fact by $ {\sqrt {1-\cos ^{2}x}}$ and squaring the resulting inequality. These two limits are used in proofs of the fact that the derivative of the sine function is the cosine function. That fact is relied on in other proofs of derivatives of trigonometric functions. Third example It is possible to show that ${\frac {d}{d\theta }}\tan \theta =\sec ^{2}\theta $ by squeezing, as follows. In the illustration at right, the area of the smaller of the two shaded sectors of the circle is ${\frac {\sec ^{2}\theta \,\Delta \theta }{2}},$ since the radius is sec θ and the arc on the unit circle has length Δθ. Similarly, the area of the larger of the two shaded sectors is ${\frac {\sec ^{2}(\theta +\Delta \theta )\,\Delta \theta }{2}}.$ What is squeezed between them is the triangle whose base is the vertical segment whose endpoints are the two dots. The length of the base of the triangle is tan(θ + Δθ) − tan θ, and the height is 1. The area of the triangle is therefore ${\frac {\tan(\theta +\Delta \theta )-\tan \theta }{2}}.$ From the inequalities ${\frac {\sec ^{2}\theta \,\Delta \theta }{2}}\leq {\frac {\tan(\theta +\Delta \theta )-\tan \theta }{2}}\leq {\frac {\sec ^{2}(\theta +\Delta \theta )\,\Delta \theta }{2}}$ we deduce that $\sec ^{2}\theta \leq {\frac {\tan(\theta +\Delta \theta )-\tan \theta }{\Delta \theta }}\leq \sec ^{2}(\theta +\Delta \theta ),$ provided Δθ > 0, and the inequalities are reversed if Δθ < 0. Since the first and third expressions approach sec²θ as Δθ → 0, and the middle expression approaches ${\tfrac {d}{d\theta }}\tan \theta ,$ the desired result follows. Fourth example The squeeze theorem can still be used in multivariable calculus, but the lower (and upper) functions must be below (and above) the target function not just along a path but around the entire neighborhood of the point of interest, and it only works if the function really does have a limit there.
It can, therefore, be used to prove that a function has a limit at a point, but it can never be used to prove that a function does not have a limit at a point.[3] For example, $\lim _{(x,y)\to (0,0)}{\frac {x^{2}y}{x^{2}+y^{2}}}$ cannot be found by taking any number of limits along paths that pass through the point, but since ${\begin{array}{rccccc}&0&\leq &\displaystyle {\frac {x^{2}}{x^{2}+y^{2}}}&\leq &1\\[4pt]-|y|\leq y\leq |y|\implies &-|y|&\leq &\displaystyle {\frac {x^{2}y}{x^{2}+y^{2}}}&\leq &|y|\\[4pt]{{\lim _{(x,y)\to (0,0)}-|y|=0} \atop {\lim _{(x,y)\to (0,0)}|y|=0}}\implies &0&\leq &\displaystyle \lim _{(x,y)\to (0,0)}{\frac {x^{2}y}{x^{2}+y^{2}}}&\leq &0\end{array}}$ therefore, by the squeeze theorem, $\lim _{(x,y)\to (0,0)}{\frac {x^{2}y}{x^{2}+y^{2}}}=0.$ References Notes 1. Also known as the pinching theorem, the sandwich rule, the police theorem, the between theorem and sometimes the squeeze lemma. In Italy, the theorem is also known as the theorem of carabinieri. References 1. Sohrab, Houshang H. (2003). Basic Real Analysis (2nd ed.). Birkhäuser. p. 104. ISBN 978-1-4939-1840-9. 2. Selim G. Krejn, V.N. Uschakowa: Vorstufe zur höheren Mathematik. Springer, 2013, ISBN 9783322986283, pp. 80-81 (German). See also Sal Khan: Proof: limit of (sin x)/x at x=0 (video, Khan Academy) 3. Stewart, James (2008). "Chapter 15.2 Limits and Continuity". Multivariable Calculus (6th ed.). pp. 909–910. ISBN 978-0495011637. External links • Weisstein, Eric W. "Squeezing Theorem". MathWorld. • Squeeze Theorem by Bruce Atwood (Beloit College) after work by Selwyn Hollis (Armstrong Atlantic State University), the Wolfram Demonstrations Project. • Squeeze Theorem on ProofWiki.
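As a quick numerical illustration of the first and fourth examples above (a supplementary sketch, not part of the article), the stated bounds can be checked at a few points approaching the limit:

```python
# Numerical illustration (not a proof) of two squeeze-theorem examples above.
import math

# First example: -x^2 <= x^2 * sin(1/x) <= x^2 forces the limit 0 as x -> 0.
for x in (0.1, 0.01, 0.001):
    f = x**2 * math.sin(1 / x)
    assert -x**2 <= f <= x**2
    print(f"x={x}: |f(x)| = {abs(f):.1e} <= x^2 = {x**2:.1e}")

# Fourth example: -|y| <= x^2*y/(x^2 + y^2) <= |y| forces the limit 0 as (x, y) -> (0, 0).
for t in (0.1, 0.01, 0.001):
    x, y = t, -2 * t                       # one arbitrary path toward the origin
    g = x**2 * y / (x**2 + y**2)
    assert -abs(y) <= g <= abs(y)
    print(f"t={t}: |g(x, y)| = {abs(g):.1e} <= |y| = {abs(y):.1e}")
```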
Sandy Green (mathematician) James Alexander "Sandy" Green FRS (26 February 1926 – 7 April 2014) was a mathematician and Professor at the Mathematics Institute at the University of Warwick, who worked in the field of representation theory. Sandy Green Born(1926-02-26)26 February 1926 Rochester, New York, US Died7 April 2014(2014-04-07) (aged 88) CitizenshipBritish, American Alma materUniversity of St Andrews St John's College, Cambridge Known forWork on group representation theory, Green's relations AwardsSenior Berwick Prize (1984) de Morgan Medal (2001) Scientific career FieldsMathematics InstitutionsBletchley Park University of Manchester University of Sussex University of Warwick Doctoral advisorPhilip Hall, David Rees Doctoral studentsPerdita Stevens Early life Sandy Green was born in February 1926 in Rochester, New York, but moved to Toronto with his emigrant Scottish parents later that year. The family returned to Britain in May 1935 when his father, Frederick C. Green, took up the Drapers Professorship of French at the University of Cambridge. Education Green was educated at the Perse School, Cambridge. He won a scholarship to the University of St Andrews and matriculated aged 16 in 1942. He took an ordinary BSc in 1944, and then, after scientific service in the war, was awarded a BSc Honours in 1947. He gained his PhD at St John's College, Cambridge in 1951, under the supervision of Philip Hall and David Rees.[1][2][3] Career World War II In the summer of 1944, he was conscripted for national scientific service at the age of eighteen and was assigned to work at Bletchley Park, where he acted as a human "computer" carrying out calculations in Hut F, the "Newmanry", a department led by Max Newman, which used special-purpose Colossus computers to assist in breaking German teleprinter ciphers.[4] Academic work His first lecturing post (1950) was at the University of Manchester, where Newman was his Head of department. In 1964 he became a Reader at the University of Sussex, and then in 1965 was appointed as a professor at the newly formed Mathematics Institute at Warwick University, where he led the algebra group. He spent several periods as a visiting academic in the United States, beginning with a year at the Institute for Advanced Study in Princeton, New Jersey in 1960–61, as well as similar visits to universities in France, Germany and Portugal. After retiring from Warwick he became a member of the faculty and Professor Emeritus at the Mathematics Institute of the University of Oxford, in whose meetings he participated actively. His final publication was produced at the age of eighty. Work in mathematics Green found all the characters of general linear groups over finite fields (Green 1955) and invented the Green correspondence in modular representation theory. Both Green functions in the representation theory of groups of Lie type and Green's relations in the area of semigroups are named after him. His final publication (2007) was a revised and augmented edition of his 1980 work, Polynomial Representations of GL(n). Personal life Green met his wife, Margaret Lord, at Bletchley Park, where she worked as a Colossus operator, also in the Newmanry section (Hut F). The couple married in August 1950 and had two daughters and a son. Until his death, he lived in Oxford.
Honours He was elected to the Royal Society of Edinburgh in 1968 and the Royal Society in 1987[5] and was awarded two London Mathematical Society prizes: the Senior Berwick Prize in 1984[6] and the de Morgan Medal in 2001.[5][7][8] Bibliography • (1955) The characters of the finite general linear group, Trans. A. M. S. 80 402–447. • (2007) Polynomial Representations of GL_n, Lecture Notes in Mathematics, Springer, Vol. 830. 2nd edition with an Appendix on Schensted Correspondence and Littelmann Paths, K. Erdmann, J. A. Green and M. Shocker References 1. J. A. Green (1951) Abstract Algebra and Semigroups, PhD thesis, University of Cambridge 2. Green, James A. (1951). "On the structure of semigroups". Ann. Math. 2. 54 (1): 163–172. doi:10.2307/1969317. hdl:10338.dmlcz/100067. JSTOR 1969317. Zbl 0043.25601. 3. Green, J. A.; Roseblade, J. E.; Thompson, John G. (1984), "Obituary: Philip Hall", The Bulletin of the London Mathematical Society, 16 (6): 603–626, doi:10.1112/blms/16.6.603, ISSN 0024-6093, MR 0758133 4. O'Connor, John J.; Robertson, Edmund F., "James Alexander Green", MacTutor History of Mathematics archive, University of St Andrews 5. "Obituaries: James Alexander (Sandy) Green". London Mathematical Society. Archived from the original on 28 December 2012. Retrieved 5 July 2014. 6. "Berwick prizes". The MacTutor History of Mathematics archive. 7. "Citation for James Alexander Green". London Mathematical Society. Retrieved 5 July 2014. 8. Donkin, Stephen; Erdmann, Karin (30 December 2019). "James Alexander Green. 26 February 1926 – 7 April 2014". Biographical Memoirs of Fellows of the Royal Society. 67: 173–190. doi:10.1098/rsbm.2019.0012. External links • Sandy Green at the Mathematics Genealogy Project • O'Connor, John J.; Robertson, Edmund F., "Sandy Green (mathematician)", MacTutor History of Mathematics Archive, University of St Andrews • Warwick page Profile at Warwick University De Morgan Medallists • Arthur Cayley (1884) • James Joseph Sylvester (1887) • Lord Rayleigh (1890) • Felix Klein (1893) • S. Roberts (1896) • William Burnside (1899) • A. G. Greenhill (1902) • H. F. Baker (1905) • J. W. L. Glaisher (1908) • Horace Lamb (1911) • J. Larmor (1914) • W. H. Young (1917) • E. W. Hobson (1920) • P. A. MacMahon (1923) • A. E. H. Love (1926) • Godfrey Harold Hardy (1929) • Bertrand Russell (1932) • E. T. Whittaker (1935) • J. E. Littlewood (1938) • Louis Mordell (1941) • Sydney Chapman (1944) • George Neville Watson (1947) • A. S. Besicovitch (1950) • E. C. Titchmarsh (1953) • G. I. Taylor (1956) • W. V. D. Hodge (1959) • Max Newman (1962) • Philip Hall (1965) • Mary Cartwright (1968) • Kurt Mahler (1971) • Graham Higman (1974) • C. Ambrose Rogers (1977) • Michael Atiyah (1980) • K. F. Roth (1983) • J. W. S. Cassels (1986) • D. G. Kendall (1989) • Albrecht Fröhlich (1992) • W. K. Hayman (1995) • R. A. Rankin (1998) • J. A. Green (2001) • Roger Penrose (2004) • Bryan John Birch (2007) • Keith William Morton (2010) • John Griggs Thompson (2013) • Timothy Gowers (2016) • Andrew Wiles (2019) Authority control International • ISNI • VIAF National • France • BnF data • Germany • Israel • United States • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie Other • IdRef
Sanford L. Segal Sanford Leonard Segal (October 11, 1937 – May 7, 2010) was a mathematician and historian of science and mathematics at the University of Rochester. Mathematically he specialized in analytic number theory and complex analysis. He wrote the textbook Nine Introductions in Complex Analysis (1981) and the tome Mathematicians Under the Nazis (2003), a historical account of that period. He also taught courses in women's studies and nuclear arms. He was on the Committee of Actuarial Studies at the University of Rochester.[1] Sanford L. Segal Born(1937-10-11)October 11, 1937 DiedMay 7, 2010(2010-05-07) (aged 72) NationalityAmerican Alma materUniversity of Colorado, Wesleyan University Known forAnalytic number theory, Complex analysis Scientific career FieldsMathematics ThesisThe Error Term in the Formula for the Average Value of the Euler's φ Function (1963) Doctoral advisorSarvadaman Chowla Notable studentsMelvyn B. Nathanson Life In 1937 he was born into a conservative Jewish family.[2] In 1958, he received his B.A. degree from Wesleyan University with Honors in Mathematics and High Honors in Classical Civilization studies.[3] In 1959 he spent a year as a Fulbright student in Mainz, Germany. In 1963, he earned his Ph.D. in Mathematics at the University of Colorado under the supervision of Sarvadaman D. S. Chowla with the dissertation entitled The Error Term in the Formula for the Average Value of the Euler Phi Function.[4][5] Career After his Ph.D., he worked at the University of Rochester for 44 years until his retirement in 2008. In 1965 he received a grant from the Fulbright Program as a research fellow in Vienna, Austria. In 1977, he received a grant from the National Institute for Pure and Applied Mathematics to teach in Rio de Janeiro.[3][6] In 1981 he published Nine Introductions in Complex Analysis with North Holland Press (a revised edition was published in 2011 by Elsevier, which had taken over North Holland Press). He later received a grant from The Alexander von Humboldt Foundation to research the history of science in Nazi Germany. Princeton University Press later published the book Mathematicians Under the Nazis in 2003,[3] which addresses the experience of mathematics academics in Nazi Germany. The book involved extensive direct research, interviews with survivors, and translations from German. In 2009 he translated the book History of Mathematics: Highways and Byways from French. In addition, Segal published more than 45 papers on mathematics, mathematics education, and the history of science. He was a member of the Religious Society of Friends. He was also a member of Sigma Xi and of Phi Beta Kappa. He married Rima Maxwell and had three children, Adam, Joshua, and Zoë. He died on May 7, 2010.[7] Academic publications • Sanford L. Segal (1962). "On π(x + y) ≦ π(x) + π(y)". Transactions of the American Mathematical Society. 104 (3): 523–527. doi:10.2307/1993801. JSTOR 1993801. • Sanford L. Segal (1962). "On π(x + y) ≦ π(x) + π(y)". Transactions of the American Mathematical Society. 104 (3): 523. doi:10.1090/S0002-9947-1962-0139586-4. • S. L. Segal (1964). "A Note on Normal Order and the Euler φ Function". Journal of the London Mathematical Society. s1-39 (1): 400–404. doi:10.1112/jlms/s1-39.1.400. • Sanford L. Segal (1965). "A note on the average order of number-theoretic error terms". Duke Mathematical Journal. 32 (1965): 279–284. doi:10.1215/S0012-7094-65-03227-8. • S. L. Segal (1965). "On Non-Decreasing Normal Orders".
Journal of the London Mathematical Society. s1-40 (1): 459–466. doi:10.1112/jlms/s1-40.1.459. • Sanford L. Segal (1965). "Errata: "A note on the average order of number-theoretic error terms," vol. 32 (1965) pp. 279–284". Duke Mathematical Journal. 32 (1965): 765. doi:10.1215/S0012-7094-65-03280-1. • Segal, S.L. (1981). Nine Introductions in Complex Analysis. Notas de matematica fisica. Elsevier Science. ISBN 9780444862266. LCCN lc81009568. • S. L. Segal (1987). "Is Female Math Anxiety Real?". Science. 237 (4813): 350. Bibcode:1987Sci...237..350S. doi:10.1126/science.237.4813.350-a. PMID 3603021. S2CID 2369790. • Sanford L. Segal (1992). "Ernst August Weiss: Mathematical Pedagogical Innovation in the Third Reich". In Sergei S. Demidov; Menso Folkerts; David E. Rowe; et al. (eds.). Amphora — Festschrift for Hans Wussing on the Occasion of his 65th Birthday. Basel: Springer. pp. 693–704. doi:10.1007/978-3-0348-8599-7. ISBN 978-3-0348-9696-2. • Andreas Steup; Sanford L. Segal (1996). "German Universities". Science. 274 (5289): 905b. Bibcode:1996Sci...274..905S. doi:10.1126/science.274.5289.905b. PMID 17798609. • S. L. Segal (1996). "German Universities". Science. 274 (5289): 901d–905. doi:10.1126/science.274.5289.901d. PMID 17798609. S2CID 220099378. • Segal, S.L. (2003). Mathematicians Under the Nazis. Princeton University Press. ISBN 9780691004518. LCCN 02070399. References 1. "OFFICIAL BULLETIN UNDERGRADUATE STUDIES" (PDF). University of Rochester. 2. Segal, Sanford. "Why I Am Not a Christian". Friends Journal. 3. "Sanford Segal retires". University of Rochester. Archived from the original on 2010-05-14. 4. Allyn Jackson (2010). "Sanford Segal (1937–2010)". Notices of the AMS. American Mathematical Society. 59 (9): 1278. 5. Sanford L. Segal at the Mathematics Genealogy Project 6. Blank, Alan (24 May 2010). "Mathematics Professor Emeritus Sanford Segal Dies". University of Rochester. Retrieved 20 November 2012. 7. "Inside the AMS" (PDF). Notices of the AMS. 57 (7): 892. 2010. Retrieved 28 November 2012. Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • United States • Czech Republic • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • Scopus • zbMATH Other • IdRef
Sanity check A sanity check or sanity test is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. It is a simple check to see if the produced material is rational (that the material's creator was thinking rationally, applying sanity). The point of a sanity test is to rule out certain classes of obviously false results, not to catch every possible error. A rule-of-thumb or back-of-the-envelope calculation may be used to perform the test. The advantage of performing an initial sanity test is that of speedily evaluating basic functionality. In arithmetic, for example, when multiplying by 9, using the divisibility rule for 9 to verify that the sum of digits of the result is divisible by 9 is a sanity test: it will not catch every multiplication error, but it is a quick and simple method to discover many possible errors. In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that part of the system or methodology works roughly as expected. This is often prior to a more exhaustive round of testing. Use in different fields Mathematical A sanity test can refer to various orders of magnitude and other simple rule-of-thumb devices applied to cross-check mathematical calculations. For example: • If one were to attempt to square 738 and calculated 54,464, a quick sanity check could show that this result cannot be true. Consider that 700 < 738, yet 700² = 7² × 100² = 490,000 > 54,464. Since squaring positive integers preserves their inequality, the result cannot be true, and so the calculated result is incorrect. The correct answer, 738² = 544,644, is more than 10 times higher than 54,464. • In multiplication, 918 × 155 is not 142,135 since 918 is divisible by three but 142,135 is not (digits add up to 16, not a multiple of three). Also, the product must end in the same digit as the product of end-digits: 8 × 5 = 40, but 142,135 does not end in "0" like "40", while the correct answer does: 918 × 155 = 142,290. An even quicker check is that the product of even and odd numbers is even, whereas 142,135 is odd. Physical • The power output of a car cannot be 700 kJ, since the unit joules is a measure of energy, not power (energy per unit time). This is a basic application of dimensional analysis. • When determining physical properties, comparing to known or similar substances will often yield insight on whether the result is reasonable. For instance, most metals sink in water, so the density of most metals should be greater than that of water (~1,000 kg/m³). • Fermi estimates will often provide insight on the order of magnitude of an expected value. Software development In software development, a sanity test (a form of software testing which offers "quick, broad, and shallow testing"[1]) evaluates the result of a subset of application functionality to determine whether it is possible and reasonable to proceed with further testing of the entire application.[2] Sanity tests may sometimes be used interchangeably with smoke tests[3] insofar as both terms denote tests which determine whether it is possible and reasonable to continue testing further.
On the other hand, a distinction is sometimes made that a smoke test is a non-exhaustive test that ascertains whether the most crucial functions of a programme work before proceeding with further testing whereas a sanity test refers to whether specific functionality such as a particular bug fix works as expected without testing the wider functionality of the software. In other words, a sanity test determines whether the intended result of a code change works correctly while a smoke test ensures that nothing else important was broken in the process. Sanity testing and smoke testing avoid wasting time and effort by quickly determining whether an application is too flawed to merit more rigorous QA testing, but needs more developer debugging. Groups of sanity tests are often bundled together for automated unit testing of functions, libraries, or applications prior to merging development code into a testing or trunk version control branch,[4] for automated building,[5] or for continuous integration and continuous deployment.[6] Another common usage of sanity test is to denote checks which are performed within programme code, usually on arguments to functions or returns therefrom, to see if the answers can be assumed to be correct. The more complicated the routine, the more important that its response be checked. The trivial case is checking to see whether the return value of a function indicated success or failure, and to therefore cease further processing upon failure. This return value is actually often itself the result of a sanity check. For example, if the function attempted to open, write to, and close a file, a sanity check may be used to ensure that it did not fail on any of these actions—which is a sanity check often ignored by programmers.[7] These kinds of sanity checks may be used during development for debugging purposes and also to aid in troubleshooting software runtime errors. For example, in a bank account management application, a sanity check will fail if a withdrawal requests more money than the total account balance rather than allowing the account to go negative (which wouldn't be sane). Another sanity test might be that deposits or purchases correspond to patterns established by historical data—for example, large purchase transactions or ATM withdrawals in foreign locations never before visited by the cardholder may be flagged for confirmation. Sanity checks are also performed upon installation of stable, production software code into a new computing environment to ensure that all dependencies are met, such as a compatible operating system and link libraries. When a computing environment has passed all the sanity checks, it's known as a sane environment for the installation programme to proceed with reasonable expectation of success. A "Hello, World!" program is often used as a sanity test for a development environment similarly. Rather than a complicated script running a set of unit tests, if this simple programme fails to compile or execute, it proves that the supporting environment likely has a configuration problem that will prevent any code from compiling or executing. But if "Hello world" executes, then any problems experienced with other programmes likely can be attributed to errors in that application's code rather than the environment. 
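Both senses described above, the arithmetic cross-check and the in-code guard, can be sketched in a few lines; this is an illustrative example only, and the function names are invented for it:

```python
# Illustrative sanity checks (function names are invented for this sketch).

def digit_sum_check(a: int, b: int, claimed_product: int) -> bool:
    """Casting out nines: the claimed product must match a*b modulo 9,
    computed from the factors' remainders rather than from the product itself."""
    return (a % 9) * (b % 9) % 9 == claimed_product % 9

def withdraw(balance: float, amount: float) -> float:
    """Refuse a withdrawal that would make the account balance negative."""
    if amount < 0 or amount > balance:          # sanity check on the request
        raise ValueError("implausible withdrawal request")
    return balance - amount

assert digit_sum_check(918, 155, 142_290)       # consistent with the true product
assert not digit_sum_check(918, 155, 142_135)   # caught: digit sums disagree
```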
The Association for Computing Machinery,[8] and software projects such as Android,[9] MediaWiki[10] and Twitter,[11] discourage use of the phrase sanity check in favour of other terms such as confidence test, coherence check, or simply test, as part of a wider attempt to avoid ableist language and increase inclusivity. See also • Certifying algorithm • Checksum • Fermi problem • Mental calculation • Proof of concept References 1. Fecko, Mariusz A.; Lott, Christopher M. (October 2002). "Lessons learned from automating tests for an operations support system" (PDF). Software: Practice and Experience. 32 (15): 1485–1506. doi:10.1002/spe.491. S2CID 16820529. Archived from the original (PDF) on 17 July 2003. 2. Sammi, Rabia; Masood, Iram; Jabeen, Shunaila (2011). Zain, Jasni Mohamad; Wan Mohd, Wan Maseri bt; El-Qawasmeh, Eyas (eds.). "A Framework to Assure the Quality of Sanity Check Process". Software Engineering and Computer Systems. Communications in Computer and Information Science. Berlin, Heidelberg: Springer. 181: 143–150. doi:10.1007/978-3-642-22203-0_13. ISBN 978-3-642-22203-0. 3. ISTQB® Glossary for the International Software Testing Qualification Board® software testing qualification scheme, ISTQB Glossary International Software Testing Qualification Board 4. http://webhotel4.ruc.dk/~nielsj/research/publications/freebsd.pdf 5. Hassan, A. E. and Zhang, K. 2006. Using Decision Trees to Predict the Certification Result of a Build. In Proceedings of the 21st IEEE/ACM international Conference on Automated Software Engineering (September 18 – 22, 2006). Automated Software Engineering. IEEE Computer Society, Washington, DC, 189–198. 6. http://jitm.ubalt.edu/XXIX-2/article4.pdf 7. Darwin, Ian F. (January 1991). Checking C programs with lint (1st ed., with minor revisions. ed.). Newton, Mass.: O'Reilly & Associates. p. 19. ISBN 0-937175-30-7. Retrieved 7 October 2014. A common programming habit is to ignore the return value from fprintf(stderr, ... 8. "Words Matter". 2020-11-20. Retrieved 2023-06-29. 9. "Coding with respect". Android Open Source Project. 2022-11-16. Retrieved 2023-01-23. 10. "Inclusive language/en-gb - MediaWiki". www.mediawiki.org. Retrieved 2023-01-23. 11. "Twitter Engineering". Twitter. Retrieved 2023-01-23. Standard test items • Pangram • Reference implementation • Sanity check • Standard test image Artificial intelligence • Chinese room • Turing test Television (test card) • SMPTE color bars • EBU colour bars • Indian-head test pattern • EIA 1956 resolution chart • BBC Test Card A, B, C, D, E, F, G, H, J, W, X • ETP-1 • Philips circle pattern (PM 5538, PM 5540, PM 5544, PM 5644) • Snell & Wilcox SW2/SW4 • Telefunken FuBK • TVE test card • UEIT Computer languages • "Hello, World!" program • Quine • Trabb Pardo–Knuth algorithm • Man or boy test • Just another Perl hacker Data compression • Calgary corpus • Canterbury corpus • Silesia corpus • enwik8, enwik9 3D computer graphics • Cornell box • Stanford bunny • Stanford dragon • Utah teapot • List Machine learning • ImageNet • MNIST database • List Typography (filler text) • Etaoin shrdlu • Hamburgevons • The quick brown fox jumps over the lazy dog Other • 3DBenchy • Acid • 1 • 2 • 3 • "Bad Apple!!" • EICAR test file • functions for optimization • GTUBE • Harvard sentences • Lenna • "The North Wind and the Sun" • "Tom's Diner" • SMPTE universal leader • EURion constellation • Shakedown • Webdriver Torso • 1951 USAF resolution test chart
Sanjeev Arora Sanjeev Arora (born January 1968) is an Indian American theoretical computer scientist. Sanjeev Arora Arora at Oberwolfach, 2010 BornJanuary 1968 (1968-01) (age 55) Jodhpur,[1] Rajasthan, India CitizenshipUnited States[1] Known forProbabilistically checkable proofs PCP theorem Scientific career FieldsTheoretical computer science InstitutionsPrinceton University Doctoral advisorUmesh Vazirani Life He was a visiting scholar at the Institute for Advanced Study in 2002–03.[2] In 2008 he was inducted as a Fellow of the Association for Computing Machinery.[3] In 2011 he was awarded the ACM Infosys Foundation Award (now renamed the ACM Prize in Computing), given to mid-career researchers in computer science. Arora was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators and related problems (jointly with Satish Rao and Umesh Vazirani). In 2012 he became a Simons Investigator.[4] Arora was elected in 2015 to the American Academy of Arts and Sciences and in 2018 to the National Academy of Sciences.[5] He is a coauthor (with Boaz Barak) of the book Computational Complexity: A Modern Approach and is a founder, and on the Executive Board, of Princeton's Center for Computational Intractability.[6] He and his coauthors have argued that certain financial products are associated with computational asymmetry, which under certain conditions may lead to market instability.[7] Books • Arora, Sanjeev; Barak, Boaz (2009). Computational complexity : a modern approach. Cambridge University Press. ISBN 978-0-521-42426-4. OCLC 286431654. References 1. "Sanjeev Arora". www.cs.princeton.edu. 2. Institute for Advanced Study: A Community of Scholars Archived 2013-01-06 at the Wayback Machine 3. ACM: Fellows Award / Sanjeev Arora Archived 2011-08-23 at the Wayback Machine 4. Simons Investigators Awardees, The Simons Foundation 5. "Professor Sanjeev Arora Elected to the National Academy of Sciences - Computer Science Department at Princeton University". www.cs.princeton.edu. 6. "Video Archive". intractability.princeton.edu. 7. Arora, S.; Barak, B.; Brunnermeier, M. (2011). "Computational Complexity and Information Asymmetry in Financial Products". Communications of the ACM, Issue 5; see FAQ Archived 2012-12-02 at the Wayback Machine External links • Sanjeev Arora's Homepage • Sanjeev Arora at the Mathematics Genealogy Project Gödel Prize laureates 1990s • Babai / Goldwasser / Micali / Moran / Rackoff (1993) • Håstad (1994) • Immerman / Szelepcsényi (1995) • Jerrum / Sinclair (1996) • Halpern / Moses (1997) • Toda (1998) • Shor (1999) 2000s • Vardi / Wolper (2000) • Arora / Feige / Goldwasser / Lund / Lovász / Motwani / Safra / Sudan / Szegedy (2001) • Sénizergues (2002) • Freund / Schapire (2003) • Herlihy / Saks / Shavit / Zaharoglou (2004) • Alon / Matias / Szegedy (2005) • Agrawal / Kayal / Saxena (2006) • Razborov / Rudich (2007) • Teng / Spielman (2008) • Reingold / Vadhan / Wigderson (2009) 2010s • Arora / Mitchell (2010) • Håstad (2011) • Koutsoupias / Papadimitriou / Roughgarden / É. Tardos / Nisan / Ronen (2012) • Boneh / Franklin / Joux (2013) • Fagin / Lotem / Naor (2014) • Spielman / Teng (2015) • Brookes / O'Hearn (2016) • Dwork / McSherry / Nissim / Smith (2017) • Regev (2018) • Dinur (2019) 2020s • Moser / G.
Tardos (2020) • Bulatov / Cai / Chen / Dyer / Richerby (2021) • Brakerski / Gentry / Vaikuntanathan (2022) Authority control International • ISNI • VIAF National • Norway • Germany • Israel • United States • Czech Republic • Netherlands Academics • Association for Computing Machinery • DBLP • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • zbMATH Other • IdRef
Sanov's theorem In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, Sanov's theorem identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables. Let A be a set of probability distributions over an alphabet X, and let q be an arbitrary distribution over X (where q may or may not be in A). Suppose we draw n i.i.d. samples from q, represented by the vector $x^{n}=x_{1},x_{2},\ldots ,x_{n}$. Then, we have the following bound on the probability that the empirical measure ${\hat {p}}_{x^{n}}$ of the samples falls within the set A: $q^{n}({\hat {p}}_{x^{n}}\in A)\leq (n+1)^{|X|}2^{-nD_{\mathrm {KL} }(p^{*}||q)}$, where • $q^{n}$ is the joint probability distribution on $X^{n}$, and • $p^{*}$ is the information projection of q onto A. In words, the probability of drawing an atypical distribution is bounded by a function of the KL divergence from the true distribution to the atypical one; in the case that we consider a set of possible atypical distributions, there is a dominant atypical distribution, given by the information projection. Furthermore, if A is the closure of its interior, $\lim _{n\to \infty }{\frac {1}{n}}\log q^{n}({\hat {p}}_{x^{n}}\in A)=-D_{\mathrm {KL} }(p^{*}||q).$ References • Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory (2 ed.). Hoboken, New Jersey: Wiley Interscience. pp. 362. ISBN 9780471241959. • Sanov, I. N. (1957) "On the probability of large deviations of random variables". Mat. Sbornik 42(84), No. 1, 11–44. • Санов, И. Н. (1957) "О вероятности больших отклонений случайных величин". МАТЕМАТИЧЕСКИЙ СБОРНИК' 42(84), No. 1, 11–44.
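As a concrete illustration (a supplementary sketch, not from the article): for i.i.d. fair-coin samples and A taken to be the set of distributions with mean at least 0.7, the information projection p* is the Bernoulli(0.7) distribution, and the stated bound, with |X| = 2, can be compared against the exact binomial tail probability.

```python
# Numerical sketch of Sanov's bound for fair-coin samples (illustrative, not from the article).
from math import comb, log2

def kl_bits(p: float, q: float) -> float:
    """D_KL( Bernoulli(p) || Bernoulli(q) ) in bits."""
    return p * log2(p / q) + (1 - p) * log2((1 - p) / (1 - q))

q, a, n = 0.5, 0.7, 500                 # true bias, threshold defining A, number of samples
k_min = round(a * n)
exact = sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(k_min, n + 1))
bound = (n + 1) ** 2 * 2 ** (-n * kl_bits(a, q))   # (n+1)^{|X|} * 2^{-n D(p*||q)}, |X| = 2
print(f"P(sample mean >= {a}) = {exact:.2e} <= Sanov bound = {bound:.2e}")
```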
Santiago López de Medrano Santiago López de Medrano Sánchez (born October 15, 1942 in Mexico City) is a Mexican mathematician, who works as a researcher at the National Autonomous University of Mexico (UNAM).[1][2] His research has concerned knot theory, singularity theory, biomathematics, and differential topology.[2] López de Medrano did his undergraduate studies at UNAM,[2] and earned his Ph.D. from Princeton University in 1969, under the supervision of William Browder.[3] He returned to UNAM as a researcher and professor in 1968, and was president of the Mexican Mathematical Society from 1969 to 1973.[2] López de Medrano presented his work on knot invariants at the International Congress of Mathematicians in 1970.[2] In 2012, he became one of the inaugural fellows of the American Mathematical Society.[4] References 1. Faculty profile, UNAM, retrieved 2014-12-19. 2. Elizondo, E. Javier, "Topology and Geometry: The Mathematics of Santiago López de Medrano", SMM 60th Anniversary (in Spanish), Mexican Mathematical Society, retrieved 2014-12-19 3. Santiago Sanchez Lopez de Medrano at the Mathematics Genealogy Project 4. List of Fellows of the American Mathematical Society, retrieved 2014-12-19. Authority control International • ISNI • VIAF National • Israel • United States Academics • Google Scholar • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Giovanni Sante Gaspero Santini Giovanni Sante Gaspero Santini (b. Caprese in Tuscany, 30 June 1786; d. Noventa Padovana, 26 June 1877) was an Italian astronomer and mathematician.[1] Giovanni Sante Gaspero Santini Born(1786-06-30)30 June 1786 Caprese, Grand Duchy of Tuscany Died26 June 1877(1877-06-26) (aged 90) Noventa Padovana, Kingdom of Italy Alma materUniversity of Pisa Scientific career FieldsAstronomy InstitutionsObservatory of Padua He received his first instruction from his parental uncle, the Abbot Giovanni Battista Santini. After finishing his philosophical studies in the school year 1801-2 at the seminary of Prato, he entered the University of Pisa in 1802. He very soon abandoned the study of law in order to devote himself, under the direction of Prof. Paoli and Abbot Pacchiano, exclusively to mathematics and the natural sciences. It appears that at Pisa, Santini still wore the cassock, with the consequence that in bibliographical dictionaries he still figures under the title of abate. It is certain, however, that he never received major orders. In 1810 he married Teresa Pastrovich, and one year after her death, in 1843, he contracted a second marriage with Adriana Conforti, who outlived him. During his stay in Pisa he became friendly with the rector of the university and with the influential Vittorio Fossombroni. At their urgent suggestion Santini's family, especially his uncle, made great sacrifices to enable him to continue his studies in Milan (1805–1806) under Barnaba Oriani, Cesaris, and Francesco Carlini. On 17 Oct., 1806, the Italian Government appointed him assistant to the director of the observatory at Padua, Abate Chiminello, whom he succeeded in 1814. In 1813 the university offered him the chair of astronomy, a position in which he was confirmed by the Emperor Francis I in 1818 after the Venetian territory had become part of Austria. In addition he taught for several years, as substitute, elementary algebra, geometry, and higher mathematics. During the school years 1824-1825 and 1856-1857 he was rector of the university, and from 1845 to 1872 director of mathematical studies. Towards the end of 1873 he suffered repeatedly from fainting spells which were followed by a steadily increasing physical and mental weakness and final breakdown. He died in his ninety-first year at his villa, Noventa Padovana. Astronomical work Both as a practical and theoretical astronomer, Santini made the Observatory of Padua famous. When he took charge the observatory was located in an old fortified tower, in a reportedly precarious condition, but he refurbished it. In 1811 he determined the latitude of Padua with the aid of Gauss's method of three stars at the same altitude, and in 1815 again, with a new repeating circle. In 1822, '24, and '28 he assisted the astronomical and geodetic service of Italy by making observations in longitude. Constantly striving to equip this institute in accordance with the latest requirements of science, he installed in 1823 a new Utzschneider equatorial, and in 1837 a new meridian circle. With these last he began at once to make zonal observations for a catalogue of stars between declination +10° and -10°, an undertaking which he carried out on a large scale, and which he, with the aid of his assistant, Trettenero, completed in 1857, after ten years of work. In 1843 he made a scientific journey through Germany, meeting scientists in his own and related fields. In the Encke-Galle catalogue he is credited with the calculation of nineteen comet orbits.
He acquired his greatest repute through his calculations of the perturbations exerted by the great planets on Biela's comet during the period 1832-1852. The time and place of the appearance of this comet in 1846 corresponded exactly with previous calculations. In 1819-20 he published his Elementi di Astronomia (2nd ed., Padua, 1830), a work in two parts. In 1828 appeared his Teorica degli Stromenti Ottici, also published in Padua, in which he explains by means of simple formulas the construction of the different kinds of telescopes, microscopes etc. A number of his dissertations on geodetic and astronomic subjects appeared in the annals of learned associations, in the Correspondence du Baron de Zach, Astronomische Nachrichten, etc. Besides some twenty Italian scientific societies, Santini became a member in 1825 of the London Royal Astronomical Society; in 1845 a corresponding member of the Institut de France; and in 1847 a member of the Kaiserliche Akademie der Wissenschaften of Vienna. When in 1866 Venice was separated from Austria, he became a corresponding member of the last-named association. Danish, Austrian, Spanish, and Italian decorations were bestowed upon him. A complete list of his writings may be found in the "Discorso" (pp. 42–67) by Lorenzoni, mentioned below. References 1. Scientific American, "Death of Professor Santini". Munn & Company. 1877-07-14. p. 15. Sources Attribution • This article incorporates text from a publication now in the public domain: Herbermann, Charles, ed. (1913). "Giovanni Sante Gaspero Santini". Catholic Encyclopedia. New York: Robert Appleton Company. Cites: • LORENZONI, Giovanni Santini, la sua vita e le sue opere. Discorso letto nella chiesa di S. Sofia in Padova (Padua, 1877); idem, In occasione del primo centenario dalla nascita dell' astronomo Santini (Padua, 1887); • POGGENDORFF, Biograf. litt. Handb., II (Leipzig, 1859) Authority control International • ISNI • VIAF National • Norway • Spain • France • BnF data • Germany • Italy • United States • Netherlands People • Italian People • Deutsche Biographie Other • SNAC • IdRef
Mark Sapir Mark Sapir (February 12, 1957 - October 8, 2022)[1][2] was a U.S. and Russian mathematician working in geometric group theory, semigroup theory and combinatorial algebra. He was a Centennial Professor of Mathematics in the Department of Mathematics at Vanderbilt University. Mark V. Sapir Born(1957-02-12)February 12, 1957 Died(2022-10-08)October 8, 2022 NationalityAmerican Alma materUral State University Known forresearch in geometric group theory Scientific career FieldsMathematics InstitutionsVanderbilt University Doctoral advisorLev Shevrin Biographical and professional information Sapir received his undergraduate degree in mathematics (diploma of higher education) from the Ural State University in Yekaterinburg (then called Sverdlovsk), Russia, in 1978.[1] He received his PhD in mathematics (a Candidate of Sciences degree), awarded jointly by the Ural State University and the Moscow State Pedagogical Institute in 1983, with Lev Shevrin as the advisor.[1] Afterwards Sapir held faculty appointments at the Ural State University, the Sverdlovsk Pedagogical Institute, and the University of Nebraska at Lincoln before coming to Vanderbilt University as a professor of mathematics in 1997. He was appointed a Centennial Professor of Mathematics at Vanderbilt in 2001. Sapir gave an invited talk at the International Congress of Mathematicians in Madrid in 2006.[3] He gave an AMS Invited Address at the American Mathematical Society Sectional Meeting in Huntsville, Alabama in October 2008.[4] He gave a plenary talk at the December 2008 Winter Meeting of the Canadian Mathematical Society.[5] Sapir gave the 33rd William J. Spencer Lecture at Kansas State University in November 2008.[6] He gave the 75th KAM Mathematical Colloquium lecture at Charles University in Prague in June 2010.[7] Sapir became a member of the inaugural class of Fellows of the American Mathematical Society in 2012.[8] Sapir founded the Journal of Combinatorial Algebra, published by the European Mathematical Society, and served as its founding editor-in-chief starting in 2016.[9] He also was an editorial board member for the journals Groups, Complexity, Cryptology and Algebra and Discrete Mathematics. His past editorial board positions include Journal of Pure and Applied Algebra, Groups, Geometry, and Dynamics, Algebra Universalis, and International Journal of Algebra and Computation (as Managing Editor). A special mathematical conference in honor of Sapir's 60th birthday took place at the University of Illinois at Urbana–Champaign in May 2017.[10] Mark Sapir's elder daughter, Jenya Sapir, is also a mathematician; she was the first of Maryam Mirzakhani's two students.[11] Currently, she is an assistant professor in the Department of Mathematics of Binghamton University.[12] Mark Sapir and his wife Olga Sapir became naturalized U.S. citizens in July 2003,[13] after suing the BCIS in federal court over a multi-year delay of their citizenship application originally filed in 1999.[14] Mathematical contributions Sapir's early mathematical work concerned mostly semigroup theory. In geometric group theory his best-known and most significant results were obtained in two papers published in the Annals of Mathematics in 2002,[15][16] the first joint with Jean-Camille Birget and Eliyahu Rips, and the second joint with Birget, Rips and Alexander Olshanskii. The first paper provided an essentially complete description of all the possible growth types of Dehn functions of finitely presented groups.
The second paper proved that a finitely presented group has the word problem solvable in non-deterministic polynomial time (NP) if and only if this group embeds as a subgroup of a finitely presented group with polynomial Dehn function. A combined featured review of these two papers in Mathematical Reviews characterized them as "remarkable foundational results regarding isoperimetric functions of finitely presented groups and their connections with the complexity of the word problem".[17] Sapir was also known for his work, mostly joint with Cornelia Druţu, on developing the asymptotic cone approach to the study of relatively hyperbolic groups.[18][19] A 2002 paper of Sapir and Olshanskii constructed the first known finitely presented counter-examples to the von Neumann conjecture.[20] Sapir also introduced, in a 1993 paper with Meakin,[21] the notion of a diagram group, based on finite semigroup presentations. He further developed this notion in subsequent joint papers with Guba.[22] Diagram groups provided a new approach to the study of Thompson groups, which appear as important examples of diagram groups. Selected publications • Guba, Victor; Sapir, Mark (1997). "Diagram groups". Memoirs of the American Mathematical Society. 130 (620). doi:10.1090/memo/0620. MR 1396957. • Sapir, Mark V.; Birget, Jean-Camille; Rips, Eliyahu (2002). "Isoperimetric and isodiametric functions of groups". Annals of Mathematics. Second Series. 156 (2): 345–466. arXiv:math/9811106. doi:10.2307/3597196. JSTOR 3597196. MR 1933723. S2CID 14155715. • Birget, Jean-Camille; Ol'shanskii, Alexander Yu.; Rips, Eliyahu; Sapir, Mark V. (2002). "Isoperimetric functions of groups and computational complexity of the word problem". Annals of Mathematics. Second Series. 156 (2): 467–518. arXiv:math/9811105. doi:10.2307/3597195. JSTOR 3597195. MR 1933724. S2CID 119728458. • Olʹshanskii, Alexander Yu.; Sapir, Mark V. (2002). "Non-amenable finitely presented torsion-by-cyclic groups". Publications Mathématiques de l'IHÉS. 96 (2003): 43–169. doi:10.1007/s10240-002-0006-7. MR 1985031. S2CID 122990460. • Borisov, Alexander; Sapir, Mark (2005). "Polynomial maps over finite fields and residual finiteness of mapping tori of group endomorphisms". Inventiones Mathematicae. 160 (2): 341–356. arXiv:math/0309121. doi:10.1007/s00222-004-0411-2. MR 2138070. S2CID 6210319. • Druţu, Cornelia; Sapir, Mark (2008). "Groups acting on tree-graded spaces and splittings of relatively hyperbolic groups". Advances in Mathematics. 217 (3): 1313–1367. doi:10.1016/j.aim.2007.08.012. MR 2383901. S2CID 10461978. See also • List of International Congresses of Mathematicians Plenary and Invited Speakers References 1. Mark Sapir's CV, Department of Mathematics, Vanderbilt University. Accessed November 4, 2018 2. Mark Sapir Obituary. Accessed October 10, 2022 3. ICM Plenary and Invited Speakers, International Mathematical Union. Accessed November 4, 2018. 4. AMS Sectional Meeting Invited Addresses. 2008 Fall Southeastern Meeting Huntsville, AL, October 24-26, 2008 (Friday – Sunday) Meeting #1044. American Mathematical Society. Accessed November 4, 2018. 5. Plenary lectures, December 2008 Winter Meeting, Canadian Mathematical Society. Accessed November 4, 2018. 6. William J. Spencer Lectures, Department of Mathematics, Kansas State University. Accessed November 4, 2018. 7. KAM Mathematical Colloquia, Department of Applied Mathematics, Charles University. Accessed November 4, 2018. 8.
List of Fellows of the American Mathematical Society, American Mathematical Society. Accessed November 4, 2018. 9. Editorial Board, Journal of Combinatorial Algebra. European Mathematical Society. Accessed November 4, 2018. 10. CONFERENCE ON GEOMETRIC AND COMBINATORIAL METHODS IN GROUP THEORY. In honor of Mark Sapir's 60th birthday Department of Mathematics, University of Illinois at Urbana–Champaign. Accessed November 4, 2018. 11. "Jenya Sapir at the Math Genealogy Project". Retrieved Feb 14, 2020.| 12. Jenya Sapir's webpage, Department of Mathematics of Binghamton University. Accessed November 4, 2018. 13. Sapir vs Aschcroft, Case No. 3:03-0326 (Middle District of Tenn. 2003), Judge Aleta A. Trauger. Order of August 13, 2003. LexisNexis. Accessed November 11, 2018. 14. Jim Patterson, Russian couple file lawsuit over INS delay. Plainview Daily Herald, April 24, 2003. Accessed November 11, 2018. 15. Birget, J.-C.; Ol'shanskii, A. Yu; Rips, E.; Sapir, M. V. (September 2002). "Isoperimetric Functions of Groups and Computational Complexity of the Word Problem". Annals of Mathematics. Second Series. 156 (2): 467. arXiv:math/9811106. doi:10.2307/3597196. JSTOR 3597196. MR 1933723. S2CID 14155715. 16. Sapir, Mark V.; Birget, Jean-Camille; Rips, Eliyahu (September 2002). "Isoperimetric and Isodiametric Functions of Groups". Annals of Mathematics. Second Series. 156 (2): 345. arXiv:math/9811105. doi:10.2307/3597195. JSTOR 3597195. MR 1933724. S2CID 119728458. 17. Ilya Kapovich (2005) Mathematical Reviews, MR1933723 and MR1933724. 18. Druţu, Cornelia; Sapir, Mark (September 2005). "Tree-graded spaces and asymptotic cones of groups". Topology. 44 (5): 959–1058. doi:10.1016/j.top.2005.03.003. MR 2153979. 19. Druţu, Cornelia; Sapir, Mark V. (February 2008). "Groups acting on tree-graded spaces and splittings of relatively hyperbolic groups". Advances in Mathematics. 217 (3): 1313–1367. doi:10.1016/j.aim.2007.08.012. MR 2383901. S2CID 10461978. 20. Olʹshanskii, Alexander Yu.; Sapir, Mark V. (2002). "Non-amenable finitely presented torsion-by-cyclic groups". Publications Mathématiques de l'IHÉS (96): 43–169. arXiv:math/0208237. Bibcode:2002math......8237O. MR 1985031. 21. Meakin, John; Sapir, Mark (1993). "Congruences on free monoids and submonoids of polycyclic monoids". Journal of the Australian Mathematical Society, Series A. 54 (2): 236–253. doi:10.1017/S1446788700037149. MR 1200795. 22. Guba, Victor; Sapir, Mark (1997). "Diagram groups". Memoirs of the American Mathematical Society. 130 (620). doi:10.1090/memo/0620. MR 1396957. External links • Mark Sapir's homepage at Vanderbilt University • Mark Sapir's mathematical blog at WordPress • Mark Sapir's entry at the Mathematics Genealogy Project Authority control International • ISNI • VIAF National • Germany • Israel • United States • Czech Republic • Netherlands Academics • DBLP • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • Scopus • zbMATH Other • IdRef
Sara Billey Sara Cosette Billey (born February 6, 1968 in Alva, Oklahoma, United States) is an American mathematician working in algebraic combinatorics. She is known for her contributions on Schubert polynomials, singular loci of Schubert varieties, Kostant polynomials, and Kazhdan–Lusztig polynomials[2] often using computer verified proofs. She is currently a professor of mathematics at the University of Washington.[3] Sara Billey Born (1968-02-06) February 6, 1968 Alva, Oklahoma NationalityAmerican Alma materMassachusetts Institute of Technology University of California, San Diego AwardsPresidential Early Career Award for Scientists and Engineers[1] Scientific career FieldsMathematics InstitutionsUniversity of Washington Doctoral advisorAdriano Garsia Mark Haiman Billey did her undergraduate studies at the Massachusetts Institute of Technology, graduating in 1990.[3] She earned her Ph.D. in mathematics in 1994 from the University of California, San Diego, under the joint supervision of Adriano Garsia and Mark Haiman.[4] She returned to MIT as a postdoctoral researcher with Richard P. Stanley, and continued there as an assistant and associate professor until 2003, when she moved to the University of Washington.[3] In 2012, she became a fellow of the American Mathematical Society.[5] She also was an AMS Council member at large from 2005 to 2007.[6] Publications Selected books • Sara, Billey; Lakshmibai, V. (2000). Singular loci of Schubert varieties. Boston: Birkhäuser. ISBN 9780817640927. OCLC 44750779.[7] Selected articles • Billey, Sara; Haiman, Mark (1995). "Schubert polynomials for the classical groups". Journal of the American Mathematical Society. 8 (2): 443–482. doi:10.1090/s0894-0347-1995-1290232-1. ISSN 0894-0347. • Billey, Sara; Warrington, Gregory (2003). "Maximal singular loci of Schubert varieties in 𝑆𝐿(𝑛)/𝐵". Transactions of the American Mathematical Society. 355 (10): 3915–3945. doi:10.1090/s0002-9947-03-03019-8. ISSN 0002-9947. • Billey, Sara (1999). "Kostant polynomials and the cohomology ring for G/B". Duke Mathematical Journal. 96 (1): 205–224. doi:10.1215/S0012-7094-99-09606-0. ISSN 0012-7094. S2CID 16184223. References 1. "The Presidential Early Career Award for Scientists and Engineers: Recipient Details:Sara Billey". NSF. 2. "Billey, Sara C." MathSciNet. Retrieved 2017-04-10. 3. "Curriculum vitae" (PDF). September 26, 2017. Retrieved 2018-04-30. 4. Sara Billey at the Mathematics Genealogy Project 5. "List of Fellows of the American Mathematical Society". American Mathematical Society. Archived from the original on June 26, 2015. Retrieved July 31, 2015. 6. "AMS Committees". American Mathematical Society. Retrieved 2023-03-29. 7. Review of Singular loci of Schubert varieties by Michel Brion (2001), MR1782635 External links • "Sara Billey's Home Page". • "AWM 2013 Essay Contest Winner writes about Sara Billey". • Sara Billey publications indexed by Google Scholar Authority control International • ISNI • VIAF National • Norway • France • BnF data • Israel • United States • Netherlands Academics • DBLP • Google Scholar • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Sara Lombardo Sara Lombardo is an Italian applied mathematician whose research topics include nonlinear dynamics, rogue waves and solitons, integrable systems, and automorphic Lie algebras. She is Executive Dean of the School of Mathematical & Computer Sciences at Heriot-Watt University. Previously she was professor of mathematics at Loughborough University, and associate dean with teaching responsibilities.[1] Education and career Lombardo studied mathematical physics at Sapienza University of Rome, earning a laurea in physics in 2000.[1][2] She completed her PhD at the University of Leeds in 2004; her dissertation, Reductions of integrable equations and automorphic Lie algebras, was supervised by Alexander V. Mikhailov.[1][2][3] After postdoctoral research positions at the University of Leeds, University of Kent, Manchester University, and Vrije Universiteit Amsterdam,[4] she joined the academic staff at Northumbria University in 2011,[2] becoming head of mathematics there.[1] She moved to Loughborough University in 2017 [2][1] and to Heriot-Watt in 2022. Recognition Lombardo is a Fellow of the Higher Education Academy and a Fellow of the Institute of Mathematics and its Applications.[2] She was one of the 2020 winners of the Suffrage Science award in maths and computing.[5] References 1. "Professor Sara Lombardo", Mathematical Sciences Staff, Loughborough University, retrieved 2021-08-11 2. "Sara Lombardo", ORCID, retrieved 2021-08-11 3. Sara Lombardo at the Mathematics Genealogy Project 4. "Giant waves in the ocean: From sea monsters to science", nustem, Northumbria University, retrieved 2021-08-11 5. "Maths and Computing Awardee 2020: Professor Sara Lombardo", Suffrage Science awards, 29 July 2021, retrieved 2021-08-11 External links • Sara Lombardo publications indexed by Google Scholar Authority control: Academics • DBLP • MathSciNet • Mathematics Genealogy Project • ORCID • Publons • ResearcherID
Sarah-Marie Belcastro Sarah-Marie Belcastro (aka sarah-marie belcastro, born 1970 in San Diego) is an American mathematician and book author. She is an instructor at the Art of Problem Solving Online School[1] and is the director of Bryn Mawr's residential summer program MathILy.[2] Although her doctoral research was in algebraic geometry, she has also worked extensively in topological graph theory.[3] She is known for and has written extensively about mathematical knitting, and has co-edited three books on fiber mathematics.[4] She herself exclusively uses the form "sarah-marie belcastro".[5][6] Biography Belcastro was born in San Diego, CA in 1970, and grew up mostly in Andover, MA, and in Dubuque, IA.[6] She earned a B.S. (1991) in Mathematics and Astronomy from Haverford College, an M.S. (1993) from The University of Michigan, Ann Arbor, and a Ph.D. (1997) there for a thesis on “Picard Lattices of Families of K3 Surfaces” done with Igor Dolgachev.[7] Since 2012, she has also been an instructor at the Art of Problem Solving Online School.[1] Since 2013, she has been the director of Bryn Mawr College's residential summer program MathILy (serious Mathematics Infused with Levity).[2] She is also a guest faculty member at Sarah Lawrence College. She was Associate Editor for The College Mathematics Journal (2003—2019). She has also lectured frequently at the University of Massachusetts, Amherst since 2012.[8][9] Selected publications Books • Discrete Mathematics with Ducks (AK Peters, 2012; 2nd ed., CRC Press, 2019, ISBN 978-1-315-16767-1).[10] • Figuring Fibers, edited by belcastro and Carolyn Yackel, Providence, RI: American Mathematics Society, 2018.[11] • Crafting by Concepts: fiber arts and mathematics, edited by belcastro and Yackel. AK Peters, 2011.[12] • Making Mathematics with Needlework: Ten Papers and Ten Projects, edited by belcastro and Yackel. Wellesley, MA: AK Peters, 2007.[13] Journal papers • belcastro, sarah-marie (2021). "Color-induced subgraphs dual to Hamilton cycles of embedded cubic graphs" (PDF). Australasian Journal of Combinatorics. 81: 319–333. MR 4312576. • belcastro, sarah-marie (2016). "Small snarks and 6-chromatic triangulations on the Klein bottle" (PDF). Australasian Journal of Combinatorics. 65: 232–250. MR 3509660. • belcastro, sarah-marie; Haas, Ruth (2015). "Triangle-free uniquely 3-edge colorable cubic graphs". Contributions to Discrete Mathematics. 10 (2): 39–44. arXiv:1508.06934. doi:10.11575/cdm.v10i2.62320. MR 3499076. • Albertson, Michael O.; Alpert, Hannah; belcastro, sarah-marie; Haas, Ruth (2010). "Grünbaum colorings of toroidal triangulations". Journal of Graph Theory. 63 (1): 68–81. arXiv:0805.0394. doi:10.1002/jgt.20406. MR 2590325. S2CID 7177893. • belcastro, sarah-marie (2009). "Every topological surface can be knit: a proof". Journal of Mathematics and the Arts. 3 (2): 67–83. doi:10.1080/17513470902896561. MR 2553752. S2CID 120714012. • belcastro, sarah-marie; Kaminski, Jackie (2007). "Families of dot-product snarks on orientable surfaces of low genus". Graphs and Combinatorics. 23 (3): 229–240. doi:10.1007/s00373-007-0729-9. MR 2320577. S2CID 31347628. • belcastro, sarah-marie; Hull, Thomas C. (2002). "Modelling the folding of paper into three dimensions using affine transformations". Linear Algebra and Its Applications. 348 (1–3): 273–282. doi:10.1016/S0024-3795(01)00608-5. MR 1902132. References 1. AoPS Online Art of Problem Solving School 2. MathILy Bryn Mawr College 3. MathILy people MathILy.org 4. 
Adventures in Mathematical Knitting by Sarah-Marie Belcastro, American Scientist, 2021 5. website of dr. sarah-marie belcastro 6. dr. sarah-marie belcastro toroidalsnark.net 7. Sarah-Marie Belcastro at the Mathematics Genealogy Project 8. Curriculum Vitae September 2021 9. Faculty News Briefs University of Massachusetts, Amherst, June 2012 10. Reviews of Discrete Mathematics with Ducks: • Ashbacher, Charles (August 2012). "Review". MAA Reviews. • Székely, László A. zbMATH. Zbl 1250.05001.{{cite journal}}: CS1 maint: untitled periodical (link) 11. Reviews of Figuring Fibers: • Collins, Julia (July 2020). "Review" (PDF). London Mathematical Society Newsletter (489): 39–40. Zbl 07456182.{{cite journal}}: CS1 maint: Zbl (link) • Torrence, Eve (October 2019). Journal of Mathematics and the Arts. 14 (3): 283–284. doi:10.1080/17513472.2019.1666459. S2CID 209985329.{{cite journal}}: CS1 maint: untitled periodical (link) • West, Mckenzie (August 2019). "Review". MAA Reviews. • Wilmer, Elizabeth (September 2020). "Or/And: A Review of Figuring Fibers" (PDF). Notices of the American Mathematical Society. 67 (8): 1158–1161. doi:10.1090/noti2125. S2CID 225196886. 12. Reviews of Crafting by Concepts: • Babenko, Yuliya (March 2012). Journal of Mathematics and the Arts. 6 (1): 53–54. doi:10.1080/17513472.2011.642264. S2CID 123015866.{{cite journal}}: CS1 maint: untitled periodical (link) • Fortune, Mary (July 2013). The Mathematical Gazette. 97 (539): 382–383. doi:10.1017/S0025557200006422. JSTOR 24496858. S2CID 233362317.{{cite journal}}: CS1 maint: untitled periodical (link) • Habermann, Katharina (December 2011). Mitteilungen der Deutschen Mathematiker-Vereinigung. 19 (4): 216. doi:10.1515/dmvm-2011-0090. S2CID 177386607.{{cite journal}}: CS1 maint: untitled periodical (link) • Weinhold, Marcia Weller (November 2012). The Mathematics Teacher. 106 (4): 318. doi:10.5951/mathteacher.106.4.0318. JSTOR 10.5951/mathteacher.106.4.0318.{{cite journal}}: CS1 maint: untitled periodical (link) 13. Reviews of Making Mathematics with Needlework: • Atherley, Kate (Spring 2009). "Review". Cool stuff!. Knitty. • Cross, Alison (February 2008). "Review" (PDF). The London Mathematical Society Newsletter. 367: 28. • Fisher, Gwen (June 2008). Journal of Mathematics and the Arts. 2 (2): 101–103. doi:10.1080/17513470802222827. S2CID 121469834.{{cite journal}}: CS1 maint: untitled periodical (link) • Fortune, Mary (July 2010). The Mathematical Gazette. 94 (530): 378–379. doi:10.1017/s0025557200007014. JSTOR 25759714. S2CID 193323181.{{cite journal}}: CS1 maint: untitled periodical (link) • Goetting, Mary (November 2008). The Mathematics Teacher. 102 (4): 319. JSTOR 20876356.{{cite journal}}: CS1 maint: untitled periodical (link) • Hsu, Pao-Sheng (January–February 2010). "Review". AWM Newsletter. Association for Women in Mathematics. 40 (1): 20–23. • Peeva, Ketty. zbMATH. Zbl 1142.00003.{{cite journal}}: CS1 maint: untitled periodical (link) • Phillips, Anna Lena (2008). "Picking up stitches". American Scientist. 96 (3): 259. doi:10.1511/2008.71.3591. • Sipics, Michelle (December 2007). "Math in a material world". SIAM News. External links • Official home page • Sarah-Marie Belcastro publications indexed by Google Scholar Authority control: Academics • MathSciNet • Mathematics Genealogy Project
Sarah Whatmore (geographer) Dame Sarah Jane Whatmore DBE FBA FAcSS (born 25 September 1959[1]) is a British geographer. She is a professor of environment and public policy at Oxford University. She is a professorial fellow at Keble College, moving from Linacre College in 2012.[2] She was associate head (research) of the Social Sciences Division of the university from 2014 to 2016, and became pro-vice chancellor (education) of Oxford in January 2017. From 2018 she has been head of the Social Sciences Division.[3] Dame Sarah Whatmore DBE FBA FAcSS Born Aldershot, Hampshire NationalityBritish Alma materUniversity College London Known forCritical geography Scientific career FieldsHuman-Environment geography, critical geography InstitutionsOxford University ThesisThe 'other half' of the family farm: an analysis of the position of 'farm wives' in the familial gender division of labor on the farm (1988) Doctoral advisorRichard Munton Background Born in Aldershot, Hampshire into a military family, Whatmore moved often - including Germany, Cyprus, and Hong Kong.[4] She studied geography at University College London (BA 1981), has an MPhil (Town Planning) in 1983 (Financial institutions and the ownership of agricultural land) and worked at the Greater London Council. She returned to UCL for a PhD supervised by Richard Munton (The 'other half' of the family farm: an analysis of the position of 'farm wives' in the familial gender division of labor on the farm, 1988) and lectured at Leeds University, Bristol University (1989-2001) and the Open University (2001-2004).[5] She lives in Upton, Oxfordshire.[4] Scholarship Whatmore began studying rural geography, gender and alternative food networks, moving into the critical geography of environmental issues at the end of the 1990s. She has questioned Marxist materialist approaches in favour of actor-network theory and feminist science studies. Her approach, laid out in her 2002 book Hybrid Geographies,[6] attempts to develop what she terms "more than human" modes of inquiry, and question the relationship between science and democracy. Hybrid Geographies has been cited over 1,800 times.[7] Her research focuses on the treatment of evidence and role of expertise in environmental governance, against growing reliance on computer modelling techniques. It is characterized by a commitment to experimental and collaborative research practices that bring the different knowledge competences of social and natural scientists into play with those of diverse local publics living with environmental risks and hazards like floods and droughts. Her ideas were developed further in Political Matter (Whatmore & Braun eds. 2010). Her critical ideas have been well received by theorists, but less so by policy-oriented environmental thinkers and traditional geographers less inclined to "theorise" human-environment relationships. Nonetheless, she has been a member of the Science Advisory Council to the Department for Environment, Food and Rural Affairs (DEFRA) and chair of its Social Science Expert Group; a member of the Science Advisory Group established to advise the Cabinet Office’s National Flood Resilience Review (2016), and as a member of the board of the Parliamentary Office of Science and Technology. 
Honours and awards • 2013: Ellen Churchill Semple award, Department of Geography, University of Kentucky[8] • 2014: Fellow of the British Academy, the United Kingdom's national academy for the humanities and social sciences.[9] • Fellow, Academy of Social Sciences • DSc, University of Bristol. • 2020: Dame Commander of the Order of the British Empire (DBE) in the 2020 New Year Honours for services to the study of environmental policy[10] Selected bibliography • Whatmore, Sarah; Braun, Bruce (2010). Political matter technoscience, democracy, and public life. Minneapolis, Minnesota: University of Minnesota Press. ISBN 9780816670895. • Gregory, Derek; Johnston, Ron; Pratt, Geraldine; Watts, Michael; Whatmore, Sarah, eds. (2009). The dictionary of human geography (5th ed.). Chichester (U.K.): Wiley-Blackwell. ISBN 978-1-4051-3288-6. • Nigel Thrift and Sarah Whatmore (eds.). 2004. Cultural geography: critical concepts in the social sciences. London: Routledge. • Pryke, Michael; Rose, Gillian; Whatmore, Sarah (2003). Using social theory : thinking through research (Reprint. ed.). London: SAGE Publications in association with the Open University. ISBN 9780761943778. • Whatmore, Sarah (2002). Hybrid geographies: natures, cultures, spaces. London Thousand Oaks, California: SAGE Publications. ISBN 9780761965671. • Sarah Whatmore, Terry Marsden, Philip Lowe (eds.) 1994. Gender and rurality. London: David Fulton Publishers. • Philip Lowe, Terry Marsden, Sarah Whatmore (eds.). 1994. Regulating agriculture. London: David Fulton Publishers. • Sarah Whatmore. 1991. Farming women: gender, work and family enterprise. Basingstoke: Macmillan. • Terry Marsden, Philip Lowe, Sarah Whatmore (eds) 1992. Labour and locality: uneven development and the rural labour process". London: David Fulton Publishers. • Terry Marsden, Philip Lowe, Sarah Whatmore (eds.). 1990. Rural restructuring: global processes and their responses. London: David Fulton Publishers. • Philip Lowe, Terry Marsden, Sarah Whatmore (eds.). 1990. Technological change and the rural environment. London: David Fulton Publishers. References 1. "Whatmore, Prof. Sarah Jane", Who's Who (online edition, Oxford University Press, December 2017). Retrieved 5 July 2018. 2. "Keble welcomes Professor Sarah Whatmore - Keble Geography". www.keble-oxford-geography.info. 3. "Sarah Whatmore". University of Oxford Social sciences Division. Retrieved 4 October 2018. 4. http://mpegmedia.abc.net.au/classic/midday/201404/miv-2014-04-10.mp3 5. White, Chris. "Professor Sarah Whatmore - Staff - School of Geography and the Environment - University of Oxford". www.geog.ox.ac.uk. 6. Whatmore, Sarah (2002). Hybrid geographies: natures, cultures, spaces. London Thousand Oaks, California: SAGE Publications. ISBN 9780761965671. 7. "Google Scholar". scholar.google.com.au. 8. Ellen Churchill Semple Day (accessed 30 June 2015) 9. "British Academy announces 42 new fellows". Times Higher Education. 18 July 2014. Retrieved 18 July 2014. 10. "No. 62866". The London Gazette (Supplement). 28 December 2019. p. N8. Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • Belgium • United States • Latvia • Czech Republic • Australia • Netherlands Academics • CiNii • Scopus People • Trove Other • IdRef
Sarah Witherspoon Sarah Jane Witherspoon is an American mathematician interested in topics in abstract algebra, including Hochschild cohomology[SW99] and quantum groups.[W96][BW04] She is a professor of mathematics at Texas A&M University[1] Sarah Witherspoon Born Sarah Jane Witherspoon NationalityAmerican Alma materArizona State University University of Chicago Scientific career FieldsMathematics InstitutionsUniversity of Toronto Mills College University of Wisconsin–Madison Mount Holyoke College University of Massachusetts Amherst Amherst College Texas A&M University Education Witherspoon graduated from Arizona State University in 1988,[1] where she earned the Charles Wexler Mathematics Prize as the best mathematics student at ASU that year.[2] She went on to graduate study in mathematics at the University of Chicago, and completed her Ph.D. in 1994.[1] Her dissertation, supervised by Jonathan Lazare Alperin, was The Representation Ring of the Quantum Double of a Finite Group.[3] Career Witherspoon taught at the University of Toronto from 1994 to 1998. After holding visiting assistant professorships at Mills College, the University of Wisconsin–Madison, Mount Holyoke College, the University of Massachusetts Amherst, and Amherst College, she joined the Texas A&M faculty in 2004.[1] Honors and awards She was elected to the 2018 class of fellows of the American Mathematical Society, "for contributions to representation theory and cohomology of Hopf algebras, quantum groups, and related objects, and for service to the profession and mentoring".[4] She was named MSRI Simons Professor for Spring 2020.[5] Selected publications W96. Witherspoon, S. J. (1996), "The representation ring of the quantum double of a finite group", Journal of Algebra, 179 (1): 305–329, doi:10.1006/jabr.1996.0014, hdl:10338.dmlcz/140609, MR 1367852 SW99. Siegel, Stephen F.; Witherspoon, Sarah J. (1999), "The Hochschild cohomology ring of a group algebra", Proceedings of the London Mathematical Society, Third Series, 79 (1): 131–157, doi:10.1112/S0024611599011958, MR 1687539, S2CID 53465181 BW04. Benkart, Georgia; Witherspoon, Sarah (2004), "Two-parameter quantum groups and Drinfel'd doubles", Algebras and Representation Theory, 7 (3): 261–286, arXiv:math/0011064, doi:10.1023/B:ALGE.0000031151.86090.2e, MR 2070408, S2CID 2102411 W19. Witherspoon, Sarah J. (2019), Hochschild Cohomology for Algebras, Graduate Studies in Mathematics, vol. 204, American Mathematical Society, ISBN 978-1-4704-4931-5 References 1. Curriculum vitae, retrieved 2017-11-03 2. Charles Wexler Awards, Arizona State University School of Mathematical and Statistical Sciences, retrieved 2017-11-04 3. Sarah Witherspoon at the Mathematics Genealogy Project 4. 2018 Class of the Fellows of the AMS, American Mathematical Society, retrieved 2017-11-03 5. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 2021-06-07. External links • Home page Authority control Academics • MathSciNet • Mathematics Genealogy Project Other • IdRef
Leonard Sarason Leonard Sarason (1925 – September 24, 1994) was a music composer, a pianist, and a mathematician. He earned a master's degree in music composition from Yale University, supervised by Paul Hindemith.[1][2][3] After a doctorate in mathematics at New York University supervised by Kurt Otto Friedrichs,[4] he taught mathematics at Stanford University and the University of Washington.[1][2] His mathematical research concerned partial differential equations.[1] Media • Piano Sonata (1948) References 1. In memory 3/95, Univ. of Washington, retrieved 2015-02-12. 2. The Al Goldstein collection in the Pandora Music repository at http://www.ibiblio.org/pandora/mp3/contrib/Martha_Goldstein_Live/Readme 3. Hersh, Reuben; John-Steiner, Vera (2010), Loving and Hating Mathematics: Challenging the Myths of Mathematical Life, Princeton University Press, p. 80, ISBN 9781400836116. 4. Leonard Sarason at the Mathematics Genealogy Project Authority control Academics • MathSciNet • Mathematics Genealogy Project • zbMATH Artists • MusicBrainz
Sarason interpolation theorem In mathematics, particularly complex analysis, the Sarason interpolation theorem, introduced by Sarason (1967), is a generalization of the Carathéodory interpolation theorem and Nevanlinna–Pick interpolation. References • Sarason, Donald (1967). "Generalized interpolation in H∞". Transactions of the American Mathematical Society. 127 (2): 179–203. doi:10.2307/1994641. ISSN 0002-9947. JSTOR 1994641. MR 0208383.
Arthur Sard Arthur Sard (28 July 1909, New York City – 31 August 1980, Basel) was an American mathematician, famous for his work in differential topology and in spline interpolation. His fame stems primarily from Sard's theorem, which says that the set of critical values of a differentiable function which has sufficiently many derivatives has measure zero.[1] Life and career Arthur Sard was born and grew up in New York City and spent most of his adult life there. He attended the Friends Seminary, a private school in Manhattan, and went to college at Harvard University, where he received in 1931 his bachelor's degree, in 1932 his master's degree, and in 1936 his PhD under the direction of Marston Morse.[1] Sard's PhD thesis has the title The measure of the critical values of functions.[2] He was one of the first faculty members at the then newly founded Queens College, where he worked from 1937 to 1970.[1] During WWII Sard worked as a member, under the auspices of the Applied Mathematics Panel, of the Applied Mathematics Group of Columbia University (AMG-C), especially in support of fire control for machine guns mounted on bombers. Saunders Mac Lane wrote concerning Sard: “His judicious judgments kept AMG-C on a straight course, […]”.[3] Sard retired as professor emeritus in 1970 at Queens College and then worked in La Jolla, where he spent five years as a research associate in the mathematics department of the University of California, San Diego. In 1975 he went to Binningen near Basel and taught at various European universities and research institutes. In 1978 he accepted an invitation from the Soviet Academy of Sciences to be a guest lecturer. In 1978 and 1979 he was a guest professor at the University of Siegen. Arthur Sard died on 31 August 1980 in Basel.[1] From 1938 until his death Sard published almost forty research articles in refereed mathematical journals.[4] He also wrote two monographs: in 1963 the book Linear Approximation and in 1971, in collaboration with Sol Weintraub, A Book of Splines.[4] According to the book review from the Deutsche Mathematiker-Vereinigung the content-rich („inhaltsreiche“) Linear Approximation is an important contribution to the theory of approximation of integrals, derivatives, function values, and sums („ein wesentlicher Beitrag zur Theorie der Approximation von Integralen, Ableitungen, Funktionswerten und Summen“).[5] Works Sard published thirty-eight research articles and the two following monographs: • Arthur Sard: Linear approximation. 2nd edn. American Mathematical Society, Providence, Rhode Island 1963, ISBN 0-8218-1509-1 (Mathematical Surveys and Monographs. Vol. 9). • Arthur Sard, Sol Weintraub: A Book of Splines. John Wiley & Sons Inc, New York 1971, ISBN 0-471-75415-3 Articles • Sard, Arthur (1942), "The measure of the critical values of differentiable maps", Bulletin of the American Mathematical Society, 48 (12): 883–890, doi:10.1090/S0002-9904-1942-07811-6, MR 0007523, Zbl 0063.06720 • "The equivalence of n-measure and Lebesgue measure in En", Bull. Amer. Math. Soc., 49 (10): 758–759, 1943, doi:10.1090/s0002-9904-1943-08025-1, MR 0008837 • "The remainder in approximations by moving averages", Bull. Amer. Math. Soc., 54 (8): 788–792, 1948, doi:10.1090/s0002-9904-1948-09081-4, MR 0028366 • "Remainders as integrals of partial derivatives", Proc. Amer. Math. Soc., 3 (5): 732–741, 1952, doi:10.1090/s0002-9939-1952-0050645-6, MR 0050645 • "Approximation and variance", Trans. Amer. Math.
Soc., 73 (3): 428–446, 1952, doi:10.1090/s0002-9947-1952-0052688-x, MR 0052688 • Sard, Arthur (1965), "Hausdorff Measure of Critical Images on Banach Manifolds", American Journal of Mathematics, 87 (1): 158–174, doi:10.2307/2373229, JSTOR 2373229, MR 0173748, Zbl 0137.42501 and also Sard, Arthur (1965), "Errata to Hausdorff measures of critical images on Banach manifolds", American Journal of Mathematics, 87 (3): 158–174, doi:10.2307/2373229, JSTOR 2373074, MR 0180649, Zbl 0137.42501 • "Function spaces", Bull. Amer. Math. Soc., 71 (3, Part 1): 397–418, 1965, doi:10.1090/s0002-9904-1965-11282-4, MR 0185429 • "A theory of cotypes", Bull. Amer. Math. Soc., 75 (5): 936–940, 1969, doi:10.1090/s0002-9904-1969-12302-5, MR 0245051 Sources • Franz-Jürgen Delvos, Walter Schempp: Arthur Sard – In Memoriam. In: Walter Schempp, Karl Zeller (eds.): Multivariate Approximation Theory II, Proceedings of the Conference held at the Mathematical Research Institute at Oberwolfach, Black Forest, February 8–12, 1982. Birkhäuser Verlag, Basel 1982, ISBN 3-7643-1373-0 (International Series of Numerical Mathematics. Vol. 61), pp. 23–24. References 1. Delvos, Schempp (1982) 2. Notes. In: Bulletin of the American Mathematical Society. Vol. 43, No. 5, 1937, ISSN 1088-9485, (PDF) 3. Saunders Mac Lane: Requiem for the Skillful. In: Notices of the American Mathematical Society. Vol. 44, No. 2, 1997, ISSN 0002-9920, pp. 207–208 (PDF; 43 kB). 4. News and Notices. In: The American Mathematical Monthly, Vol. 88, No. 1, January 1981, Mathematical Association of America, ISSN 0002-9890, pp. 81–82 (online from JSTOR) 5. Manfred v. Golitschek, Paul Otto Runck: A. Sard, Linear Approximation. In: Jahresbericht der Deutschen Mathematiker-Vereinigung, Nr. 73, B. G. Teubner Verlag, Stuttgart 1971/72, ISSN 0012-0456, S. 31–33 (online from DigiZeitschriften in German) External links • Arthur Sard at the Mathematics Genealogy Project Authority control International • FAST • ISNI • VIAF National • Israel • United States • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Sargan–Hansen test The Sargan–Hansen test or Sargan's $J$ test is a statistical test used for testing over-identifying restrictions in a statistical model. It was proposed by John Denis Sargan in 1958,[1] and several variants were derived by him in 1975.[2] Lars Peter Hansen re-worked through the derivations and showed that it can be extended to general non-linear GMM in a time series context.[3] The Sargan test is based on the assumption that model parameters are identified via a priori restrictions on the coefficients, and tests the validity of over-identifying restrictions. The test statistic can be computed from residuals from instrumental variables regression by constructing a quadratic form based on the cross-product of the residuals and exogenous variables.[4]: 132–33  Under the null hypothesis that the over-identifying restrictions are valid, the statistic is asymptotically distributed as a chi-square variable with $(m-k)$ degrees of freedom (where $m$ is the number of instruments and $k$ is the number of endogenous variables). See also • Durbin–Wu–Hausman test References 1. Sargan, J. D. (1958). "The Estimation of Economic Relationships Using Instrumental Variables". Econometrica. 26 (3): 393–415. doi:10.2307/1907619. JSTOR 1907619. 2. Sargan, J. D. (1988) [1975]. "Testing for misspecification after estimating using instrumental variables". Contributions to Econometrics. New York: Cambridge University Press. ISBN 0-521-32570-6. 3. Hansen, Lars Peter (1982). "Large Sample Properties of Generalized Method of Moments Estimators". Econometrica. 50 (4): 1029–1054. doi:10.2307/1912775. JSTOR 1912775. 4. Sargan, J. D. (1988). Lectures on Advanced Econometric Theory. Oxford: Basil Blackwell. ISBN 0-631-14956-2. Further reading • Davidson, Russell; McKinnon, James G. (1993). Estimation and Inference in Econometrics. New York: Oxford University Press. pp. 616–620. ISBN 0-19-506011-3. • Verbeek, Marno (2004). A Guide to Modern Econometrics (2nd ed.). New York: John Wiley & Sons. pp. 142–158. ISBN 0-470-85773-0. • Kitamura, Yuichi (2006). "Specification Tests with Instrumental Variable and Rank Deficiency". In Corbae, Dean; et al. (eds.). Econometric Theory and Practice: Frontiers of Analysis and Applied Research. New York: Cambridge University Press. pp. 59–124. ISBN 0-521-80723-9.
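As an illustration of the statistic described above, the following is a minimal sketch, not drawn from the article's sources, of Sargan's J test in its common homoskedastic form: n times the R² from a regression of the two-stage least squares residuals on the full instrument set, compared against a chi-square distribution with m − k degrees of freedom. The data are simulated and all variable names and coefficient values are hypothetical; applied work would normally rely on an established econometrics package.

```python
# Illustrative sketch only: Sargan's J statistic for an over-identified linear IV
# model with one endogenous regressor (k = 1) and three excluded instruments (m = 3),
# computed as n * R^2 from regressing the 2SLS residuals on the instruments.
# Data are simulated; names and values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
u = rng.normal(size=n)                          # structural error
z = rng.normal(size=(n, 3))                     # three valid instruments
x = z @ np.array([1.0, 0.5, 0.5]) + 0.8 * u + rng.normal(size=n)  # endogenous regressor
y = 2.0 * x + u

Z = np.column_stack([np.ones(n), z])            # instrument matrix, with constant
X = np.column_stack([np.ones(n), x])            # regressor matrix, with constant

# Two-stage least squares estimate and residuals.
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)           # projection onto the instrument space
beta_2sls = np.linalg.solve(X.T @ P @ X, X.T @ P @ y)
resid = y - X @ beta_2sls

# Sargan statistic: n * R^2 from regressing the residuals on all instruments.
fitted = Z @ np.linalg.solve(Z.T @ Z, Z.T @ resid)
r2 = 1.0 - np.sum((resid - fitted) ** 2) / np.sum(resid ** 2)
J = n * r2
df = 3 - 1                                      # m - k: instruments minus endogenous variables
print(f"J = {J:.3f}, df = {df}, p-value = {stats.chi2.sf(J, df):.3f}")
```

With valid instruments, as in this simulation, the p-value is typically large, so the over-identifying restrictions are not rejected.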
Sariel Har-Peled Sariel Har-Peled (born July 14, 1971, in Jerusalem)[1] is an Israeli–American computer scientist known for his research in computational geometry. He is a Donald Biggar Willett Professor in Engineering at the University of Illinois at Urbana–Champaign.[2] Har-Peled was a student at Tel Aviv University, where he earned a bachelor's degree in mathematics and computer science in 1993, a master's degree in computer science in 1995, and a Ph.D. in 1999. His master's thesis, The Complexity of Many Cells in the Overlay of Many Arrangements, and his doctoral dissertation, Geometric Approximation Algorithms and Randomized Algorithms for Planar Arrangements, were both supervised by Micha Sharir.[1][3] After postdoctoral research at Duke University, he joined the University of Illinois in 2000.[1] He was named Willett Professor in 2016.[2] Har-Peled is the author of a book on approximation algorithms in computational geometry, Geometric approximation algorithms (American Mathematical Society, 2011).[4][5] References 1. Curriculum vitae (PDF), July 30, 2018, retrieved 2018-09-22 2. Chairs and Professorships, Illinois Computer Science, retrieved 2018-09-22 3. Sariel Har-Peled at the Mathematics Genealogy Project 4. Har-Peled authors text on geometric approximation algorithms, Illinois Computer Science, July 12, 2011, retrieved 2018-09-22 5. Stephen, Tamon, "Review of Geometric approximation algorithms", Mathematical Reviews, MR 2760023 External links • Home page • Sariel Har-Peled publications indexed by Google Scholar Authority control National • Germany • Israel • United States Academics • DBLP • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • zbMATH
Autoregressive integrated moving average In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. Both of these models are fitted to time series data, either to better understand the data or to forecast future points in the series. ARIMA models are applied in some cases where the data show evidence of non-stationarity in the mean (but not in the variance/autocovariance), where an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function (i.e., the trend).[1] When seasonality is present in a time series, seasonal differencing[2] can be applied to eliminate the seasonal component. Since the ARMA model, according to Wold's decomposition theorem,[3][4][5] is theoretically sufficient to describe a regular (a.k.a. purely nondeterministic[5]) wide-sense stationary time series, a non-stationary time series must first be made stationary, e.g., by differencing, before an ARMA model can be fitted.[6] Note that if the time series contains a predictable sub-process (a.k.a. pure sine or complex-valued exponential process[4]), the predictable component is treated as a non-zero-mean but periodic (i.e., seasonal) component in the ARIMA framework so that it is eliminated by the seasonal differencing. The AR part of ARIMA indicates that the evolving variable of interest is regressed on its own lagged (i.e., prior) values. The MA part indicates that the regression error is actually a linear combination of error terms whose values occurred contemporaneously and at various times in the past.[7] The I (for "integrated") indicates that the data values have been replaced with the difference between their values and the previous values (and this differencing process may have been performed more than once). The purpose of each of these features is to make the model fit the data as well as possible. Non-seasonal ARIMA models are generally denoted ARIMA(p,d,q) where parameters p, d, and q are non-negative integers, p is the order (number of time lags) of the autoregressive model, d is the degree of differencing (the number of times the data have had past values subtracted), and q is the order of the moving-average model. Seasonal ARIMA models are usually denoted ARIMA(p,d,q)(P,D,Q)m, where m refers to the number of periods in each season, and the uppercase P,D,Q refer to the autoregressive, differencing, and moving average terms for the seasonal part of the ARIMA model.[8][2] When two of the three terms are zero, the model may be referred to based on the non-zero parameter, dropping "AR", "I" or "MA" from the acronym describing the model. For example, ${\text{ARIMA}}(1,0,0)$ is AR(1), ${\text{ARIMA}}(0,1,0)$ is I(1), and ${\text{ARIMA}}(0,0,1)$ is MA(1). ARIMA models can be estimated following the Box–Jenkins approach.
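As an illustration of this notation, the following is a minimal sketch (not part of the article) of fitting an ARIMA(p,d,q) model with the Python statsmodels package, which is one of the implementations listed under Software implementations below; the data are simulated and the order (1, 1, 1) is an arbitrary choice for demonstration.

```python
# Illustrative sketch only (simulated data; statsmodels is listed under
# "Software implementations" below).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
# A non-stationary series: a random walk with drift, so d = 1 is a natural choice.
y = np.cumsum(rng.normal(loc=0.1, scale=1.0, size=300))

# ARIMA(p, d, q) = ARIMA(1, 1, 1): one AR lag, one difference, one MA lag.
result = ARIMA(y, order=(1, 1, 1)).fit()
print(result.summary())            # estimated AR and MA coefficients and sigma^2
print(result.forecast(steps=10))   # ten-step-ahead point forecasts
```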
Definition Given time series data Xt where t is an integer index and the Xt are real numbers, an ${\text{ARMA}}(p',q)$ model is given by $X_{t}-\alpha _{1}X_{t-1}-\dots -\alpha _{p'}X_{t-p'}=\varepsilon _{t}+\theta _{1}\varepsilon _{t-1}+\cdots +\theta _{q}\varepsilon _{t-q},$ or equivalently by $\left(1-\sum _{i=1}^{p'}\alpha _{i}L^{i}\right)X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}\,$ where $L$ is the lag operator, the $\alpha _{i}$ are the parameters of the autoregressive part of the model, the $\theta _{i}$ are the parameters of the moving average part and the $\varepsilon _{t}$ are error terms. The error terms $\varepsilon _{t}$ are generally assumed to be independent, identically distributed variables sampled from a normal distribution with zero mean. Assume now that the polynomial $\textstyle \left(1-\sum _{i=1}^{p'}\alpha _{i}L^{i}\right)$ has a unit root (a factor $(1-L)$) of multiplicity d. Then it can be rewritten as: $\left(1-\sum _{i=1}^{p'}\alpha _{i}L^{i}\right)=\left(1-\sum _{i=1}^{p'-d}\varphi _{i}L^{i}\right)\left(1-L\right)^{d}.$ An ARIMA(p,d,q) process expresses this polynomial factorisation property with p=p'−d, and is given by: $\left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)(1-L)^{d}X_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}\,$ and thus can be thought of as a particular case of an ARMA(p+d,q) process having the autoregressive polynomial with d unit roots. (For this reason, no process that is accurately described by an ARIMA model with d > 0 is wide-sense stationary.) The above can be generalized as follows. $\left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)(1-L)^{d}X_{t}=\delta +\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}.\,$ This defines an ARIMA(p,d,q) process with drift ${\frac {\delta }{1-\sum \varphi _{i}}}$. Other special forms The explicit identification of the factorization of the autoregression polynomial into factors as above can be extended to other cases, firstly to apply to the moving average polynomial and secondly to include other special factors. For example, having a factor $(1-L^{s})$ in a model is one way of including a non-stationary seasonality of period s into the model; this factor has the effect of re-expressing the data as changes from s periods ago. Another example is the factor $\left(1-{\sqrt {3}}L+L^{2}\right)$, which includes a (non-stationary) seasonality of period 12. The effect of the first type of factor is to allow each season's value to drift separately over time, whereas with the second type values for adjacent seasons move together. Identification and specification of appropriate factors in an ARIMA model can be an important step in modeling as it can allow a reduction in the overall number of parameters to be estimated while allowing the imposition on the model of types of behavior that logic and experience suggest should be there. Differencing The properties of a stationary time series do not depend on the time at which the series is observed. Specifically, for a wide-sense stationary time series, the mean and the variance/autocovariance remain constant over time. Differencing in statistics is a transformation applied to a non-stationary time-series in order to make it stationary in the mean sense (viz., to remove the non-constant trend); it does not address non-stationarity of the variance or autocovariance. Likewise, seasonal differencing is applied to a seasonal time-series to remove the seasonal component.
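To make the definition above concrete, here is a small illustrative sketch (not from the article's sources) that simulates an ARIMA(1,1,1) process directly from the lag-polynomial form and verifies that one round of differencing recovers the stationary ARMA(1,1) component; the coefficient values are arbitrary.

```python
# Illustrative sketch only: simulate an ARIMA(1,1,1) process from the definition
# (1 - phi*L)(1 - L) X_t = (1 + theta*L) eps_t, then difference once to recover
# the stationary ARMA(1,1) part. Coefficient values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
phi, theta, n = 0.6, 0.4, 1000
eps = rng.normal(size=n)

# Stationary ARMA(1,1) part:  Y_t = phi * Y_{t-1} + eps_t + theta * eps_{t-1}
Y = np.zeros(n)
for t in range(1, n):
    Y[t] = phi * Y[t - 1] + eps[t] + theta * eps[t - 1]

# Integrate once (the "I" with d = 1):  X_t = X_{t-1} + Y_t
X = np.cumsum(Y)

# First differencing, (1 - L) X_t, recovers the ARMA(1,1) series exactly.
dX = np.diff(X)
assert np.allclose(dX, Y[1:])
print(dX.mean(), dX.var())   # roughly constant mean/variance, unlike the level series X
```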
From the perspective of signal processing, especially the Fourier spectral analysis theory, the trend is the low-frequency part in the spectrum of a non-stationary time series, while the season is the periodic-frequency part in the spectrum of it. Therefore, the differencing works as a high-pass (i.e., low-stop) filter and the seasonal-differencing as a comb filter to suppress the low-frequency trend and the periodic-frequency season in the spectrum domain (rather than directly in the time domain), respectively.[6] To difference the data, the difference between consecutive observations is computed. Mathematically, this is shown as $y_{t}'=y_{t}-y_{t-1}\,$ Differencing removes the changes in the level of a time series, eliminating trend and seasonality and consequently stabilizing the mean of the time series.[6] Sometimes it may be necessary to difference the data a second time to obtain a stationary time series, which is referred to as second-order differencing: ${\begin{aligned}y_{t}^{*}&=y_{t}'-y_{t-1}'\\&=(y_{t}-y_{t-1})-(y_{t-1}-y_{t-2})\\&=y_{t}-2y_{t-1}+y_{t-2}\end{aligned}}$ Another method of differencing data is seasonal differencing, which involves computing the difference between an observation and the corresponding observation in the previous season e.g a year. This is shown as: $y_{t}'=y_{t}-y_{t-m}\quad {\text{where }}m={\text{duration of season}}.$ The differenced data are then used for the estimation of an ARMA model. Examples Some well-known special cases arise naturally or are mathematically equivalent to other popular forecasting models. For example: • An ARIMA(0, 1, 0) model (or I(1) model) is given by $X_{t}=X_{t-1}+\varepsilon _{t}$ — which is simply a random walk. • An ARIMA(0, 1, 0) with a constant, given by $X_{t}=c+X_{t-1}+\varepsilon _{t}$ — which is a random walk with drift. • An ARIMA(0, 0, 0) model is a white noise model. • An ARIMA(0, 1, 2) model is a Damped Holt's model. • An ARIMA(0, 1, 1) model without constant is a basic exponential smoothing model.[9] • An ARIMA(0, 2, 2) model is given by $X_{t}=2X_{t-1}-X_{t-2}+(\alpha +\beta -2)\varepsilon _{t-1}+(1-\alpha )\varepsilon _{t-2}+\varepsilon _{t}$ — which is equivalent to Holt's linear method with additive errors, or double exponential smoothing.[9] Choosing the order The order p and q can be determined using the sample autocorrelation function (ACF), partial autocorrelation function (PACF), and/or extended autocorrelation function (EACF) method.[10] Other alternative methods include AIC, BIC, etc.[10] To determine the order of a non-seasonal ARIMA model, a useful criterion is the Akaike information criterion (AIC). It is written as ${\text{AIC}}=-2\log(L)+2(p+q+k),$ where L is the likelihood of the data, p is the order of the autoregressive part and q is the order of the moving average part. The k represents the intercept of the ARIMA model. For AIC, if k = 1 then there is an intercept in the ARIMA model (c ≠ 0) and if k = 0 then there is no intercept in the ARIMA model (c = 0). The corrected AIC for ARIMA models can be written as ${\text{AICc}}={\text{AIC}}+{\frac {2(p+q+k)(p+q+k+1)}{T-p-q-k-1}}.$ The Bayesian Information Criterion (BIC) can be written as ${\text{BIC}}={\text{AIC}}+((\log T)-2)(p+q+k).$ The objective is to minimize the AIC, AICc or BIC values for a good model. The lower the value of one of these criteria for a range of models being investigated, the better the model will suit the data. The AIC and the BIC are used for two completely different purposes. 
While the AIC tries to select the model that best approximates the underlying data-generating process, the BIC attempts to identify the exactly true model. The BIC approach is often criticized as there never is a perfect fit to real-life complex data; however, it is still a useful method for selection as it penalizes models more heavily for having more parameters than the AIC would. AICc can only be used to compare ARIMA models with the same orders of differencing. For ARIMAs with different orders of differencing, RMSE can be used for model comparison. Estimation of coefficients Forecasts using ARIMA models The ARIMA model can be viewed as a "cascade" of two models. The first is non-stationary: $Y_{t}=(1-L)^{d}X_{t}$ while the second is wide-sense stationary: $\left(1-\sum _{i=1}^{p}\varphi _{i}L^{i}\right)Y_{t}=\left(1+\sum _{i=1}^{q}\theta _{i}L^{i}\right)\varepsilon _{t}\,.$ Now forecasts can be made for the process $Y_{t}$, using a generalization of the method of autoregressive forecasting. Forecast intervals The forecast intervals (confidence intervals for forecasts) for ARIMA models are based on the assumptions that the residuals are uncorrelated and normally distributed. If either of these assumptions does not hold, then the forecast intervals may be incorrect. For this reason, researchers plot the ACF and histogram of the residuals to check the assumptions before producing forecast intervals. 95% forecast interval: ${\hat {y}}_{T+h\,\mid \,T}\pm 1.96{\sqrt {v_{T+h\,\mid \,T}}}$, where $v_{T+h\mid T}$ is the variance of $y_{T+h}\mid y_{1},\dots ,y_{T}$. For $h=1$, $v_{T+h\,\mid \,T}={\hat {\sigma }}^{2}$ for all ARIMA models regardless of parameters and orders. For ARIMA(0,0,q), $y_{t}=e_{t}+\sum _{i=1}^{q}\theta _{i}e_{t-i}.$ $v_{T+h\,\mid \,T}={\hat {\sigma }}^{2}\left[1+\sum _{i=1}^{h-1}\theta _{i}^{2}\right],{\text{ for }}h=2,3,\ldots $ In general, forecast intervals from ARIMA models will widen as the forecast horizon increases. Variations and extensions A number of variations on the ARIMA model are commonly employed. If multiple time series are used then the $X_{t}$ can be thought of as vectors and a VARIMA model may be appropriate. Sometimes a seasonal effect is suspected in the model; in that case, it is generally considered better to use a SARIMA (seasonal ARIMA) model than to increase the order of the AR or MA parts of the model.[11] If the time-series is suspected to exhibit long-range dependence, then the d parameter may be allowed to have non-integer values in an autoregressive fractionally integrated moving average model, which is also called a Fractional ARIMA (FARIMA or ARFIMA) model. Software implementations Various packages that apply methodology like Box–Jenkins parameter optimization are available to find the right parameters for the ARIMA model. • EViews: has extensive ARIMA and SARIMA capabilities. • Julia: contains an ARIMA implementation in the TimeModels package[12] • Mathematica: includes ARIMAProcess function. • MATLAB: the Econometrics Toolbox includes ARIMA models and regression with ARIMA errors • NCSS: includes several procedures for ARIMA fitting and forecasting.[13][14][15] • Python: the "statsmodels" package includes models for time series analysis – univariate time series analysis: AR, ARIMA – vector autoregressive models, VAR and structural VAR – descriptive statistics and process models for time series analysis. • R: the standard R stats package includes an arima function, which is documented in "ARIMA Modelling of Time Series".
Besides the ${\text{ARIMA}}(p,d,q)$ part, the function also includes seasonal factors, an intercept term, and exogenous variables (xreg, called "external regressors"). The package astsa has scripts such as sarima to estimate seasonal or nonseasonal models and sarima.sim to simulate from these models. The CRAN task view on Time Series is the reference with many more links. The "forecast" package in R can automatically select an ARIMA model for a given time series with the auto.arima() function and can also simulate seasonal and non-seasonal ARIMA models with its simulate.Arima() function.[16] • Ruby: the "statsample-timeseries" gem is used for time series analysis, including ARIMA models and Kalman Filtering. • JavaScript: the "arima" package includes models for time series analysis and forecasting (ARIMA, SARIMA, SARIMAX, AutoARIMA) • C: the "ctsa" package includes ARIMA, SARIMA, SARIMAX, AutoARIMA and multiple methods for time series analysis. • SAFE TOOLBOXES: includes ARIMA modelling and regression with ARIMA errors. • SAS: includes extensive ARIMA processing in its Econometric and Time Series Analysis system: SAS/ETS. • IBM SPSS: includes ARIMA modeling in the Professional and Premium editions of its Statistics package as well as its Modeler package. The default Expert Modeler feature evaluates a range of seasonal and non-seasonal autoregressive (p), integrated (d), and moving average (q) settings and seven exponential smoothing models. The Expert Modeler can also transform the target time-series data into its square root or natural log. The user also has the option to restrict the Expert Modeler to ARIMA models, or to manually enter ARIMA nonseasonal and seasonal p, d, and q settings without Expert Modeler. Automatic outlier detection is available for seven types of outliers, and the detected outliers will be accommodated in the time-series model if this feature is selected. • SAP: the APO-FCS package[17] in SAP ERP from SAP allows creation and fitting of ARIMA models using the Box–Jenkins methodology. • SQL Server Analysis Services: from Microsoft includes ARIMA as a Data Mining algorithm. • Stata includes ARIMA modelling (using its arima command) as of Stata 9. • StatSim: includes ARIMA models in the Forecast web app. • Teradata Vantage has the ARIMA function as part of its machine learning engine. • TOL (Time Oriented Language) is designed to model ARIMA models (including SARIMA, ARIMAX and DSARIMAX variants). • Scala: spark-timeseries library contains ARIMA implementation for Scala, Java and Python. Implementation is designed to run on Apache Spark. • PostgreSQL/MadLib: Time Series Analysis/ARIMA. • X-12-ARIMA: from the US Bureau of the Census See also • Autocorrelation • ARMA • Partial autocorrelation • Finite impulse response • Infinite impulse response References 1. For further information on Stationarity and Differencing see https://www.otexts.org/fpp/8/1 2. Hyndman, Rob J; Athanasopoulos, George. 8.9 Seasonal ARIMA models. Retrieved 19 May 2015. 3. Hamilton, James (1994). Time Series Analysis. Princeton University Press. ISBN 9780691042893. 4. Papoulis, Athanasios (2002). Probability, Random Variables, and Stochastic processes. Tata McGraw-Hill Education. 5. Triacca, Umberto (19 Feb 2021). "The Wold Decomposition Theorem" (PDF). Archived (PDF) from the original on 2016-03-27. 6. Wang, Shixiong; Li, Chongshou; Lim, Andrew (2019-12-18). "Why Are the ARIMA and SARIMA not Sufficient". 
arXiv:1904.07632 [stat.AP]. 7. Box, George E. P. (2015). Time Series Analysis: Forecasting and Control. WILEY. ISBN 978-1-118-67502-1. 8. "Notation for ARIMA Models". Time Series Forecasting System. SAS Institute. Retrieved 19 May 2015. 9. "Introduction to ARIMA models". people.duke.edu. Retrieved 2016-06-05. 10. Missouri State University. "Model Specification, Time Series Analysis" (PDF). 11. Swain, S; et al. (2018). "Development of an ARIMA Model for Monthly Rainfall Forecasting over Khordha District, Odisha, India". Recent Findings in Intelligent Computing Techniques. pp. 325–331. doi:10.1007/978-981-10-8636-6_34. ISBN 978-981-10-8635-9. 12. TimeModels.jl www.github.com 13. ARIMA in NCSS, 14. Automatic ARMA in NCSS, 15. Autocorrelations and Partial Autocorrelations in NCSS 16. 8.7 ARIMA modelling in R | OTexts. Retrieved 2016-05-12. 17. "Box Jenkins model". SAP. Retrieved 8 March 2013. Further reading • Asteriou, Dimitros; Hall, Stephen G. (2011). "ARIMA Models and the Box–Jenkins Methodology". Applied Econometrics (Second ed.). Palgrave MacMillan. pp. 265–286. ISBN 978-0-230-27182-1. • Mills, Terence C. (1990). Time Series Techniques for Economists. Cambridge University Press. ISBN 978-0-521-34339-8. • Percival, Donald B.; Walden, Andrew T. (1993). Spectral Analysis for Physical Applications. Cambridge University Press. ISBN 978-0-521-35532-2. • Shumway R.H. and Stoffer, D.S. (2017). Time Series Analysis and Its Applications: With R Examples. Springer. DOI: 10.1007/978-3-319-52452-8 • ARIMA Models in R. Become an expert in fitting ARIMA (autoregressive integrated moving average) models to time series data using R. External links • The US Census Bureau uses ARIMA for "seasonally adjusted" data (programs, docs, and papers here) • Lecture notes on ARIMA models (Robert Nau, Duke University)
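To make the differencing, order-selection and forecast-interval steps described above concrete, here is a minimal sketch using the "statsmodels" package mentioned in the software list. The synthetic series, the candidate orders and the variable names are illustrative assumptions rather than anything prescribed by the article.

```python
# Minimal ARIMA workflow sketch (assumes numpy and statsmodels are installed).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic random walk with drift, i.e. an ARIMA(0, 1, 0) process with a constant.
y = np.cumsum(0.5 + rng.normal(size=200))

# First-order differencing y'_t = y_t - y_{t-1} removes the changes in level.
dy = np.diff(y)

# Fit a few candidate orders and keep the one with the lowest AIC.
candidates = [(0, 1, 0), (1, 1, 0), (0, 1, 1), (1, 1, 1)]
fits = {order: ARIMA(y, order=order).fit() for order in candidates}
best_order = min(fits, key=lambda order: fits[order].aic)
print({order: round(fit.aic, 1) for order, fit in fits.items()})
print("selected order:", best_order)

# 95% forecast intervals; these widen as the horizon grows.
forecast = fits[best_order].get_forecast(steps=10)
print(forecast.summary_frame(alpha=0.05)[["mean", "mean_ci_lower", "mean_ci_upper"]])
```

The differenced series dy is shown only to mirror the formulas above; passing d = 1 inside the order makes the model difference internally, so the fit is run on the original series y.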
Rule of Sarrus In matrix theory, the Rule of Sarrus is a mnemonic device for computing the determinant of a $3\times 3$ matrix, named after the French mathematician Pierre Frédéric Sarrus.[1] Consider a $3\times 3$ matrix $M={\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}},$ then its determinant can be computed by the following scheme. Write out the first two columns of the matrix to the right of the third column, giving five columns in a row. Then add the products of the three diagonals running from top left to bottom right and subtract the products of the three diagonals running from bottom left to top right. This yields[1][2] ${\begin{aligned}\det(M)&=\det {\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}}\\[6pt]&=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{31}a_{22}a_{13}-a_{32}a_{23}a_{11}-a_{33}a_{21}a_{12}.\end{aligned}}$ A similar scheme based on diagonals works for $2\times 2$ matrices:[1] $\det(M)=\det {\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}}=a_{11}a_{22}-a_{21}a_{12}.$ Both are special cases of the Leibniz formula, which however does not yield similar memorization schemes for larger matrices. Sarrus' rule can also be derived using the Laplace expansion of a $3\times 3$ matrix.[1] Another way of thinking of Sarrus' rule is to imagine that the matrix is wrapped around a cylinder, such that the right and left edges are joined. References 1. Fischer, Gerd (1985). Analytische Geometrie (in German) (4th ed.). Wiesbaden: Vieweg. p. 145. ISBN 3-528-37235-4. 2. Cohn, Paul (1994). Elements of Linear Algebra. CRC Press. p. 69. ISBN 9780412552809. External links • Sarrus' rule at Planetmath • Linear Algebra: Rule of Sarrus of Determinants at khanacademy.org
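As a quick numerical check of the scheme above, the following short Python sketch (not part of the article; the matrix values are arbitrary) forms the three top-to-bottom diagonal products and the three bottom-to-top diagonal products and compares the result with a library determinant.

```python
# Sketch: Rule of Sarrus for a 3x3 matrix, checked against numpy's determinant.
import numpy as np

def det3_sarrus(m):
    """Sum the three top-to-bottom diagonal products and subtract the three
    bottom-to-top ones. Indices are taken modulo 3, which mimics appending
    the first two columns to the right of the matrix."""
    plus = sum(m[0][j] * m[1][(j + 1) % 3] * m[2][(j + 2) % 3] for j in range(3))
    minus = sum(m[2][j] * m[1][(j + 1) % 3] * m[0][(j + 2) % 3] for j in range(3))
    return plus - minus

M = [[2, 0, 1],
     [3, 5, 2],
     [1, 4, 6]]

print(det3_sarrus(M))            # Sarrus' rule: 51
print(round(np.linalg.det(M)))   # same value from the library routine
```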
Sarti surface In algebraic geometry, a Sarti surface is a degree-12 nodal surface with 600 nodes, found by Alessandra Sarti (2008). The maximal possible number of nodes of a degree-12 surface is not known (as of 2015), though Yoichi Miyaoka showed that it is at most 645. Sarti has also found sextic, octic and dodecic nodal surfaces with high numbers of nodes and high degrees of symmetry. • Sextic with 48 nodes • Sextic with 48 nodes • Octic with 72 nodes • Octic with 144 nodes • Dodecic surface with 360 nodes • 3D model of Sarti surface See also • Nodal surface References • Sarti, Alessandra (1 December 2001). "Pencils of Symmetric Surfaces in P3". Journal of Algebra. 246 (1): 429–452. arXiv:math/0106080. doi:10.1006/jabr.2001.8953. ISSN 0021-8693. S2CID 17214934. • Sarti, Alessandra (2008), "Symmetrische Flächen mit gewöhnlichen Doppelpunkten", Mathematische Semesterberichte, 55 (1): 1–5, doi:10.1007/s00591-007-0030-2, ISSN 0720-728X, MR 2379658, S2CID 122576773 • Miyaoka, Yoichi (1984), "The maximal number of quotient singularities on surfaces with given numerical invariants", Mathematische Annalen, 268 (2): 159–171, doi:10.1007/bf01456083, MR 0744605, S2CID 121817163 External links • "Sarti surfaces". • Weisstein, Eric W. "Sarti Dodecic". MathWorld.
Sasakian manifold In differential geometry, a Sasakian manifold (named after Shigeo Sasaki) is a contact manifold $(M,\theta )$ equipped with a special kind of Riemannian metric $g$, called a Sasakian metric. Definition A Sasakian metric is defined using the construction of the Riemannian cone. Given a Riemannian manifold $(M,g)$, its Riemannian cone is the product $(M\times {\mathbb {R} }^{>0})\,$ of $M$ with a half-line ${\mathbb {R} }^{>0}$, equipped with the cone metric $t^{2}g+dt^{2},\,$ where $t$ is the parameter in ${\mathbb {R} }^{>0}$. A manifold $M$ equipped with a 1-form $\theta $ is contact if and only if the 2-form $t^{2}\,d\theta +2t\,dt\cdot \theta \,$ on its cone is symplectic (this is one of the possible definitions of a contact structure). A contact Riemannian manifold is Sasakian, if its Riemannian cone with the cone metric is a Kähler manifold with Kähler form $t^{2}\,d\theta +2t\,dt\cdot \theta .$ Examples As an example consider $S^{2n-1}\hookrightarrow {\mathbb {R} }^{2n,*}={\mathbb {C} }^{n,*}$ where the right hand side is a natural Kähler manifold and read as the cone over the sphere (endowed with embedded metric). The contact 1-form on $S^{2n-1}$ is the form associated to the tangent vector $i{\vec {N}}$, constructed from the unit-normal vector ${\vec {N}}$ to the sphere ($i$ being the complex structure on ${\mathbb {C} }^{n}$). Another non-compact example is ${{\mathbb {R} }^{2n+1}}$ with coordinates $({\vec {x}},{\vec {y}},z)$ endowed with contact-form $\theta ={\frac {1}{2}}dz+\sum _{i}y_{i}\,dx_{i}$ and the Riemannian metric $g=\sum _{i}(dx_{i})^{2}+(dy_{i})^{2}+\theta ^{2}.$ As a third example consider: ${\mathbb {P} }^{2n-1}{\mathbb {R} }\hookrightarrow {\mathbb {C} }^{n,*}/{\mathbb {Z} }_{2}$ where the right hand side has a natural Kähler structure, and the group ${\mathbb {Z} }_{2}$ acts by reflection at the origin. History Sasakian manifolds were introduced in 1960 by the Japanese geometer Shigeo Sasaki.[1] There was not much activity in this field after the mid-1970s, until the advent of String theory. Since then Sasakian manifolds have gained prominence in physics and algebraic geometry, mostly due to a string of papers by Charles P. Boyer and Krzysztof Galicki and their co-authors. The Reeb vector field The homothetic vector field on the cone over a Sasakian manifold is defined to be $t\partial /\partial t.$ As the cone is by definition Kähler, there exists a complex structure J. The Reeb vector field on the Sasaskian manifold is defined to be $\xi =-J(t\partial /\partial t).$ It is nowhere vanishing. It commutes with all holomorphic Killing vectors on the cone and in particular with all isometries of the Sasakian manifold. If the orbits of the vector field close then the space of orbits is a Kähler orbifold. The Reeb vector field at the Sasakian manifold at unit radius is a unit vector field and tangential to the embedding. Sasaki–Einstein manifolds A Sasakian manifold $M$ is a manifold whose Riemannian cone is Kähler. If, in addition, this cone is Ricci-flat, $M$ is called Sasaki–Einstein; if it is hyperkähler, $M$ is called 3-Sasakian. Any 3-Sasakian manifold is both an Einstein manifold and a spin manifold. If M is positive-scalar-curvature Kahler–Einstein manifold, then, by an observation of Shoshichi Kobayashi, the circle bundle S in its canonical line bundle admits a Sasaki–Einstein metric, in a manner that makes the projection from S to M into a Riemannian submersion. 
(For example, it follows that there exist Sasaki–Einstein metrics on suitable circle bundles over the 3rd through 8th del Pezzo surfaces.) While this Riemannian submersion construction provides a correct local picture of any Sasaki–Einstein manifold, the global structure of such manifolds can be more complicated. For example, one can more generally construct Sasaki–Einstein manifolds by starting from a Kahler–Einstein orbifold M. Using this observation, Boyer, Galicki, and János Kollár constructed infinitely many homeotypes of Sasaki-Einstein 5-manifolds. The same construction shows that the moduli space of Einstein metrics on the 5-sphere has at least several hundred connected components. Notes 1. "Sasaki biography". References • Shigeo Sasaki, "On differentiable manifolds with certain structures which are closely related to almost contact structure", Tohoku Math. J. 2 (1960), 459-476. • Charles P. Boyer, Krzysztof Galicki, Sasakian geometry • Charles P. Boyer, Krzysztof Galicki, "3-Sasakian Manifolds", Surveys Diff. Geom. 7 (1999) 123-184 • Dario Martelli, James Sparks and Shing-Tung Yau, "Sasaki-Einstein Manifolds and Volume Minimization", ArXiv hep-th/0603021 External links • EoM page, Sasakian manifold
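As an added worked check, not part of the original article, consider the second example above with $n=1$, so that $\theta ={\tfrac {1}{2}}dz+y\,dx$ on ${\mathbb {R} }^{3}$ with coordinates $(x,y,z)$. Then $d\theta =dy\wedge dx$, and $\theta \wedge d\theta =\left({\tfrac {1}{2}}dz+y\,dx\right)\wedge dy\wedge dx={\tfrac {1}{2}}\,dz\wedge dy\wedge dx\neq 0$, since the term $y\,dx\wedge dy\wedge dx$ vanishes. Hence $\theta \wedge d\theta $ is a volume form, so $\theta $ is indeed a contact form, and the Sasakian structure is completed by the metric $g$ given in that example.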
Satake isomorphism In mathematics, the Satake isomorphism, introduced by Ichirō Satake (1963), identifies the Hecke algebra of a reductive group over a local field with a ring of invariants of the Weyl group. The geometric Satake equivalence is a geometric version of the Satake isomorphism, proved by Ivan Mirković and Kari Vilonen (2007). Statement Classical Satake isomorphism. Let $G$ be a semisimple algebraic group, $K$ be a non-Archimedean local field and $O$ be its ring of integers. It's easy to see that $Gr=G(K)/G(O)$ is grassmannian. For simplicity, we can think that $K=\mathbb {Z} /p\mathbb {Z} ((x))$ and $O=\mathbb {Z} /p\mathbb {Z} [[x]]$, $p$ a prime number; in this case, $Gr$ is a infinite dimensional algebraic variety (Ginzburg 2000). One denotes the category of all compactly supported spherical functions on $G(K)$ biinvariant under the action of $G(O)$ as $\mathbb {C} _{c}[G(O)\backslash G(K)/G(O)]$, $\mathbb {C} $ the field of complex numbers, which is a Hecke algebra and can be also treated as a group scheme over $\mathbb {C} $. Let $T(\mathbb {C} )$ be the maximal torus of $G(\mathbb {C} )$, $W$ be the Weyl group of $G$. one can associate a cocharacter variety $\mathbb {X} _{*}(T(\mathbb {C} ))$ to $T(\mathbb {C} )$. Let $X_{*}(T(\mathbb {C} ))$ be the set of all cocharacters of $T(\mathbb {C} )$, i.e. $X_{*}(T(\mathbb {C} ))=\mathrm {Hom} (\mathbb {C} ^{*},T(\mathbb {C} ))$. The cocharacter variety $\mathbb {X} _{*}(T(\mathbb {C} ))$ is basically the group scheme created by adding the elements of $X_{*}(T(\mathbb {C} ))$ as variables to $\mathbb {C} $, i.e. $\mathbb {X} _{*}(T(\mathbb {C} ))=\mathbb {C} [X_{*}(T(\mathbb {C} ))]$. There is a natural action of $W$ on the cocharacter variety $\mathbb {X} _{*}(T(\mathbb {C} ))$, induced by the natural action of $W$ on $T$. Then the Satake isomorphism is a algebra isomorphism from the category of spherical functions to the $W$-invariant part of the aforementioned cocharacter variety. In formulas: $\mathbb {C} _{c}[G(O)\backslash G(K)/G(O)]\quad \xrightarrow {\sim } \quad \mathbb {X} _{*}(T(\mathbb {C} ))^{W}$. Geometric Satake isomorphism. As Ginzburg said (Ginzburg 2000), "geometric" stands for sheaf theoretic. In order to obtain the geometric version of Satake isomorphism, one has to change the left part of the isomorphism, using Grothendieck group of the category of perverse sheaves on $Gr$ to replace the category of spherical functions; the replacement is de facto an algebra isomorphism over $\mathbb {C} $ (Ginzburg 2000). One has also to replace the right hand side of the isomorphism by the Grothendieck group of finite dimensional complex representations of the Langlands dual ${}^{L}G$ of $G$; the replacement is also an algebra isomorphism over $\mathbb {C} $ (Ginzburg 2000). Let $\mathrm {Perv} (Gr)$ denote the category of perverse sheaf on $Gr$. Then, the geometric Satake isomorphism is $K(\mathrm {Perv} (Gr))\otimes _{\mathbb {Z} }\mathbb {C} \quad \xrightarrow {\sim } \quad K(\mathrm {Rep} ({}^{L}G))\otimes _{\mathbb {Z} }\mathbb {C} $, where the $K$ in $K(\mathrm {Rep} ({}^{L}G))$ stands for the Grothendieck group. This can be obviously simplified to $\mathrm {Perv} (Gr)\quad \xrightarrow {\sim } \quad \mathrm {Rep} ({}^{L}G)$, which is a fortiori an equivalence of Tannakian categories (Ginzburg 2000). Notes References • Gross, Benedict H. (1998), "On the Satake isomorphism", Galois representations in arithmetic algebraic geometry (Durham, 1996), London Math. Soc. Lecture Note Ser., vol. 
254, Cambridge University Press, pp. 223–237, doi:10.1017/CBO9780511662010.006, ISBN 9780521644198, MR 1696481 • Mirković, Ivan; Vilonen, Kari (2007), "Geometric Langlands duality and representations of algebraic groups over commutative rings", Annals of Mathematics, Second Series, 166 (1): 95–143, arXiv:math/0401222, doi:10.4007/annals.2007.166.95, ISSN 0003-486X, MR 2342692, S2CID 14127684 • Satake, Ichirō (1963), "Theory of spherical functions on reductive algebraic groups over p-adic fields", Publications Mathématiques de l'IHÉS, 18 (18): 5–69, doi:10.1007/BF02684781, ISSN 1618-1913, MR 0195863, S2CID 4666554 • Ginzburg, Victor (2000). "Perverse sheaves on a loop group and Langlands' duality". arXiv:alg-geom/9511007.
Numan Yunusovich Satimov Numan Yunusovich Satimov (Russian: Нуман Юнусович Сатимов) (15 December 1939 – 22 September 2006) was a Soviet and Uzbek mathematician, Doktor Nauk in Physical and Mathematical Sciences, academician of the Academy of Sciences of Uzbekistan (2000), and corresponding member of the Academy of Sciences of UzSSR from 1979 to 2006, and a laureate of the Biruni State Prize (1985). He was a specialist in the theory of differential equations, control theory and their applications.[1][2][3] Numan Yunusovich Satimov Нуман Юнусович Сатимов Born(1939-12-15)15 December 1939 Andijan, Uzbek SSR Died22 September 2006(2006-09-22) (aged 66) Tashkent, Uzbekistan Citizenship Soviet Union →  Uzbekistan Alma materMSU named after M. V. Lomonosov Scientific career FieldsDifferential equation, Control theory Institutions National University of Uzbekistan named after Mirzo Ulugbek, The Uzbek Academy of Sciences' Romanovsky Institute of Mathematics Doctoral advisorE. F. Mischenko Other academic advisorsV. G. Boltyansky Biography Satimov was born on 15 December 1939[4] in the city of Andijan in a working-class family. In 1956, he was accepted to the Central Asian State University at the Faculty of Physics and Mathematics. In 1958, Satimov continued his studies at the Moscow State University named after M. V. Lomonosov at the Faculty of Mechanics and Mathematics. After graduating from the university in 1962, he entered graduate school at the Uzbek Academy of Sciences' Romanovsky Institute[5] of Mathematics, where he worked as a junior research fellow from 1965 to 1968.[6] In 1968, under the guidance of Professor V. G. Boltyansky, Satimov defended his thesis. In 1977, under the guidance of Professor E. F. Mischenko, he defended his doctoral dissertation (at the specialized council of Steklov Institute of Mathematics). In 1978, he was awarded the title of professor. In 1979, he became a corresponding member of the Academy of Sciences of the Uzbek SSR, in 2000 – academician of the Academy of Sciences of the Republic of Uzbekistan. Since 1968, Satimov worked in Tashkent State University. In 1971, he became the head of a department at the Faculty of Applied Mathematics and Mechanics of NUUz. From 1974 to 1976, Satimov worked as a senior research fellow at Steklov Institute of Mathematics. From 1985 to 1987 he served as the dean of the Faculty of Applied Mathematics and Mechanics. Since 2000, he was a leading researcher at the Uzbek Academy of Sciences' Romanovsky Institute of Mathematics.[7] Satimov died on 22 September 2006. He was buried at the Chagatai cemetery in Tashkent. Scientific interests Satimov's primary research interest included the theory of differential equations, control theory and their applications. He founded the Tashkent Scientific School on the theory of controls and differential games. He led the research seminar “Optimal processes and differential games” for over 35 years. Moreover, Satimov is the author of a textbook on differential equations and two monographs.[8][9] He published more than 180 scientific papers; most of which have been translated and published in US and UK journals. Under his guidance, eight doctoral and more than twenty master's theses were prepared.[10] Since 1970, Satimov worked on a new section of the theory of controlled processes – the theory of differential pursuit–evasion games. He paid particular attention to the development of L. S. Pontryagin's methods. 
As a result, Satimov proposed and later developed the so-called third (modified) method for solving the problem of persecution. Bibliography • Задача об уклонении от встреч в дифференциальных играх с нелинейными управлениями // Дифференц. уравнения, 1973 г., Т. 9, No. 10, С. 1792—1797 (совместно с Е. Ф. Мищенко).[11] • Н. Сатимов, “К задаче убегания в дифференциальных играх с нелинейными управлениями”, Докл. АН СССР, 216:4 (1974), 744–747.[12] • N. Satimov, On the pursuit problem relative to position in differential games, Dokl. Akad. Nauk SSSR, 1976, Volume 229, Number 4, 808–811.[13] • Н. Ю. Сатимов, А. З. Фазылов, А. А. Хамдамов, “О задачах преследования и уклонения в дифференциальных и дискретных играх многих лиц с интегральными ограничениями”, Дифференц. уравнения, 20:8 (1984), 1388–1396.[14] • Н. Сатимов "Избежание столкновений в линейных системах с интегральными ограничениями" // Сердика, Болгарска, 1989 г., т. 15 (совместно с , А. З. Фазыловым).[15] • N. Yu. Satimov, M. Tukhtasinov, Game problems on a fixed interval in controlled first-order evolution equations, Mathematical Notes, 2006, Volume 80, Issue 3–4, pp 587–589.[16] • N. Yu. Satimov, M. Tukhtasimov, Evasion in a certain class of distributed control systems, Mathematical Notes, May 2015, Volume 97, Issue 5–6, pp 764–773.[17] • К оценке некоторых областей целочисленных точек. В кн.: Математические методы распознавания образов: Доклады 10-й Всероссийской конференции, Москва, 2001, РАН Вычислительный центр при поддержке Российского Фонда Фундаментальных Исследований, С. 125—126 (совместно с Б. Б. Акбаралиевым)[18] Notes 1. "Satimov, N. Yu". zbMath. 2. Borodin, Alexey; Bugay, Arkadiy (1987). Выдающиеся математики [Outstanding mathematicians] (in Russian). Kyiv: Ряданьска Школа. p. 461. 3. "Ранее состоявшиеся члены Академии Наук РУз". Academy of Science of the Republic of Uzbekistan (in Russian). Retrieved October 1, 2019. 4. "Персоналии: Сатимов Нуман Юнусович" (in Russian). www.mathnet.ru. Retrieved October 1, 2019. 5. "Uzbekistan Academy of Sciences V. I. Romanovsky Institute of Mathematics". Retrieved October 3, 2019. 6. "Академик Н. Ю. Сатимов" [Academician N. Yu. Satimov]. Узбекский Математический Журнал (in Russian). 10: 108. 2006. 7. "Академик Н. Ю. Сатимов" [Academician N. Yu. Satimov]. Узбекский Математический Журнал (in Russian). 10: 108. 2006. 8. Satimov, Numan; Rikhsiyev, Badir (2000). Azamov, Abdulla (ed.). Методы решения задачи уклонения от встречи в математической теории управления [Methods for solving the avoidance problem in mathematical control theory] (in Russian). Tashkent: "ФАН" АН РУз. p. 176. ISBN 5-648-02716-8. 9. Satimov, Numan (1987). Управляемые динамические системы и их приложения [Managed Dynamic Systems and Their Applications] (in Russian). Tashkent: Tashkent State University. p. 104. 10. "Академик Н. Ю. Сатимов" [Academician N. Yu. Satimov]. Узбекский Математический Журнал (in Russian). 10: 108. 2006. 11. Satimov, Numan; Mishchenko, Evgeniy (1973). "Задача об уклонении от встречи в дифференциальных играх с нелинейными управлениями" [The avoidance problem in differential games with nonlinear controls] (PDF). Дифференциальные уравнения (in Russian). 9 (10): 1792–1797 – via MathNet. 12. Satimov, Numan (1974). "К задаче убегания в дифференциальных играх с нелинейными управлениями" [On the runaway problem in differential games with nonlinear controls] (PDF). Доклады Академии наук СССР (in Russian). 26 (4): 744–747 – via MathNet. 13. Satimov, Numan (1976). 
"On the pursuit problem relative to position in differential games" (PDF). Dokl. Akad. Nauk SSSR (in Russian). 229: 808–811 – via MathNet. 14. Satimov, Numan (1984). "О задачах преследования и уклонения в дифференциальных и дискретных играх многих лиц с интегральными ограничениями" [On the pursuit and evasion problems in differential and discrete games of many people with integral constraints] (PDF). Дифференц. уравнения (in Russian). 20: 1388–1396 – via MathNet. 15. Satimov, Numan (1989). "Избежание столкновений в линейных системах с интегральными ограничениями" [Collision avoidance in linear systems with integral constraints] (PDF). Serdica Bulgariacae Mathematicae Publicationes (in Russian). 15: 223–231 – via math.bas.bg. 16. Satimov, Numan; Tukhtasinov, Muminjon (2006). "Game problems on a fixed interval in controlled first-order evolution equations". Mathematical Notes. 80 (3–4): 578–589. doi:10.1007/s11006-006-0177-5. 17. Satimov, Numan; Tukhtasimov, Muminjon (2015). "Evasion in a certain class of distributed control systems". Mathematical Notes. 97 (5–6): 764–773. doi:10.1134/S0001434615050119. 18. Satimov, Numan (2001). "К оценке некоторых областей целочисленных точек" [To the estimation of some areas of integer points]. Математические методы распознавания образов: Доклады 10-й Всероссийской конференции [Mathematical Methods for Pattern Recognition: Reports of the 10th All-Russian Conference] (PDF) (in Russian). Moscow: АЛЕВ-В. p. 342.
Satisfaction equilibrium In game theory, a satisfaction equilibrium is a solution concept for a class of non-cooperative games, namely games in satisfaction form. Games in satisfaction form model situations in which players aim at satisfying a given individual constraint, e.g., a performance metric must be smaller or bigger than a given threshold. When a player satisfies its own constraint, the player is said to be satisfied. A satisfaction equilibrium, if it exists, arises when all players in the game are satisfied. History The term satisfaction equilibrium (SE) was first used to refer to the stable point of a dynamic interaction between players that are learning an equilibrium by taking actions and observing their own payoffs. The equilibrium relies on the satisfaction principle, which stipulates that an agent that is satisfied with its current payoff does not change its current action.[1] Later, the notion of satisfaction equilibrium was introduced as a solution concept for games in satisfaction form.[2] This solution concept was introduced in the realm of electrical engineering for the analysis of quality of service (QoS) in wireless ad hoc networks. In this context, radio devices (network components) are modelled as players that decide upon their own operating configurations in order to satisfy some targeted QoS. Games in satisfaction form and the notion of satisfaction equilibrium have been used in the context of the fifth generation of cellular communications (5G) for tackling the problems of energy efficiency,[3] spectrum sharing[4] and transmit power control.[5][6] In the smart grid, games in satisfaction form have been used for modelling the problem of data injection attacks.[7] Games in Satisfaction Form In static games of complete, perfect information, a satisfaction-form representation of a game is a specification of the set of players, the players' action sets and their preferences. The preferences for a given player are determined by a mapping, often referred to as the preference mapping, from the Cartesian product of all the other players' action sets to the given player's power set of actions. That is, given the actions adopted by all the other players, the preference mapping determines the subset of actions with which the player is satisfied. Definition [Games in Satisfaction Form[2]] A game in satisfaction form is described by a tuple $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace f_{k}\right\rbrace _{k\in {\mathcal {K}}}\right),$ where the set ${\mathcal {K}}=\lbrace 1,\ldots ,K\rbrace \subset \mathrm {N} $, with $0<K<+\infty $, represents the set of players, and the set ${\mathcal {A}}_{k}$, with $k\in {\mathcal {K}}$ and $0<|{\mathcal {A}}_{k}|<+\infty $, represents the set of actions that player $k$ can play. The preference mapping $f_{k}:{\mathcal {A}}_{1}\times \ldots \times {\mathcal {A}}_{k-1}\times {\mathcal {A}}_{k+1}\times \ldots \times {\mathcal {A}}_{K}\rightarrow 2^{{\mathcal {A}}_{k}}$ determines the set of actions with which player $k$ is satisfied given the actions played by all the other players. The set $2^{{\mathcal {A}}_{k}}$ is the power set of ${\mathcal {A}}_{k}$. 
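To illustrate the definition, the following sketch (a toy two-player game invented for illustration, not taken from the references) encodes the preference mappings directly and enumerates the action profiles in which both players are satisfied, i.e. the pure-strategy satisfaction equilibria defined in the next section.

```python
# Sketch: brute-force search for pure-strategy satisfaction equilibria in a
# two-player game in satisfaction form. The game itself is a toy example.
from itertools import product

actions = {1: ["a", "b"], 2: ["x", "y"]}

# Preference mappings f_k: given the other player's action, return the set of
# actions with which player k is satisfied.
def f1(a2):
    return {"a"} if a2 == "x" else {"a", "b"}

def f2(a1):
    return {"x"} if a1 == "a" else set()

def satisfaction_equilibria():
    """Return every profile (a1, a2) with a1 in f1(a2) and a2 in f2(a1)."""
    return [(a1, a2)
            for a1, a2 in product(actions[1], actions[2])
            if a1 in f1(a2) and a2 in f2(a1)]

print(satisfaction_equilibria())   # [('a', 'x')] for this toy game
```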
In contrast to other existing game formulations, e.g., normal form and normal form with constrained action sets,[8] the notion of performance optimization, i.e., utility maximization or cost minimization, is not present. Games in satisfaction-form model the case in which players adopt their actions aiming to satisfy a specific individual constraint given the actions adopted by all the other players. An important remark is that, players are assumed to be careless of whether other players can satisfy or not their individual constraints. Satisfaction Equilibrium An action profile is a tuple ${\boldsymbol {a}}=\left(a_{1},\ldots ,a_{K}\right)\in {\mathcal {A}}_{1}\times \ldots \times {\mathcal {A}}_{K}$. The action profile in which all players are satisfied is an equilibrium of the corresponding game in satisfaction form. At a satisfaction equilibrium, players do not exhibit a particular interest in changing its current action. Definition [Satisfaction Equilibrium in Pure Strategies[2]] The action profile ${\boldsymbol {a}}$ is a satisfaction equilibrium in pure strategies for the game $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace f_{k}\right\rbrace _{k\in {\mathcal {K}}}\right),$ if for all $k\in {\mathcal {K}}$, $a_{k}\in f_{k}\left(a_{1},\ldots ,a_{k-1},a_{k+1},\ldots ,a_{K}\right)$. Satisfaction Equilibrium in Mixed Strategies For all $k\in {\mathcal {K}}$, denote the set of all possible probability distributions over the set ${\mathcal {A}}_{k}=\lbrace A_{k,1},A_{k,2},\ldots ,A_{k,N_{k}}\rbrace $ by $\triangle \left({\mathcal {A}}_{k}\right)$, with $N_{k}=|{\mathcal {A}}_{k}|$. Denote by ${\boldsymbol {\pi }}_{k}=\left(\pi _{k,1},\pi _{k,2},\ldots ,\pi _{k,N_{k}}\right)$ the probability distribution (mixed strategy) adopted by player $k$ to choose its actions. For all $j\in \lbrace 1,\ldots ,N_{k}\rbrace $, $\pi _{k,j}$ represents the probability with which player $k$ chooses action $A_{k,j}\in {\mathcal {A}}_{k}$. The notation ${\boldsymbol {\pi }}_{-k}$ represents the mixed strategies of all players except that of player $k$. Definition [Extension to Mixed Strategies of the Satisfaction Form [2]] The extension in mixed strategies of the game $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace f_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$ is described by the tuple $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace {\bar {f}}_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$, where the correspondence ${\bar {f}}_{k}:\prod _{j\in {\mathcal {K}}\setminus \lbrace k\rbrace }\triangle \left({\mathcal {A}}_{j}\right)\rightarrow 2^{\triangle \left({\mathcal {A}}_{k}\right)}$ determines the set of all possible probability distributions that allow player $k$ to choose an action that satisfies its individual conditions with probability one, that is, ${\bar {f}}_{k}\left({\boldsymbol {\pi }}_{-k}\right)=\left\lbrace {\boldsymbol {\pi }}_{k}\in \triangle \left({\mathcal {A}}_{k}\right):\mathrm {Pr} \left(a_{k}\in f_{k}\left({\boldsymbol {a}}_{-k}\right)|a_{k}\sim {\boldsymbol {\pi }}_{k},{\boldsymbol {a}}_{-k}\sim {\boldsymbol {\pi }}_{-k}\right)=1\right\rbrace .$ A satisfaction equilibrium in mixed strategies is defined as follows. 
Definition [Satisfaction Equilibrium in Mixed Strategies[2]] The mixed strategy profile ${\boldsymbol {\pi }}^{*}\in \triangle \left({\mathcal {A}}_{1}\right)\times \ldots \times \triangle \left({\mathcal {A}}_{K}\right)$ is an SE in mixed strategies if for all $k\in {\mathcal {K}}$, ${\boldsymbol {\pi }}_{k}^{*}\in {\bar {f}}_{k}\left({\boldsymbol {\pi }}_{-k}^{*}\right)$. Let the $j$-th action of player $k$, i.e., $A_{k,j}$, be associated with the unitary vector ${\boldsymbol {e}}_{j}=\left(e_{1},e_{2}\ldots ,e_{N_{k}}\right)\in \mathrm {R} ^{N_{k}}$, where, all the components are zero except its $j$-th component, which is equal to one. The vector ${\boldsymbol {e}}_{j}$ represents a degenerated probability distribution, where the action $A_{k,j}$ is deterministically chosen. Using this argument, it becomes clear that every satisfaction equilibrium in pure strategies of the game $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace f_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$ is also a satisfaction equilibrium in mixed strategies of the game $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace {\bar {f}}_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$. At an SE of the game $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace f_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$, players choose their actions following a probability distribution such that only action profiles that allow all players to simultaneously satisfy their individual conditions with probability one are played with positive probability. Hence, in the case in which one SE in pure strategies does not exist, then, it does not exist a SE in mixed strategies in the game $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace {\bar {f}}_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$. ε-Satisfaction Equilibrium Under certain conditions, it is always possible to build mixed strategies that allow players to be satisfied with probability $1-\epsilon $, for some $\epsilon >0$. This observation leads to the definition of a solution concept known as $\epsilon $-satisfaction equilibrium ($\epsilon $-SE). Definition: [ε-Satisfaction Equilibrium[2]] Let $\epsilon $ satisfy $\epsilon \in \left]0,1\right]$. 
The mixed strategy profile ${\boldsymbol {\pi }}^{*}\in \triangle \left({\mathcal {A}}_{1}\right)\times \triangle \left({\mathcal {A}}_{2}\right)\times \ldots \times \triangle \left({\mathcal {A}}_{K}\right)$ is an epsilon-satisfaction equilibrium ($\epsilon $-SE) of the game $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace {\bar {f}}_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$, if for all $k\in {\mathcal {K}}$, it follows that ${\boldsymbol {\pi }}_{k}^{*}\in {\bar {\bar {f}}}_{k}\left({\boldsymbol {\pi }}_{-k}^{*}\right)$, where ${\bar {\bar {f}}}_{k}\left({\boldsymbol {\pi }}_{-k}^{*}\right)=\left\lbrace {\boldsymbol {\pi }}_{k}\in \triangle \left({\mathcal {A}}_{k}\right):\mathrm {Pr} \left(a_{k}\in f_{k}\left({\boldsymbol {a}}_{-k}\right)|a_{k}\sim {\boldsymbol {\pi }}_{k},{\boldsymbol {a}}_{-k}\sim {\boldsymbol {\pi }}_{-k}^{*}\right)\geqslant 1-\epsilon \right\rbrace .$ From the definition above, it can be implied that if the mixed strategy profile ${\boldsymbol {\pi }}^{*}$ is an $\epsilon $-SE, it holds that, $\mathrm {Pr} \left(a_{k}\in f_{k}\left({\boldsymbol {a}}_{-k}\right)|a_{k}\sim {\boldsymbol {\pi }}_{k}^{*},{\boldsymbol {a}}_{-k}\sim {\boldsymbol {\pi }}_{-k}^{*}\right)\geqslant 1-\epsilon .$ That is, players are unsatisfied with probability $\epsilon $. The relevance of the $\epsilon $-SE is that it models the fact that players can be tolerant a certain unsatisfaction level. At a given $\epsilon $-SE, none of the players is interested in changing its mixed strategy profile as long as it is satisfied with a probability higher than or equal to $1-\epsilon $, for some $\epsilon >0$. In contrast to the conditions for the existence of a SE in either pure or mixed strategies, the conditions for the existence of an $\epsilon $-SE are mild. Proposition [Existence of an $\epsilon $-SE[2]] Let $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace f_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$, be a finite game in satisfaction form. Then, if for all $k\in {\mathcal {K}}$, there always exists an action profile ${\boldsymbol {a}}\in {\mathcal {A}}$ such that $a_{k}\in f_{k}\left({\boldsymbol {a}}_{-k}\right)$, then there always exists a strategy profile ${\boldsymbol {\pi }}^{*}\in \triangle \left({\mathcal {A}}_{1}\right)\times \triangle \left({\mathcal {A}}_{2}\right)\times \ldots \times \triangle \left({\mathcal {A}}_{K}\right)$ and a real $\epsilon $, with $1>\epsilon >0$, such that, ${\boldsymbol {\pi }}^{\star }$ is an $\epsilon $-SE. Equilibrium Selection Games in satisfaction form might exhibit several satisfaction equilibria. In such a case, players might associate to each of their own actions a value representing the effort or cost to play such action. From this perspective, if several SEs exist, players might prefer the one that requires the lowest (global or individual) effort or cost. To model this preference, games in satisfaction form might be equipped with cost functions for each of the players. For all $k\in {\mathcal {K}}$, let the function $c_{k}:{\mathcal {A}}_{k}\rightarrow \left[0,1\right]$ determine the effort or cost paid by player $k$ for using each of its actions. 
More specifically, given a pair of actions $(a_{k},a_{k}')\in {\mathcal {A}}_{k}^{2}$, the action $a_{k}$ is preferred against $a_{k}'$ by player $k$ if $c_{k}\left(a_{k}\right)<c_{k}\left(a_{k}'\right),$ Note that this preference for player $k$ is independent of the actions adopted by all the other players. Definition: [Efficient Satisfaction Equilibrium (ESE)] Let ${\mathcal {S}}$ be the set of satisfaction equilibria in pure strategies of the game in satisfaction form $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace f_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$. The strategy profile ${\boldsymbol {a}}^{\star }=\left(a_{1}^{\star },a_{2}^{\star },\ldots ,a_{K}^{\star }\right)\in {\mathcal {A}}$ is an efficient satisfaction equilibrium if for all ${\boldsymbol {a}}\in {\mathcal {A}}$, it follows that $\sum _{k=1}^{K}c_{k}\left(a_{k}^{\star }\right)\leqslant \sum _{k=1}^{K}c_{k}\left(a_{k}\right)$. In the trivial case in which for all $k\in {\mathcal {K}}$ the function $c_{k}$ is a constant function, the set of ESE and the set of SE are identical. This highlights the relevance of the ability of players to differentiate the effort of playing one action or another in order to select one (satisfaction) equilibrium among all the existing equilibria. In games in satisfaction form with nonempty sets of satisfaction equilibria, when all players assign different costs to its actions, i.e., for all $k\in {\mathcal {K}}$ and for all $(a,a')\in {\mathcal {A}}_{k}\times {\mathcal {A}}_{k}$, it holds that $c_{k}(a)\neq c_{k}(a')$, there always exists an ESE. Nonetheless, it is not necessarily unique, which implies that there still exists room for other equilibrium refinements beyond the notion of individual cost functions. [5] [6] Generalizations Games in satisfaction form for which it does not exists an action profile in which all players are satisfied are said not to possess a satisfaction equilibrium. In this case, an action profile induces a partition of the set ${\mathcal {K}}$ formed by the sets ${\mathcal {K}}_{\mathrm {s} }$ and ${\mathcal {K}}_{\mathrm {u} }$. On one hand, the players in ${\mathcal {K}}_{\mathrm {s} }$ are satisfied. On the other hand, players in ${\mathcal {K}}_{\mathrm {u} }$ are unsatisfied. If players in the set ${\mathcal {K}}_{\mathrm {u} }$ cannot be satisfied by any of its actions given the actions of all the other players, these players are not interested in changing its current action. This implies that action profiles that satisfy this condition are also equilibria. This is because none of the players is particularly interested in changing their current actions, even those that are unsatisfied. This reasoning led to another solution concept known as generalized satisfaction equilibrium (GSE). This generalization is proposed in the context of a novel game formulation, namely the generalized satisfaction form. 
[9] Definition: [Generalized Satisfaction Form] A game in generalized satisfaction form is described by a tuple $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace g_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$, where, the set ${\mathcal {K}}=\lbrace 1,\ldots ,K\rbrace \subset \mathrm {N} $, with $0<K<+\infty $, represents the set of players; the set ${\mathcal {A}}_{k}$, with $k\in {\mathcal {K}}$ and $0<|{\mathcal {A}}_{k}|<+\infty $, represents the set of actions that player $k$ can play; and the preference mapping $g_{k}:\prod _{j\in {\mathcal {K}}\setminus \lbrace k\rbrace }\triangle \left({\mathcal {A}}_{j}\right)\rightarrow 2^{\triangle \left({\mathcal {A}}_{k}\right)}$, determines the set of probability mass functions (mixed strategies) with support ${\mathcal {A}}_{k}$ that satisfy player $k$ given the mixed strategies adopted by all the other players. The generalized satisfaction equilibrium is defined as follows. Definition: [Generalized Satisfaction Equilibrium (GSE)[9]] The mixed strategy profile ${\boldsymbol {\pi }}^{*}\in \triangle \left({\mathcal {A}}_{1}\right)\times \ldots \times \triangle \left({\mathcal {A}}_{K}\right)$ is a generalized satisfaction equilibrium of the game in generalized satisfaction form $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace g_{k}\right\rbrace _{k\in {\mathcal {K}}}\right)$ if there exists a partition of the set ${\mathcal {K}}$ formed by the sets ${\mathcal {K}}_{\mathrm {s} }$ and ${\mathcal {K}}_{\mathrm {u} }$ and the following holds: (i) For all $k\in {\mathcal {K}}_{\mathrm {s} }$, ${\boldsymbol {\pi }}_{k}\in g_{k}\left({\boldsymbol {\pi }}_{-k}\right)$; and (ii)For all $k\in {\mathcal {K}}_{\mathrm {u} }$, $g_{k}\left({\boldsymbol {\pi }}_{-k}\right)=\emptyset .$ Note that the GSE boils down to the notion of $\epsilon $-SE of the game in satisfaction form $\left({\mathcal {K}},\left\lbrace {\mathcal {A}}_{k}\right\rbrace _{k\in {\mathcal {K}}},\left\lbrace {\bar {f}}_{k}\right\rbrace _{k\in {\mathcal {K}}}\right),$ when, ${\mathcal {K}}_{\mathrm {u} }=\emptyset $ and for all $k\in {\mathcal {K}}$, the correspondence $g_{k}$ is chosen to be $g({\boldsymbol {a}}_{-k})={\bar {\bar {f}}}_{k}\left({\boldsymbol {\pi }}_{-k}^{*}\right),$ with $\epsilon >0$. Similarly, the GSE boils down to the notion of SE in mixed strategies when $\epsilon =0$ and ${\mathcal {K}}_{\mathrm {u} }=\emptyset $. Finally, note that any SE is a GSE, but the converse is not true. References 1. Ross, S.; Chaib-draa, B. (May 2006). "Satisfaction Equilibrium: Achieving Cooperation in Incomplete Information Games". Proceedings of the Canadian Conference on Artificial Intelligence. Ottawa, ON, Canada. doi:10.1007/11766247_6. 2. Perlaza, S.; Tembine, H.; Lasaulce, S.; Debbah, M. (April 2012). "Quality-Of-Service Provisioning in Decentralized Networks: A Satisfaction Equilibrium Approach". IEEE Journal of Selected Topics in Signal Processing. 6 (2): 104–116. arXiv:1112.1730. Bibcode:2012ISTSP...6..104P. doi:10.1109/JSTSP.2011.2180507. S2CID 9567688. 3. Elhammouti, H.; Sabir, E.; Benjillali, M.; Echabbi, L.; Tembine, H. (September 2017). "Self-Organized Connected Objects: Rethinking QoS Provisioning for IoT Services". IEEE Communications Magazine. 55 (9): 41–47. doi:10.1109/MCOM.2017.1600614. S2CID 27329276. 4. Southwell, R.; Chen, X.; Huang, J. (March 2014). "Quality of Service Games for Spectrum Sharing". IEEE Journal on Selected Areas in Communications. 
32 (3): 589–600. arXiv:1310.2354. doi:10.1109/JSAC.2014.1403008. S2CID 9227076. 5. Promponas, P.; Tsiropoulou, E-E.; Papavassiliou, S. (May 2021). "Rethinking Power Control in Wireless Networks: The Perspective of Satisfaction Equilibrium". IEEE Transactions on Control of Network Systems. 8 (4): 1680–1691. doi:10.1109/TCNS.2021.3078123. S2CID 236728675. 6. Promponas, P.; Pelekis, C.; Tsiropoulou, E-E.; Papavassiliou, S. (July 2021). "Games in Normal and Satisfaction Form for Efficient Transmission Power Allocation Under Dual 5G Wireless Multiple Access Paradigm". IEEE/ACM Transactions on Networking. 29 (6): 2574–2587. doi:10.1109/TNET.2021.3095351. S2CID 237965568. 7. Sanjab, A.; Saad, W. (July 2016). "Data Injection Attacks on Smart Grids With Multiple Adversaries: A Game-Theoretic Perspective". IEEE Transactions on Smart Grid. 7 (4): 2038–2049. arXiv:1604.00118. doi:10.1109/TSG.2016.2550218. S2CID 14309194. 8. Debreu, G. (October 1952). "A Social Equilibrium Existence Theorem" (PDF). Proceedings of the National Academy of Sciences of the United States of America. 38 (10): 886–893. Bibcode:1952PNAS...38..886D. doi:10.1073/pnas.38.10.886. PMC 1063675. PMID 16589195. 9. Goonewardena, M.; Perlaza, S.; Yadav, A.; Ajib, W. (June 2017). "Generalized Satisfaction Equilibrium for Service-Level Provisioning in Wireless Networks". IEEE Transactions on Communications. 65 (6): 2427–2437. doi:10.1109/TCOMM.2017.2662701. S2CID 25391577.
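Returning to the equilibrium-selection discussion above, here is a hedged sketch (toy game and costs invented for illustration) in which two pure satisfaction equilibria exist and per-action costs single out the efficient one.

```python
# Sketch: efficient satisfaction equilibrium (ESE) selection by total cost.
# Toy two-player game: each player is satisfied only when both choose the
# same action, so (0, 0) and (1, 1) are both satisfaction equilibria.
from itertools import product

actions = [0, 1]

def f1(a2):
    return {a2}

def f2(a1):
    return {a1}

# Invented per-action costs c_k(a_k) in [0, 1].
c1 = {0: 0.2, 1: 0.5}
c2 = {0: 0.3, 1: 0.1}

ses = [(a1, a2) for a1, a2 in product(actions, actions)
       if a1 in f1(a2) and a2 in f2(a1)]
ese = min(ses, key=lambda profile: c1[profile[0]] + c2[profile[1]])

print("satisfaction equilibria:", ses)            # [(0, 0), (1, 1)]
print("efficient SE (lowest total cost):", ese)   # (0, 0)
```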
Satisfiability In mathematical logic, a formula is satisfiable if it is true under some assignment of values to its variables. For example, the formula $x+3=y$ is satisfiable because it is true when $x=3$ and $y=6$, while the formula $x+1=x$ is not satisfiable over the integers. The dual concept to satisfiability is validity; a formula is valid if every assignment of values to its variables makes the formula true. For example, $x+3=3+x$ is valid over the integers, but $x+3=y$ is not. Formally, satisfiability is studied with respect to a fixed logic defining the syntax of allowed symbols, such as first-order logic, second-order logic or propositional logic. Rather than being syntactic, however, satisfiability is a semantic property because it relates to the meaning of the symbols, for example, the meaning of $+$ in a formula such as $x+1=x$. Formally, we define an interpretation (or model) to be an assignment of values to the variables and an assignment of meaning to all other non-logical symbols, and a formula is said to be satisfiable if there is some interpretation which makes it true.[1] While this allows non-standard interpretations of symbols such as $+$, one can restrict their meaning by providing additional axioms. The satisfiability modulo theories problem considers satisfiability of a formula with respect to a formal theory, which is a (finite or infinite) set of axioms. Satisfiability and validity are defined for a single formula, but can be generalized to an arbitrary theory or set of formulas: a theory is satisfiable if at least one interpretation makes every formula in the theory true, and valid if every formula is true in every interpretation. For example, theories of arithmetic such as Peano arithmetic are satisfiable because they are true in the natural numbers. This concept is closely related to the consistency of a theory, and in fact is equivalent to consistency for first-order logic, a result known as Gödel's completeness theorem. The negation of satisfiability is unsatisfiability, and the negation of validity is invalidity. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition. The problem of determining whether a formula in propositional logic is satisfiable is decidable, and is known as the Boolean satisfiability problem, or SAT. In general, the problem of determining whether a sentence of first-order logic is satisfiable is not decidable. In universal algebra, equational theory, and automated theorem proving, the methods of term rewriting, congruence closure and unification are used to attempt to decide satisfiability. Whether a particular theory is decidable or not depends whether the theory is variable-free and on other conditions.[2] Reduction of validity to satisfiability For classical logics with negation, it is generally possible to re-express the question of the validity of a formula to one involving satisfiability, because of the relationships between the concepts expressed in the above square of opposition. In particular φ is valid if and only if ¬φ is unsatisfiable, which is to say it is false that ¬φ is satisfiable. Put another way, φ is satisfiable if and only if ¬φ is invalid. For logics without negation, such as the positive propositional calculus, the questions of validity and satisfiability may be unrelated. In the case of the positive propositional calculus, the satisfiability problem is trivial, as every formula is satisfiable, while the validity problem is co-NP complete. 
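As a small, self-contained illustration of this duality (a sketch, not drawn from the references), the following brute-force checker tests satisfiability of a propositional formula by trying every truth assignment and confirms that a formula is valid exactly when its negation is unsatisfiable.

```python
# Sketch: brute-force satisfiability and validity for propositional formulas,
# represented as Python functions of boolean variables.
from itertools import product

def satisfiable(formula, n_vars):
    """True if some assignment of the n_vars variables makes formula true."""
    return any(formula(*v) for v in product([False, True], repeat=n_vars))

def valid(formula, n_vars):
    """True if every assignment makes formula true."""
    return all(formula(*v) for v in product([False, True], repeat=n_vars))

# phi = (p or not p) is valid; psi = (p and not p) is unsatisfiable.
phi = lambda p: p or not p
psi = lambda p: p and not p

print(satisfiable(psi, 1))   # False
print(valid(phi, 1))         # True
# phi is valid if and only if (not phi) is unsatisfiable:
print(valid(phi, 1) == (not satisfiable(lambda p: not phi(p), 1)))  # True
```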
Propositional satisfiability for classical logic Main article: Propositional satisfiability In the case of classical propositional logic, satisfiability is decidable for propositional formulae. In particular, satisfiability is an NP-complete problem, and is one of the most intensively studied problems in computational complexity theory.

Satisfiability in first-order logic For first-order logic (FOL), satisfiability is undecidable. More specifically, it is a co-RE-complete problem and therefore not semidecidable.[3] This fact has to do with the undecidability of the validity problem for FOL. The question of the status of the validity problem was first posed by David Hilbert, as the so-called Entscheidungsproblem. The universal validity of a formula is a semi-decidable problem by Gödel's completeness theorem. If satisfiability were also a semi-decidable problem, then the problem of the existence of counter-models would be too (a formula has counter-models iff its negation is satisfiable). So the problem of logical validity would be decidable, which contradicts the Church–Turing theorem, a result stating the negative answer for the Entscheidungsproblem.

Satisfiability in model theory In model theory, an atomic formula is satisfiable if there is a collection of elements of a structure that render the formula true.[4] If A is a structure, φ is a formula, and a is a collection of elements, taken from the structure, that satisfy φ, then it is commonly written that A ⊧ φ [a] If φ has no free variables, that is, if φ is an atomic sentence, and it is satisfied by A, then one writes A ⊧ φ In this case, one may also say that A is a model for φ, or that φ is true in A. If T is a collection of atomic sentences (a theory) satisfied by A, one writes A ⊧ T

Finite satisfiability A problem related to satisfiability is that of finite satisfiability, which is the question of determining whether a formula admits a finite model that makes it true. For a logic that has the finite model property, the problems of satisfiability and finite satisfiability coincide, as a formula of that logic has a model if and only if it has a finite model. This question is important in the mathematical field of finite model theory. Finite satisfiability and satisfiability need not coincide in general. For instance, consider the first-order logic formula obtained as the conjunction of the following sentences, where $a_{0}$ and $a_{1}$ are constants:
• $R(a_{0},a_{1})$
• $\forall xy(R(x,y)\rightarrow \exists zR(y,z))$
• $\forall xyz(R(y,x)\wedge R(z,x)\rightarrow y=z)$
• $\forall x\neg R(x,a_{0})$
The resulting formula has the infinite model $R(a_{0},a_{1}),R(a_{1},a_{2}),\ldots $, but it can be shown that it has no finite model (starting at the fact $R(a_{0},a_{1})$ and following the chain of $R$ atoms that must exist by the second axiom, the finiteness of a model would require the existence of a loop, which would violate the third and fourth axioms, whether it loops back on $a_{0}$ or on a different element). The computational complexity of deciding satisfiability for an input formula in a given logic may differ from that of deciding finite satisfiability; in fact, for some logics, only one of them is decidable. For classical first-order logic, finite satisfiability is recursively enumerable (in class RE) and undecidable by Trakhtenbrot's theorem applied to the negation of the formula.
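For any fixed finite domain size, finite satisfiability of a first-order sentence can be decided by brute force, which makes the example above easy to probe experimentally. The sketch below (illustrative only, not from the article) enumerates every interpretation of the constants and of the binary relation $R$ on domains of size 1 to 4 and finds that none satisfies all four axioms; of course no such search can rule out all finite sizes, which is why the argument in the text is still needed.

```python
# Brute-force search for finite models of the four axioms above, illustrating
# that this particular conjunction is satisfiable but not finitely satisfiable.
from itertools import product

def has_model_of_size(n):
    """Return True if some structure with domain {0,...,n-1} satisfies all four axioms."""
    domain = range(n)
    pairs = [(x, y) for x in domain for y in domain]
    # Try every interpretation of the constants a0, a1 and of the binary relation R.
    for a0, a1 in product(domain, repeat=2):
        for bits in product([False, True], repeat=len(pairs)):
            R = {p for p, b in zip(pairs, bits) if b}
            if (a0, a1) not in R:                        # R(a0, a1)
                continue
            if any((x, a0) in R for x in domain):        # forall x. not R(x, a0)
                continue
            if not all(any((y, z) in R for z in domain)  # forall x,y. R(x,y) -> exists z. R(y,z)
                       for (x, y) in R):
                continue
            if not all(y == z                            # forall x,y,z. R(y,x) & R(z,x) -> y = z
                       for x in domain for y in domain for z in domain
                       if (y, x) in R and (z, x) in R):
                continue
            return True
    return False

# Size 4 already enumerates 2^16 relations, so the last check takes a few seconds.
for n in range(1, 5):
    print(n, has_model_of_size(n))   # prints False for every size checked
```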
Numerical constraints Further information: Satisfiability modulo theories and Constraint satisfaction problem Numerical constraints often appear in the field of mathematical optimization, where one usually wants to maximize (or minimize) an objective function subject to some constraints. However, leaving aside the objective function, the basic issue of simply deciding whether the constraints are satisfiable can be challenging or undecidable in some settings. The following table summarizes the main cases.

Constraints | over reals | over integers
Linear | PTIME (see linear programming) | NP-complete (see integer programming)
Polynomial | decidable through e.g. cylindrical algebraic decomposition | undecidable (Hilbert's tenth problem)

Table source: Bockmayr and Weispfenning.[5]: 754

For linear constraints, a fuller picture is provided by the following table.

Constraints over: | rationals | integers | natural numbers
Linear equations | PTIME | PTIME | NP-complete
Linear inequalities | PTIME | NP-complete | NP-complete

Table source: Bockmayr and Weispfenning.[5]: 755

See also • 2-satisfiability • Boolean satisfiability problem • Circuit satisfiability • Karp's 21 NP-complete problems • Validity • Constraint satisfaction Notes 1. Boolos, Burgess & Jeffrey 2007, p. 120: "A set of sentences [...] is satisfiable if some interpretation [makes it true].". 2. Franz Baader; Tobias Nipkow (1998). Term Rewriting and All That. Cambridge University Press. pp. 58–92. ISBN 0-521-77920-0. 3. Baier, Christel (2012). "Chapter 1.3 Undecidability of FOL". Lecture Notes — Advanced Logics. Technische Universität Dresden — Institute for Technical Computer Science. pp. 28–32. Archived from the original (PDF) on 14 October 2020. Retrieved 21 July 2012. 4. Wilfrid Hodges (1997). A Shorter Model Theory. Cambridge University Press. p. 12. ISBN 0-521-58713-1. 5. Alexander Bockmayr; Volker Weispfenning (2001). "Solving Numerical Constraints". In John Alan Robinson; Andrei Voronkov (eds.). Handbook of Automated Reasoning Volume I. Elsevier and MIT Press. ISBN 0-444-82949-0. (Elsevier) (MIT Press). References • Boolos, George; Burgess, John; Jeffrey, Richard (2007). Computability and Logic (5th ed.). Cambridge University Press. Further reading • Daniel Kroening; Ofer Strichman (2008). Decision Procedures: An Algorithmic Point of View. Springer Science & Business Media. ISBN 978-3-540-74104-6. • A. Biere; M. Heule; H. van Maaren; T. Walsh, eds. (2009). Handbook of Satisfiability. IOS Press. ISBN 978-1-60750-376-7.
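The difference between the rational and integer columns of the tables above already shows up for a single variable. A toy illustration in Python follows (the specific constraint is made up for this example and is not from the cited sources):

```python
# Illustration of the gap between the rational and integer columns above:
# the constraint 2 <= 4*x <= 3 has a rational solution but no integer one.
from fractions import Fraction

def satisfies(x):
    """Check the toy constraint system 2 <= 4*x and 4*x <= 3."""
    return 2 <= 4 * x <= 3

# Over the rationals a solution can be exhibited exactly.
print(satisfies(Fraction(1, 2)))   # True: x = 1/2 works

# Over the integers, the constraints bound x between 2/4 and 3/4,
# so it suffices to check the integers in [0, 1]; none works.
print(any(satisfies(x) for x in range(0, 2)))   # False
```

For genuine integer programs one would of course use a dedicated solver; the point here is only that the same constraints can be satisfiable over one domain and unsatisfiable over another.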
Wikipedia
Mikio Sato Mikio Sato (Japanese: 佐藤 幹夫, Hepburn: Satō Mikio, 18 April 1928 – 9 January 2023) was a Japanese mathematician known for founding the fields of algebraic analysis, hyperfunctions, and holonomic quantum fields. He was a professor at the Research Institute for Mathematical Sciences in Kyoto.

Mikio Sato. Born: 18 April 1928, Tokyo, Empire of Japan. Died: 9 January 2023 (aged 94), Kyoto, Japan.[1] Alma mater: University of Tokyo (BSc, 1952; PhD, 1963). Known for: Bernstein–Sato polynomials • Sato–Tate conjecture • Algebraic analysis • Holonomic quantum field • Hyperfunction • Prehomogeneous vector space. Awards: Asahi Prize of Science (1969) • Japan Academy Prize (1976) • Person of Cultural Merits (1984) • Rolf Schock Prize in Mathematics (1997) • Wolf Prize (2003). Fields: Mathematics. Institutions: Kyoto University • University of Tokyo • Osaka University. Thesis: Theory of hyperfunctions (1963). Doctoral advisor: Shokichi Iyanaga. Doctoral students: Masaki Kashiwara • Takahiro Kawai.

Biography Born in Tokyo on 18 April 1928,[2] Sato studied at the University of Tokyo, receiving his BSc in 1952 and PhD under Shokichi Iyanaga in 1963.[3][4] He was a professor at Osaka University and the University of Tokyo before moving to the Research Institute for Mathematical Sciences (RIMS) attached to Kyoto University in 1970.[3] He was director of RIMS from 1987 to 1991.[3] His disciples include Masaki Kashiwara, Takahiro Kawai, Tetsuji Miwa, as well as Michio Jimbo, who have been called the "Sato School".[5] Sato died at home in Kyoto on 9 January 2023, aged 94.[6][1]

Research Sato was known for his innovative work in a number of fields, such as prehomogeneous vector spaces and Bernstein–Sato polynomials; and particularly for his hyperfunction theory.[3] This theory initially appeared as an extension of the ideas of distribution theory; it was soon connected to the local cohomology theory of Grothendieck, for which it was an independent realisation in terms of sheaf theory.
Further, it led to the theory of microfunctions and microlocal analysis in linear partial differential equations and Fourier theory, such as for wave fronts, and ultimately to the current developments in D-module theory.[2][7] Part of Sato's hyperfunction theory is the modern theory of holonomic systems: PDEs overdetermined to the point of having finite-dimensional spaces of solutions (algebraic analysis).[3] In theoretical physics, Sato wrote a series of papers in the 1970s with Michio Jimbo and Tetsuji Miwa that developed the theory of holonomic quantum fields.[2] When Sato was awarded the 2002–2003 Wolf Prize in Mathematics, this work was described as "a far-reaching extension of the mathematical formalism underlying the two-dimensional Ising model, and introduced along the way the famous tau functions."[2][3] Sato also contributed basic work to non-linear soliton theory, with the use of Grassmannians of infinite dimension.[3] In number theory, he and John Tate independently posed the Sato–Tate conjecture on L-functions around 1960.[8] Pierre Schapira remarked, "Looking back, 40 years later, we realize that Sato's approach to mathematics is not so different from that of Grothendieck, that Sato did have the incredible temerity to treat analysis as algebraic geometry and was also able to build the algebraic and geometric tools adapted to his problems."[9] Awards and honours Sato received the 1969 Asahi Prize of Science, the 1976 Japan Academy Prize, the 1984 Person of Cultural Merits award of the Japanese Education Ministry, the 1997 Schock Prize, and the 2002–2003 Wolf Prize in Mathematics.[3] Sato was a plenary speaker at the 1983 International Congress of Mathematicians in Warsaw.[3] He was elected a foreign member of the National Academy of Sciences in 1993.[3] Notes 1. "佐藤幹夫氏死去(京都大名誉教授)", 時事通信社, 18 January 2023 2. "Mikio Sato – Biography". MacTutor History of Mathematics archive. University of St Andrews. Retrieved 15 January 2023. 3. Jackson, Allyn (2003). "Sato and Tate Receive 2002–2003 Wolf Prize" (PDF). Notices of the American Mathematical Society. 50 (5): 569–570. 4. Mikio Sato at the Mathematics Genealogy Project 5. McCoy, Barry M. (24 March 2011). "Mikio Sato and Mathematical Physics". Publications of the Research Institute for Mathematical Sciences. 47 (1): 19–28. doi:10.2977/prims/30. ISSN 0034-5318. Retrieved 16 January 2023. 6. "The untimely passing of Professor Emeritus Sato Mikio". Retrieved 13 January 2023., Notice: Research Institute for Mathematical Sciences, Kyoto University (2023/01/13) 7. Kashiwara, Masaki; Kawai, Takahiro (2011). "Professor Mikio Sato and Microlocal Analysis". Publications of the Research Institute for Mathematical Sciences. 47 (1): 11–17. doi:10.2977/PRIMS/29 – via EMS-PH. 8. It is mentioned in J. Tate, Algebraic cycles and poles of zeta functions in the volume (O. F. G. Schilling, editor), Arithmetical Algebraic Geometry, pages 93–110 (1965). 9. Schapira, Pierre (February 2007). "Mikio Sato, a Visionary of Mathematics" (PDF). Notices of the American Mathematical Society. 54 (2): 243–245. Archived from the original (PDF) on 28 September 2020. Retrieved 16 January 2023. 
External links • Schock Prize citation • 1990 Interview in the AMS Notices • Mikio Sato, a Visionary of Mathematics by Pierre Schapira
Wikipedia
Sato–Tate conjecture In mathematics, the Sato–Tate conjecture is a statistical statement about the family of elliptic curves Ep obtained from an elliptic curve E over the rational numbers by reduction modulo almost all prime numbers p. Mikio Sato and John Tate independently posed the conjecture around 1960.

Sato–Tate conjecture. Field: Arithmetic geometry. Conjectured by: Mikio Sato, John Tate. Conjectured in: c. 1960. First proof by: Laurent Clozel, Thomas Barnet-Lamb, David Geraghty, Michael Harris, Nicholas Shepherd-Barron, Richard Taylor. First proof in: 2011.

If Np denotes the number of points on the elliptic curve Ep defined over the finite field with p elements, the conjecture gives an answer to the distribution of the second-order term for Np. By Hasse's theorem on elliptic curves, $N_{p}/p=1+\mathrm {O} (1/\!{\sqrt {p}})\ $ as $p\to \infty $, and the point of the conjecture is to predict how the O-term varies. The original conjecture and its generalization to all totally real fields were proved by Laurent Clozel, Michael Harris, Nicholas Shepherd-Barron, and Richard Taylor under mild assumptions in 2008, and completed by Thomas Barnet-Lamb, David Geraghty, Harris, and Taylor in 2011. Several generalizations to other algebraic varieties and fields are open.

Statement Let E be an elliptic curve defined over the rational numbers without complex multiplication. For a prime number p, define θp as the solution to the equation $p+1-N_{p}=2{\sqrt {p}}\cos \theta _{p}~~(0\leq \theta _{p}\leq \pi ).$ Then, for every two real numbers $\alpha $ and $\beta $ for which $0\leq \alpha <\beta \leq \pi ,$ $\lim _{N\to \infty }{\frac {\#\{p\leq N:\alpha \leq \theta _{p}\leq \beta \}}{\#\{p\leq N\}}}={\frac {2}{\pi }}\int _{\alpha }^{\beta }\sin ^{2}\theta \,d\theta ={\frac {1}{\pi }}\left(\beta -\alpha +\sin(\alpha )\cos(\alpha )-\sin(\beta )\cos(\beta )\right)$

Details By Hasse's theorem on elliptic curves, the ratio ${\frac {(p+1)-N_{p}}{2{\sqrt {p}}}}={\frac {a_{p}}{2{\sqrt {p}}}}$ is between -1 and 1. Thus it can be expressed as cos θ for an angle θ; in geometric terms there are two eigenvalues accounting for the remainder and with the denominator as given they are complex conjugate and of absolute value 1. The Sato–Tate conjecture, when E does not have complex multiplication,[1] states that the probability measure of θ is proportional to $\sin ^{2}\theta \,d\theta .$[2] This is due to Mikio Sato and John Tate (independently, and around 1960, published somewhat later).[3]

Proof In 2008, Clozel, Harris, Shepherd-Barron, and Taylor published a proof of the Sato–Tate conjecture for elliptic curves over totally real fields satisfying a certain condition: of having multiplicative reduction at some prime,[4] in a series of three joint papers.[5][6][7] Further results are conditional on improved forms of the Arthur–Selberg trace formula. Harris has a conditional proof of a result for the product of two elliptic curves (not isogenous) following from such a hypothetical trace formula.[8] In 2011, Barnet-Lamb, Geraghty, Harris, and Taylor proved a generalized version of the Sato–Tate conjecture for an arbitrary non-CM holomorphic modular form of weight greater than or equal to two,[9] by improving the potential modularity results of previous papers.[10] The prior issues involved with the trace formula were solved by Michael Harris,[11] and Sug Woo Shin.[12][13] In 2015, Richard Taylor was awarded the Breakthrough Prize in Mathematics "for numerous breakthrough results in (...)
the Sato–Tate conjecture."[14] Generalisations There are generalisations, involving the distribution of Frobenius elements in Galois groups involved in the Galois representations on étale cohomology. In particular there is a conjectural theory for curves of genus n > 1. Under the random matrix model developed by Nick Katz and Peter Sarnak,[15] there is a conjectural correspondence between (unitarized) characteristic polynomials of Frobenius elements and conjugacy classes in the compact Lie group USp(2n) = Sp(n). The Haar measure on USp(2n) then gives the conjectured distribution, and the classical case is USp(2) = SU(2). Refinements There are also more refined statements. The Lang–Trotter conjecture (1976) of Serge Lang and Hale Trotter states the asymptotic number of primes p with a given value of ap,[16] the trace of Frobenius that appears in the formula. For the typical case (no complex multiplication, trace ≠ 0) their formula states that the number of p up to X is asymptotically $c{\sqrt {X}}/\log X\ $ with a specified constant c. Neal Koblitz (1988) provided detailed conjectures for the case of a prime number q of points on Ep, motivated by elliptic curve cryptography.[17] In 1999, Chantal David and Francesco Pappalardi proved an averaged version of the Lang–Trotter conjecture.[18][19] References 1. In the case of an elliptic curve with complex multiplication, the Hasse–Weil L-function is expressed in terms of a Hecke L-function (a result of Max Deuring). The known analytic results on these answer even more precise questions. 2. To normalise, put 2/π in front. 3. It is mentioned in J. Tate, Algebraic cycles and poles of zeta functions in the volume (O. F. G. Schilling, editor), Arithmetical Algebraic Geometry, pages 93–110 (1965). 4. That is, for some p where E has bad reduction (and at least for elliptic curves over the rational numbers there are some such p), the type in the singular fibre of the Néron model is multiplicative, rather than additive. In practice this is the typical case, so the condition can be thought of as mild. In more classical terms, the result applies where the j-invariant is not integral. 5. Taylor, Richard (2008). "Automorphy for some l-adic lifts of automorphic mod l Galois representations. II". Publ. Math. Inst. Hautes Études Sci. 108: 183–239. CiteSeerX 10.1.1.116.9791. doi:10.1007/s10240-008-0015-2. MR 2470688. 6. Clozel, Laurent; Harris, Michael; Taylor, Richard (2008). "Automorphy for some l-adic lifts of automorphic mod l Galois representations". Publ. Math. Inst. Hautes Études Sci. 108: 1–181. CiteSeerX 10.1.1.143.9755. doi:10.1007/s10240-008-0016-1. MR 2470687. 7. Harris, Michael; Shepherd-Barron, Nicholas; Taylor, Richard (2010), "A family of Calabi–Yau varieties and potential automorphy", Annals of Mathematics, 171 (2): 779–813, doi:10.4007/annals.2010.171.779, MR 2630056 8. See Carayol's Bourbaki seminar of 17 June 2007 for details. 9. Barnet-Lamb, Thomas; Geraghty, David; Harris, Michael; Taylor, Richard (2011). "A family of Calabi–Yau varieties and potential automorphy. II". Publ. Res. Inst. Math. Sci. 47 (1): 29–98. doi:10.2977/PRIMS/31. MR 2827723. 10. Theorem B of Barnet-Lamb et al. 2009 harvnb error: no target: CITEREFBarnet-LambGeraghtyHarrisTaylor2009 (help) 11. Harris, M. (2011). "An introduction to the stable trace formula". In Clozel, L.; Harris, M.; Labesse, J.-P.; Ngô, B. C. (eds.). The stable trace formula, Shimura varieties, and arithmetic applications. Vol. I: Stabilization of the trace formula. Boston: International Press. pp. 
3–47. ISBN 978-1-57146-227-5. 12. Shin, Sug Woo (2011). "Galois representations arising from some compact Shimura varieties". Annals of Mathematics. 173 (3): 1645–1741. doi:10.4007/annals.2011.173.3.9. 13. See p. 71 and Corollary 8.9 of Barnet-Lamb et al. 2009 harvnb error: no target: CITEREFBarnet-LambGeraghtyHarrisTaylor2009 (help) 14. "Richard Taylor, Institute for Advanced Study: 2015 Breakthrough Prize in Mathematics". 15. Katz, Nicholas M. & Sarnak, Peter (1999), Random matrices, Frobenius Eigenvalues, and Monodromy, Providence, RI: American Mathematical Society, ISBN 978-0-8218-1017-0 16. Lang, Serge; Trotter, Hale F. (1976), Frobenius Distributions in GL2 extensions, Berlin: Springer-Verlag, ISBN 978-0-387-07550-1 17. Koblitz, Neal (1988), "Primality of the number of points on an elliptic curve over a finite field", Pacific Journal of Mathematics, 131 (1): 157–165, doi:10.2140/pjm.1988.131.157, MR 0917870. 18. "Concordia Mathematician Recognized for Research Excellence". Canadian Mathematical Society. 2013-04-15. Archived from the original on 2017-02-01. Retrieved 2018-01-15. 19. David, Chantal; Pappalardi, Francesco (1999-01-01). "Average Frobenius distributions of elliptic curves". International Mathematics Research Notices. 199 (4): 165–183. External links • Report on Barry Mazur giving context • Michael Harris notes, with statement (PDF) • La Conjecture de Sato–Tate [d'après Clozel, Harris, Shepherd-Barron, Taylor], Bourbaki seminar June 2007 by Henri Carayol (PDF) • Video introducing Elliptic curves and its relation to Sato-Tate conjecture, Imperial College London, 2014 (Last 15 minutes)
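The sin²θ distribution predicted by the conjecture can be observed numerically for a specific curve. The following Python sketch (illustrative only; the curve y² = x³ + x + 1 is chosen here purely as an example of a curve without complex multiplication, and the prime bound is arbitrary) counts points over F_p by brute force, forms the angles θp, and compares the empirical frequency of θp in [π/3, 2π/3] with the Sato–Tate prediction of roughly 0.609.

```python
# Empirical Sato-Tate statistics for the curve y^2 = x^3 + A*x + B with A = B = 1.
from math import acos, sqrt, pi

A, B = 1, 1
LIMIT = 2000                     # use all good primes p < LIMIT

def primes(limit):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def theta_p(p):
    """Angle theta_p defined by a_p = p + 1 - N_p = 2*sqrt(p)*cos(theta_p)."""
    squares = {}                  # number of square roots of each residue mod p
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    affine = sum(squares.get((x * x * x + A * x + B) % p, 0) for x in range(p))
    n_p = affine + 1              # plus the point at infinity
    a_p = p + 1 - n_p
    return acos(a_p / (2 * sqrt(p)))

disc = 4 * A ** 3 + 27 * B ** 2   # skip p = 2 and primes of bad reduction
angles = [theta_p(p) for p in primes(LIMIT) if p > 2 and disc % p != 0]

# Empirical frequency of theta_p in [pi/3, 2*pi/3] versus the Sato-Tate measure,
# (2/pi) * integral of sin^2(theta) over that interval = 1/3 + sqrt(3)/(2*pi).
frequency = sum(pi / 3 <= t <= 2 * pi / 3 for t in angles) / len(angles)
predicted = 1 / 3 + sqrt(3) / (2 * pi)
print(round(frequency, 3), round(predicted, 3))
```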
Wikipedia
Ruth Lyttle Satter Prize in Mathematics The Ruth Lyttle Satter Prize in Mathematics, also called the Satter Prize, is one of twenty-one prizes given out by the American Mathematical Society (AMS).[1] It is presented biennially in recognition of an outstanding contribution to mathematics research by a woman in the previous six years.[2] The award was funded in 1990 using a donation from Joan Birman, in memory of her sister, Ruth Lyttle Satter,[3] who worked primarily in biological sciences, and was a proponent for equal opportunities for women in science.[4] First awarded in 1991, the award is intended to "honor [Satter's] commitment to research and to encourage women in science".[5] The winner is selected by the council of the AMS, based on the recommendation of a selection committee.[5] The prize is awarded at the Joint Mathematics Meetings during odd-numbered years, and has always carried a modest cash reward. Since 2003, the prize has been $5,000,[5][6] while from 1997 to 2001, the prize came with $1,200,[7][8] and prior to that it was $4,000.[9] If a joint award is made, the prize money is split between the recipients.[7]

Ruth Lyttle Satter Prize in Mathematics. Awarded for: outstanding contribution to mathematics research by a woman in the previous six years. Presented by: American Mathematical Society. Reward(s): $5,000. First awarded: 1991. Currently held by: Kaisa Matomäki (2021). Website: www.ams.org/profession/prizes-awards/ams-prizes/satter-prize

As of 2019, the award has been given 15 times, to 16 different individuals. Dusa McDuff was the first recipient of the award, for her work on symplectic geometry.[10] A joint award was made in 2001, when Karen E. Smith and Sijue Wu shared the award.[7] The 2013 prize winner was Maryam Mirzakhani, who, in 2014, was the first woman to be awarded the Fields Medal. This is considered to be the highest honor a mathematician can receive.[11][12] She won both awards for her work on "the geometry of Riemann surfaces and their moduli spaces".[13] The most recent winner is Kaisa Matomäki, who was awarded the prize in 2021 for her "work (much of it joint with Maksym Radziwiłł) opening up the field of multiplicative functions in short intervals in a completely unexpected and very fruitful way".[14] The Association for Women in Science has a similarly titled award, the Ruth Satter Memorial Award, which is a cash prize of $1,000 for "an outstanding graduate student who interrupted her education for at least 3 years to raise a family".[15][16]

Recipients Satter Prize recipients and rationale[17] Year Recipient Rationale 1991 Dusa McDuff "for her outstanding work during the past five years on symplectic geometry" 1993 Lai-Sang Young "for her leading role in the investigation of the statistical (or ergodic) properties of dynamical systems" 1995 Sun-Yung Alice Chang "for her deep contributions to the study of partial differential equations on Riemannian manifolds and in particular for her work on extremal problems in spectral geometry and the compactness of isospectral metrics within a fixed conformal class on a compact 3-manifold" 1997 Ingrid Daubechies "for her deep and beautiful analysis of wavelets and their applications" 1999 Bernadette Perrin-Riou "for her number theoretical research on p-adic L-functions and Iwasawa theory" 2001 Karen E.
Smith "for her outstanding work in commutative algebra" Sijue Wu "for her work on a long-standing problem in the water wave equation" 2003 Abigail Thompson "for her outstanding work in 3-dimensional topology" 2005 Svetlana Jitomirskaya "for her pioneering work on non-perturbative quasiperiodic localization, in particular for results in her papers (1) Metal–insulator transition for the almost Mathieu operator, Ann. of Math. (2) 150 (1999), no. 3, 1159–1175, and (2) with J. Bourgain, Absolutely continuous spectrum for 1D quasiperiodic operators, Invent. Math. 148 (2002), no. 3, 453–463" 2007 Claire Voisin "for her deep contributions to algebraic geometry, and in particular for her recent solutions to two long-standing open problems: the Kodaira problem (On the homotopy types of compact Kähler and complex projective manifolds, Inventiones Mathematicae, 157 (2004), no. 2, 329–343) and Green's conjecture (Green's canonical syzygy conjecture for generic curves of odd genus, Compositio Mathematica, 141 (2005), no. 5, 1163–1190; and Green's generic syzygy conjecture for curves of even genus lying on a K3 surface, Journal of the European Mathematical Society, 4 (2002), no. 4, 363–404)" 2009 Laure Saint-Raymond "for her fundamental work on the hydrodynamic limits of the Boltzmann equation in the kinetic theory of gases" 2011 Amie Wilkinson "for her remarkable contributions to the field of ergodic theory of partially hyperbolic dynamical systems" 2013 Maryam Mirzakhani "for her deep contributions to the theory of moduli spaces of Riemann surfaces" 2015 Hee Oh "for her fundamental contributions to the fields of dynamics on homogeneous spaces, discrete subgroups of Lie groups, and applications to number theory" 2017 Laura DeMarco "for her fundamental contributions to complex dynamics, potential theory, and the emerging field of arithmetic dynamics" 2019 Maryna Viazovska "for her groundbreaking work in discrete geometry and her spectacular solution to the sphere-packing problem in dimension eight." 2021 Kaisa Matomäki "for her work (much of it joint with Maksym Radziwiłł) opening up the field of multiplicative functions in short intervals in a completely unexpected and very fruitful way..." 2023 Panagiota Daskalopoulos "for groundbreaking work in the study of ancient solutions to geometric evolution equations" Nataša Šešum See also • List of mathematics awards References 1. "Prizes and Awards". American Mathematical Society. Retrieved September 14, 2017. 2. "Ruth Lyttle Satter Prize in Mathematics". American Mathematical Society. Retrieved September 13, 2017. 3. Case, Bettye; Leggett, Anne, eds. (2005). Complexities: Women in Mathematics. Princeton University Press. p. 97. ISBN 0-691-11462-5. 4. "Educational Awards: Ruth Satter". Association for Women in Science. Archived from the original on January 2, 2013. Retrieved September 14, 2017. 5. "2017 Ruth Lyttle Satter Prize" (PDF). Notices of the AMS. American Mathematical Society. 64 (4): 316. April 2017. 6. "2003 Satter Prize" (PDF). Notices of the AMS. American Mathematical Society. 50 (4): 474. April 2003. 7. "2001 Ruth Lyttle Satter Prize" (PDF). Notices of the AMS. American Mathematical Society. 48 (4): 411–12. April 2001. 8. "1997 Satter Prize" (PDF). Notices of the AMS. American Mathematical Society. 44 (3): 348. March 1997. 9. "1995 Satter Prize" (PDF). Notices of the AMS. American Mathematical Society. 42 (4): 459. April 1995. 10. Morrow, Charlene; Peri, Teri, eds. (1998). Notable Women in Mathematics: A Biographical Dictionary. 
Westport, Connecticut: Greenwood Press. p. 140. ISBN 0-313-29131-4. 11. "Reclusive Russian turns down math world's highest honour". Canadian Broadcasting Corporation. AP. August 22, 2006. Retrieved September 13, 2017. 12. "Maryam Mirzakhani, first woman to win maths' Fields Medal, dies". BBC News. July 15, 2017. Retrieved September 13, 2017. 13. "Maryam Mirzakhani, First Woman and Iranian to Win Fields Medal, Dies at 40". The Wire. July 15, 2017. Retrieved September 13, 2017. 14. "2021 Ruth Lyttle Satter Prize" (PDF). Notices of the American Mathematical Society. 68: 626–627. 15. "AAS Committee on the Status of Women". AASWOMEN. January 2004. Retrieved September 14, 2017. 16. Austin, Ruth, ed. (1996). The Grants Register 1997. New York: Macmillan Press. p. 189. ISBN 978-0-312-15898-9. 17. "Prizes and Awards". American Mathematical Society. Retrieved September 13, 2017.
Wikipedia
Saturated array In experiments in which additional factors are not likely to interact with any of the other factors, a saturated array can be used. In a saturated array, a controllable factor is substituted for the interaction of two or more by-products. Using a saturated array, a two-factor test matrix could be used to test three factors. Using the saturated array allows three factors to be tested in four tests rather than in eight, as would be required by a standard orthogonal array.
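The three-factors-in-four-runs arrangement described above can be written out explicitly. In the sketch below (illustrative; the ±1 coding and factor names A, B, C are conventional rather than taken from the article), two columns form a full two-level factorial and the third factor is assigned to the column that would otherwise estimate their interaction, which is exactly the substitution a saturated array makes.

```python
# A saturated two-level design: three factors in four runs.
# Columns A and B form a full 2x2 factorial; the third factor C is assigned
# to the column that would otherwise estimate the A*B interaction.
from itertools import product

runs = []
for a, b in product((-1, +1), repeat=2):
    c = a * b                     # C is confounded (aliased) with the A*B interaction
    runs.append((a, b, c))

print(" run   A   B   C")
for i, (a, b, c) in enumerate(runs, start=1):
    print(f"  {i:2d}  {a:+d}  {b:+d}  {c:+d}")

# Each factor column is balanced and orthogonal to the other two, which is what
# lets three main effects be estimated from only four runs, at the price of
# confounding C with the A*B interaction.
```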
Wikipedia
Saturated family In mathematics, specifically in functional analysis, a family ${\mathcal {G}}$ of subsets of a topological vector space (TVS) $X$ is said to be saturated if ${\mathcal {G}}$ contains a non-empty subset of $X$ and if for every $G\in {\mathcal {G}},$ the following conditions all hold: 1. ${\mathcal {G}}$ contains every subset of $G$; 2. the union of any finite collection of elements of ${\mathcal {G}}$ is an element of ${\mathcal {G}}$; 3. for every scalar $a,$ ${\mathcal {G}}$ contains $aG$; 4. the closed convex balanced hull of $G$ belongs to ${\mathcal {G}}.$[1] Definitions If ${\mathcal {S}}$ is any collection of subsets of $X$ then the smallest saturated family containing ${\mathcal {S}}$ is called the saturated hull of ${\mathcal {S}}.$[1] The family ${\mathcal {G}}$ is said to cover $X$ if the union $\bigcup _{G\in {\mathcal {G}}}G$ is equal to $X$; it is total if the linear span of this set is a dense subset of $X.$[1] Examples The intersection of an arbitrary family of saturated families is a saturated family.[1] Since the power set of $X$ is saturated, for any given non-empty family ${\mathcal {G}}$ of subsets of $X$ containing at least one non-empty set, the saturated hull of ${\mathcal {G}}$ is well-defined.[2] Note that a saturated family of subsets of $X$ that covers $X$ is a bornology on $X.$ The set of all bounded subsets of a topological vector space is a saturated family. See also • Topology of uniform convergence • Topological vector lattice • Vector lattice – Partially ordered vector space, ordered as a lattice References 1. Schaefer & Wolff 1999, pp. 79–82. 2. Schaefer & Wolff 1999, pp. 79–88. • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
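The last example above, that the bounded subsets of a topological vector space form a saturated family, can be verified directly in the special case of a normed space. The short check below is a standard argument written out for convenience; it is not taken from the cited references.

```latex
% Verification that the bounded subsets of a normed space (X, \|\cdot\|) form a
% saturated family.  Let G be bounded, say G \subseteq \{x : \|x\| \le r\}.
\begin{itemize}
  \item Every subset of $G$ is contained in the same ball, hence bounded.
  \item If $G_1,\dots,G_n$ are bounded by $r_1,\dots,r_n$, then $G_1\cup\dots\cup G_n$
        is bounded by $\max_i r_i$.
  \item For a scalar $a$, the set $aG \subseteq \{x : \|x\| \le |a|\,r\}$ is bounded.
  \item The ball $\{x : \|x\| \le r\}$ is closed, convex and balanced, so the closed
        convex balanced hull of $G$ is contained in it and is therefore bounded.
\end{itemize}
```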
Wikipedia
Saturated measure In mathematics, a measure is said to be saturated if every locally measurable set is also measurable.[1] A set $E$, not necessarily measurable, is said to be a locally measurable set if for every measurable set $A$ of finite measure, $E\cap A$ is measurable. $\sigma $-finite measures and measures arising as the restriction of outer measures are saturated. References 1. Bogachev, Vladimir (2007). Measure Theory Volume 2. Springer. ISBN 978-3-540-34513-8.
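The claim that $\sigma $-finite measures are saturated has a short proof, sketched below for convenience (a standard argument, not taken from the cited reference).

```latex
% Why a sigma-finite measure is saturated.  Write X = \bigcup_{n} X_n with each
% X_n measurable and \mu(X_n) < \infty.  Let E be locally measurable.
\[
  E \;=\; E \cap X \;=\; E \cap \Big(\bigcup_{n} X_n\Big) \;=\; \bigcup_{n}\,(E \cap X_n).
\]
% Each E \cap X_n is measurable because X_n has finite measure and E is locally
% measurable; a countable union of measurable sets is measurable, so E is measurable.
```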
Wikipedia
Saturated model In mathematical logic, and particularly in its subfield model theory, a saturated model M is one that realizes as many complete types as may be "reasonably expected" given its size. For example, an ultrapower model of the hyperreals is $\aleph _{1}$-saturated, meaning that every descending nested sequence of internal sets has a nonempty intersection.[1] Definition Let κ be a finite or infinite cardinal number and M a model in some first-order language. Then M is called κ-saturated if for all subsets A ⊆ M of cardinality less than κ, the model M realizes all complete types over A. The model M is called saturated if it is |M|-saturated where |M| denotes the cardinality of M. That is, it realizes all complete types over sets of parameters of size less than |M|. According to some authors, a model M is called countably saturated if it is $\aleph _{1}$-saturated; that is, it realizes all complete types over countable sets of parameters.[2] According to others, it is countably saturated if it is countable and saturated.[3] Motivation The seemingly more intuitive notion—that all complete types of the language are realized—turns out to be too weak (and is appropriately named weak saturation, which is the same as 1-saturation). The difference lies in the fact that many structures contain elements that are not definable (for example, any transcendental element of R is, by definition of the word, not definable in the language of fields). However, they still form a part of the structure, so we need types to describe relationships with them. Thus we allow sets of parameters from the structure in our definition of types. This argument allows us to discuss specific features of the model that we may otherwise miss—for example, a bound on a specific increasing sequence cn can be expressed as realizing the type {x ≥ cn : n ∈ ω}, which uses countably many parameters. If the sequence is not definable, this fact about the structure cannot be described using the base language, so a weakly saturated structure may not bound the sequence, while an ℵ1-saturated structure will. The reason we only require parameter sets that are strictly smaller than the model is trivial: without this restriction, no infinite model is saturated. Consider a model M, and the type {x ≠ m : m ∈ M}. Each finite subset of this type is realized in the (infinite) model M, so by compactness it is consistent with M, but is trivially not realized. Any definition that is universally unsatisfied is useless; hence the restriction. Examples Saturated models exist for certain theories and cardinalities: • (Q, <)—the set of rational numbers with their usual ordering—is saturated. Intuitively, this is because any type consistent with the theory is implied by the order type; that is, the order the variables come in tells you everything there is to know about their role in the structure. • (R, <)—the set of real numbers with their usual ordering—is not saturated. For example, take the type (in one variable x) that contains the formula $\textstyle {x>-{\frac {1}{n}}}$ for every natural number n, as well as the formula $\textstyle {x<0}$. This type uses ω different parameters from R. Every finite subset of the type is realized on R by some real x, so by compactness the type is consistent with the structure, but it is not realized, as that would imply an upper bound to the sequence −1/n that is less than 0 (its least upper bound). Thus (R,<) is not ω1-saturated, and not saturated. 
However, it is ω-saturated, for essentially the same reason as Q—every finite type is given by the order type, which if consistent, is always realized, because of the density of the order. • A dense totally ordered set without endpoints is an $\eta _{\alpha }$ set if and only if it is $\aleph _{\alpha }$-saturated. • The countable random graph, with the only non-logical symbol being the edge existence relation, is also saturated, because any complete type is isolated (implied) by the finite subgraph consisting of the variables and parameters used to define the type. Both the theory of Q and the theory of the countable random graph can be shown to be ω-categorical through the back-and-forth method. This can be generalized as follows: the unique model of cardinality κ of a countable κ-categorical theory is saturated. However, the statement that every model has a saturated elementary extension is not provable in ZFC. In fact, this statement is equivalent to the existence of a proper class of cardinals κ such that $\kappa ^{<\kappa }=\kappa $. The latter identity is equivalent to $\kappa =\lambda ^{+}=2^{\lambda }$ for some λ, or κ is strongly inaccessible. Relationship to prime models The notion of saturated model is dual to the notion of prime model in the following way: let T be a countable theory in a first-order language (that is, a set of mutually consistent sentences in that language) and let P be a prime model of T. Then P admits an elementary embedding into any other model of T. The equivalent notion for saturated models is that any "reasonably small" model of T is elementarily embedded in a saturated model, where "reasonably small" means cardinality no larger than that of the model in which it is to be embedded. Any saturated model is also homogeneous. However, while for countable theories there is a unique prime model, saturated models are necessarily specific to a particular cardinality. Given certain set-theoretic assumptions, saturated models (albeit of very large cardinality) exist for arbitrary theories. For λ-stable theories, saturated models of cardinality λ exist. Notes 1. Goldblatt 1998 2. Morley, Michael (1963). "On theories categorical in uncountable powers". Proceedings of the National Academy of Sciences of the United States of America. 49 (2): 213–216. Bibcode:1963PNAS...49..213M. doi:10.1073/pnas.49.2.213. PMC 299780. PMID 16591050. 3. Chang and Keisler 1990 References • Chang, C. C.; Keisler, H. J. Model theory. Third edition. Studies in Logic and the Foundations of Mathematics, 73. North-Holland Publishing Co., Amsterdam, 1990. xvi+650 pp. ISBN 0-444-88054-2 • R. Goldblatt (1998). Lectures on the hyperreals. An introduction to nonstandard analysis. Springer. • Marker, David (2002). Model Theory: An Introduction. New York: Springer-Verlag. ISBN 0-387-98760-6 • Poizat, Bruno; (translation: Klein, Moses) (2000), A Course in Model Theory, New York: Springer-Verlag. ISBN 0-387-98655-3 • Sacks, Gerald E. (1972), Saturated model theory, W. A.
Benjamin, Inc., Reading, Mass., MR 0398817
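The example of (Q, <) discussed above comes down to the fact that over a finite parameter set every complete 1-type just specifies a position relative to the parameters, and density and unboundedness provide a witness for each position. The small Python sketch below (illustrative only; the helper names are made up) enumerates those positions and exhibits a rational witness for each.

```python
# Illustration of why (Q, <) is saturated: over any finite set of parameters,
# a complete 1-type only says where x sits relative to the parameters, and
# every such position is realized by a rational number.
from fractions import Fraction

def witnesses(parameters):
    """Return one rational realizing each 1-type over the given finite parameter set."""
    ps = sorted(set(parameters))
    realized = [(f"x < {ps[0]}", ps[0] - 1)]                     # below everything
    for left, right in zip(ps, ps[1:]):
        realized.append((f"{left} < x < {right}", (left + right) / 2))  # in each gap
    realized.append((f"x > {ps[-1]}", ps[-1] + 1))               # above everything
    realized.extend((f"x = {p}", p) for p in ps)                 # equal to a parameter
    return realized

for description, witness in witnesses([Fraction(0), Fraction(1, 2), Fraction(3)]):
    print(f"{description:20s} realized by {witness}")
```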
Wikipedia
Saturated set In mathematics, particularly in the subfields of set theory and topology, a set $C$ is said to be saturated with respect to a function $f:X\to Y$ if $C$ is a subset of $f$'s domain $X$ and if whenever $f$ sends two points $c\in C$ and $x\in X$ to the same value then $x$ belongs to $C$ (that is, if $f(x)=f(c)$ then $x\in C$). Said more succinctly, the set $C$ is called saturated if $C=f^{-1}(f(C)).$ In topology, a subset of a topological space $(X,\tau )$ is saturated if it is equal to an intersection of open subsets of $X.$ In a T1 space every set is saturated. Definition Preliminaries Let $f:X\to Y$ be a map. Given any subset $S\subseteq X,$ define its image under $f$ to be the set: $f(S):=\{f(s)~:~s\in S\}$ and define its preimage or inverse image under $f$ to be the set: $f^{-1}(S):=\{x\in X~:~f(x)\in S\}.$ Given $y\in Y,$ the fiber of $f$ over $y$ is defined to be the preimage: $f^{-1}(y):=f^{-1}(\{y\})=\{x\in X~:~f(x)=y\}.$ Any preimage of a single point in $f$'s codomain $Y$ is referred to as a fiber of $f.$ Saturated sets A set $C$ is called $f$-saturated and is said to be saturated with respect to $f$ if $C$ is a subset of $f$'s domain $X$ and if any of the following equivalent conditions are satisfied:[1] 1. $C=f^{-1}(f(C)).$ 2. There exists a set $S$ such that $C=f^{-1}(S).$ • Any such set $S$ necessarily contains $f(C)$ as a subset and moreover, it will also necessarily satisfy the equality $f(C)=S\cap \operatorname {Im} f,$ where $\operatorname {Im} f:=f(X)$ denotes the image of $f.$ 3. If $c\in C$ and $x\in X$ satisfy $f(x)=f(c),$ then $x\in C.$ 4. If $y\in Y$ is such that the fiber $f^{-1}(y)$ intersects $C$ (that is, if $f^{-1}(y)\cap C\neq \varnothing $), then this entire fiber is necessarily a subset of $C$ (that is, $f^{-1}(y)\subseteq C$). 5. For every $y\in Y,$ the intersection $C\cap f^{-1}(y)$ is equal to the empty set $\varnothing $ or to $f^{-1}(y).$ Examples Let $f:X\to Y$ be any function. If $S$ is any set then its preimage $C:=f^{-1}(S)$ under $f$ is necessarily an $f$-saturated set. In particular, every fiber of a map $f$ is an $f$-saturated set. The empty set $\varnothing =f^{-1}(\varnothing )$ and the domain $X=f^{-1}(Y)$ are always saturated. Arbitrary unions of saturated sets are saturated, as are arbitrary intersections of saturated sets. Properties Let $S$ and $T$ be any sets and let $f:X\to Y$ be any function. If $S$ or $T$ is $f$-saturated then $f(S\cap T)~=~f(S)\cap f(T).$ If $T$ is $f$-saturated then $f(S\setminus T)~=~f(S)\setminus f(T)$ where note, in particular, that no requirements or conditions were placed on the set $S.$ If $\tau $ is a topology on $X$ and $f:X\to Y$ is any map then set $\tau _{f}$ of all $U\in \tau $ that are saturated subsets of $X$ forms a topology on $X.$ If $Y$ is also a topological space then $f:(X,\tau )\to Y$ is continuous (respectively, a quotient map) if and only if the same is true of $f:\left(X,\tau _{f}\right)\to Y.$ See also • List of set identities and relations – Equalities for combinations of sets References 1. Monk 1969, pp. 24–54. • G. Gierz; K. H. Hofmann; K. Keimel; J. D. Lawson; M. Mislove & D. S. Scott (2003). "Continuous Lattices and Domains". Encyclopedia of Mathematics and its Applications. Vol. 93. Cambridge University Press. ISBN 0-521-80338-1. • Monk, James Donald (1969). Introduction to Set Theory (PDF). International series in pure and applied mathematics. New York: McGraw-Hill. ISBN 978-0-07-042715-0. OCLC 1102. • Munkres, James R. (2000). Topology (Second ed.). 
Sauer–Shelah lemma

In combinatorial mathematics and extremal set theory, the Sauer–Shelah lemma states that every family of sets with small VC dimension consists of a small number of sets. It is named after Norbert Sauer and Saharon Shelah, who published it independently of each other in 1972.[1][2] The same result was also published slightly earlier and again independently, by Vladimir Vapnik and Alexey Chervonenkis, after whom the VC dimension is named.[3] In his paper containing the lemma, Shelah gives credit also to Micha Perles,[2] and for this reason the lemma has also been called the Perles–Sauer–Shelah lemma.[4]

Buzaglo et al. call this lemma "one of the most fundamental results on VC-dimension",[4] and it has applications in many areas. Sauer's motivation was in the combinatorics of set systems, while Shelah's was in model theory and that of Vapnik and Chervonenkis was in statistics. It has also been applied in discrete geometry[5] and graph theory.[6]

Definitions and statement

If $\textstyle {\mathcal {F}}=\{S_{1},S_{2},\dots \}$ is a family of sets and $T$ is a set, then $T$ is said to be shattered by ${\mathcal {F}}$ if every subset of $T$ (including the empty set and $T$ itself) can be obtained as the intersection $T\cap S_{i}$ of $T$ with some set $S_{i}$ in the family. The VC dimension of ${\mathcal {F}}$ is the largest cardinality of a set shattered by ${\mathcal {F}}$.

In terms of these definitions, the Sauer–Shelah lemma states that if ${\mathcal {F}}$ is a family of sets, the union of ${\mathcal {F}}$ has $n$ elements, and $|{\mathcal {F}}|>\sum _{i=0}^{k-1}{\binom {n}{i}},$ then ${\mathcal {F}}$ shatters a set of size $k$. Equivalently, if the VC dimension of ${\mathcal {F}}$ is $k,$ then ${\mathcal {F}}$ can consist of at most $\sum _{i=0}^{k}{\binom {n}{i}}=O(n^{k})$ sets, as expressed using big O notation.

The bound of the lemma is tight: Let the family ${\mathcal {F}}$ be composed of all subsets of $\{1,2,\dots ,n\}$ with size less than $k$. Then the size of ${\mathcal {F}}$ is exactly $ \sum _{i=0}^{k-1}{\binom {n}{i}}$ but it does not shatter any set of size $k$.[7]

The number of shattered sets

A strengthening of the Sauer–Shelah lemma, due to Pajor (1985), states that every finite set family ${\mathcal {F}}$ shatters at least $|{\mathcal {F}}|$ sets.[8] This immediately implies the Sauer–Shelah lemma, because only $ \sum _{i=0}^{k-1}{\tbinom {n}{i}}$ of the subsets of an $n$-item universe have cardinality less than $k$. Thus, when $|{\mathcal {F}}|>\sum _{i=0}^{k-1}{\tbinom {n}{i}},$ there are not enough small sets to be shattered, so one of the shattered sets must have cardinality at least $k$. For a restricted type of shattered set, called an order-shattered set, the number of shattered sets always equals the cardinality of the set family.[9]
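On small instances the quantities appearing in the lemma can be computed by brute force. The following Python sketch is an illustration only, not an efficient algorithm; it uses the tight family described above (all subsets of a 5-element universe of size less than $k=3$), enumerates the shattered sets, reports the VC dimension, and checks both the Sauer–Shelah bound and Pajor's strengthening.

```python
from itertools import combinations
from math import comb

def shattered_sets(family, universe):
    """Return all subsets T of the universe that are shattered by the family."""
    shattered = []
    for r in range(len(universe) + 1):
        for T in combinations(universe, r):
            T = frozenset(T)
            traces = {T & S for S in family}   # all intersections T ∩ S_i
            if len(traces) == 2 ** len(T):     # every subset of T is realized
                shattered.append(T)
    return shattered

universe = frozenset(range(1, 6))              # {1, ..., 5}, so n = 5
k = 3
# Tight example from the text: all subsets of the universe of size less than k.
family = [frozenset(c) for r in range(k) for c in combinations(universe, r)]

sh = shattered_sets(family, universe)
vc = max(len(T) for T in sh)

print(len(family), sum(comb(5, i) for i in range(k)))  # |F| = sum_{i<k} C(5,i) = 16
print(vc)                                              # VC dimension is k - 1 = 2
print(len(sh) >= len(family))                          # Pajor: at least |F| shattered sets
```

For this family the inequality of Pajor's strengthening holds with equality: exactly the 16 subsets of size at most 2 are shattered.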
Proof

Pajor's variant of the Sauer–Shelah lemma may be proved by mathematical induction; the proof has variously been credited to Noga Alon[10] or to Ron Aharoni and Ron Holzman.[9]

Base: Every family of only one set shatters the empty set.

Step: Assume the lemma is true for all families of size less than $|{\mathcal {F}}|$ and let ${\mathcal {F}}$ be a family of two or more sets. Let $x$ be an element that belongs to some but not all of the sets in ${\mathcal {F}}$. Split ${\mathcal {F}}$ into two subfamilies, of the sets that contain $x$ and the sets that do not contain $x$. By the induction assumption, these two subfamilies shatter two collections of sets whose sizes add to at least $|{\mathcal {F}}|$. None of these shattered sets contain $x$, since a set that contains $x$ cannot be shattered by a family in which all sets contain $x$ or all sets do not contain $x$. Some of the shattered sets may be shattered by both subfamilies. When a set $S$ is shattered by only one of the two subfamilies, it contributes one unit both to the number of shattered sets of the subfamily and to the number of shattered sets of ${\mathcal {F}}$. When a set $S$ is shattered by both subfamilies, both $S$ and $S\cup \{x\}$ are shattered by ${\mathcal {F}}$, so $S$ contributes two units to the number of shattered sets of the subfamilies and of ${\mathcal {F}}$. Therefore, the number of shattered sets of ${\mathcal {F}}$ is at least equal to the number shattered by the two subfamilies of ${\mathcal {F}}$, which is at least $|{\mathcal {F}}|$.

A different proof of the Sauer–Shelah lemma in its original form, by Péter Frankl and János Pach, is based on linear algebra and the inclusion–exclusion principle.[5][7]

Applications

The original application of the lemma, by Vapnik and Chervonenkis, was in showing that every probability distribution can be approximated (with respect to a family of events of a given VC dimension) by a finite set of sample points whose cardinality depends only on the VC dimension of the family of events. In this context, there are two important notions of approximation, both parameterized by a number $\varepsilon $: a set $S$ of samples, together with a probability distribution on $S$, is said to be an $\varepsilon $-approximation of the original distribution if the probability of each event with respect to $S$ differs from its original probability by at most $\varepsilon $. A set $S$ of (unweighted) samples is said to be an $\varepsilon $-net if every event with probability at least $\varepsilon $ includes at least one point of $S$. An $\varepsilon $-approximation must also be an $\varepsilon $-net but not necessarily vice versa.

Vapnik and Chervonenkis used the lemma to show that set systems of VC dimension $d$ always have $\varepsilon $-approximations of cardinality $O({\tfrac {d}{\varepsilon ^{2}}}\log {\tfrac {d}{\varepsilon }}).$ Later authors including Haussler & Welzl (1987)[11] and Komlós, Pach & Woeginger (1992)[12] similarly showed that there always exist $\varepsilon $-nets of cardinality $O({\tfrac {d}{\varepsilon }}\log {\tfrac {1}{\varepsilon }})$, and more precisely of cardinality at most[5] ${\tfrac {d}{\varepsilon }}\ln {\tfrac {1}{\varepsilon }}+{\tfrac {2d}{\varepsilon }}\ln \ln {\tfrac {1}{\varepsilon }}+{\tfrac {6d}{\varepsilon }}.$

The main idea of the proof of the existence of small $\varepsilon $-nets is to choose a random sample $x$ of cardinality $ O({\tfrac {d}{\varepsilon }}\log {\tfrac {1}{\varepsilon }})$ and a second independent random sample $y$ of cardinality $ O({\tfrac {d}{\varepsilon }}\log ^{2}{\tfrac {1}{\varepsilon }})$, and to bound the probability that $x$ is missed by some large event $E$ by the probability that $x$ is missed and simultaneously the intersection of $y$ with $E$ is larger than its median value. For any particular $E$, the probability that $x$ is missed while $y$ is larger than its median is very small, and the Sauer–Shelah lemma (applied to $x\cup y$) shows that only a small number of distinct events $E$ need to be considered, so by the union bound, with nonzero probability, $x$ is an $\varepsilon $-net.[5]
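For a finite ground set with the uniform (empirical) distribution, the $\varepsilon $-net condition itself is straightforward to verify. The sketch below is a hypothetical illustration (the ground set, the interval events, and the sample are invented for the example and are unrelated to the constructions in the cited proofs): it checks whether a sample hits every event of probability at least $\varepsilon $.

```python
from fractions import Fraction

def is_epsilon_net(sample, events, ground_set, eps):
    """Check the ε-net condition under the uniform distribution on ground_set:
    every event with probability >= eps must contain at least one sample point."""
    n = len(ground_set)
    for E in events:
        prob = Fraction(len(E), n)
        if prob >= eps and not (E & set(sample)):
            return False
    return True

# Hypothetical range space: ground set {0,...,19}, events are all intervals [a, b).
ground = set(range(20))
events = [set(range(a, b)) for a in range(20) for b in range(a + 1, 21)]

eps = Fraction(1, 4)      # events of probability >= 1/4 contain at least 5 points
sample = {2, 7, 12, 17}   # one sample point in every window of 5 consecutive integers

print(is_epsilon_net(sample, events, ground, eps))     # True
print(is_epsilon_net({0, 1, 2}, events, ground, eps))  # False: [10, 15) is missed
```

The example uses intervals because their VC dimension is small (two), so by the bounds above an $\varepsilon $-net of size roughly $1/\varepsilon $ is expected to exist, as the four-point sample illustrates.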
In turn, $\varepsilon $-nets and $\varepsilon $-approximations, and the likelihood that a random sample of large enough cardinality has these properties, have important applications in machine learning, in the area of probably approximately correct learning.[13] In computational geometry, they have been applied to range searching,[11] derandomization,[14] and approximation algorithms.[15][16]

Kozma & Moran (2013) use generalizations of the Sauer–Shelah lemma to prove results in graph theory such as that the number of strong orientations of a given graph is sandwiched between its numbers of connected and 2-edge-connected subgraphs.[6]

See also

• Growth function

References

1. Sauer, N. (1972), "On the density of families of sets", Journal of Combinatorial Theory, Series A, 13: 145–147, doi:10.1016/0097-3165(72)90019-2, MR 0307902.
2. Shelah, Saharon (1972), "A combinatorial problem; stability and order for models and theories in infinitary languages", Pacific Journal of Mathematics, 41: 247–261, doi:10.2140/pjm.1972.41.247, MR 0307903, archived from the original on 2013-10-05.
3. Vapnik, V. N.; Červonenkis, A. Ja. (1971), "The uniform convergence of frequencies of the appearance of events to their probabilities", Akademija Nauk SSSR, 16: 264–279, MR 0288823.
4. Buzaglo, Sarit; Pinchasi, Rom; Rote, Günter (2013), "Topological hypergraphs", in Pach, János (ed.), Thirty Essays on Geometric Graph Theory, Springer, pp. 71–81, doi:10.1007/978-1-4614-0110-0_6.
5. Pach, János; Agarwal, Pankaj K. (1995), Combinatorial geometry, Wiley-Interscience Series in Discrete Mathematics and Optimization, New York: John Wiley & Sons Inc., p. 247, doi:10.1002/9781118033203, ISBN 0-471-58890-3, MR 1354145.
6. Kozma, László; Moran, Shay (2013), "Shattering, Graph Orientations, and Connectivity", Electronic Journal of Combinatorics, 20 (3), P44, arXiv:1211.1319, Bibcode:2012arXiv1211.1319K, MR 3118952.
7. Gowers, Timothy (July 31, 2008), "Dimension arguments in combinatorics", Gowers's Weblog: Mathematics related discussions, Example 3.
8. Pajor, Alain (1985), Sous-espaces $l_{1}^{n}$ des espaces de Banach, Travaux en Cours [Works in Progress], vol. 16, Paris: Hermann, ISBN 2-7056-6021-6, MR 0903247. As cited by Anstee, Rónyai & Sali (2002).
9. Anstee, R. P.; Rónyai, Lajos; Sali, Attila (2002), "Shattering news", Graphs and Combinatorics, 18 (1): 59–73, doi:10.1007/s003730200003, MR 1892434.
10. Kalai, Gil (September 28, 2008), "Extremal Combinatorics III: Some Basic Theorems", Combinatorics and More.
11. Haussler, David; Welzl, Emo (1987), "$\varepsilon $-nets and simplex range queries", Discrete and Computational Geometry, 2 (2): 127–151, doi:10.1007/BF02187876, MR 0884223.
12. Komlós, János; Pach, János; Woeginger, Gerhard (1992), "Almost tight bounds for $\varepsilon $-nets", Discrete and Computational Geometry, 7 (2): 163–173, doi:10.1007/BF02187833, MR 1139078.
13. Blumer, Anselm; Ehrenfeucht, Andrzej; Haussler, David; Warmuth, Manfred K. (1989), "Learnability and the Vapnik–Chervonenkis dimension", Journal of the ACM, 36 (4): 929–965, doi:10.1145/76359.76371, MR 1072253.
(1990), "A deterministic view of random sampling and its use in geometry", Combinatorica, 10 (3): 229–249, doi:10.1007/BF02122778, MR 1092541. 15. Brönnimann, H.; Goodrich, M. T. (1995), "Almost optimal set covers in finite VC-dimension", Discrete and Computational Geometry, 14 (4): 463–479, doi:10.1007/BF02570718, MR 1360948. 16. Har-Peled, Sariel (2011), "On complexity, sampling, and $\varepsilon $-nets and $\varepsilon $-samples", Geometric approximation algorithms, Mathematical Surveys and Monographs, vol. 173, Providence, RI: American Mathematical Society, pp. 61–85, ISBN 978-0-8218-4911-8, MR 2760023.